What would be the MOST cost effective solution for a Disaster Recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours?
Warm site
Hot site
Mirror site
Cold site
A warm site is the most cost-effective solution for a disaster recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours. A DR site is a backup facility that can be used to restore the normal operation of the organization’s IT systems and infrastructure after a disruption or disaster. A DR site can have different levels of readiness and functionality, depending on the organization’s recovery objectives and budget. The main types of DR sites are hot sites, warm sites, cold sites, and mirror sites, which differ in how quickly they can take over operations and how much they cost to build and maintain.
A warm site is the most cost effective solution for a disaster recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours, because it can provide a balance between the recovery time and the recovery cost. A warm site can enable the organization to resume its critical functions and operations within a reasonable time frame, without spending too much on the DR site maintenance and operation. A warm site can also provide some flexibility and scalability for the organization to adjust its recovery strategies and resources according to its needs and priorities.
The other options are not the most cost-effective solutions for this requirement, because they are either too costly or too slow for the organization’s recovery objectives and budget. A hot site is too costly, because it requires a significant investment in DR site equipment, software, and services, plus ongoing operational and maintenance costs; it is better suited to systems that cannot be unavailable for more than a few minutes or hours, or that have very high availability and performance requirements. A mirror site is also too costly, because it duplicates the entire primary site, with the same hardware, software, data, and applications, kept online and synchronized at all times; it is better suited to systems that cannot tolerate any downtime or data loss, or that have very strict compliance and regulatory requirements. A cold site is too slow, because it requires substantial time and effort for installation, configuration, and restoration, and relies on other sources of backup data and applications; it is better suited to systems that can be unavailable for days or weeks, or that have low criticality and priority.
What should be the FIRST action to protect the chain of evidence when a desktop computer is involved?
Take the computer to a forensic lab
Make a copy of the hard drive
Start documenting
Turn off the computer
Making a copy of the hard drive should be the first action to protect the chain of evidence when a desktop computer is involved. A chain of evidence, also known as a chain of custody, is a process that documents and preserves the integrity and authenticity of the evidence collected from a crime scene, such as a desktop computer. A chain of evidence should include information such as who collected the evidence, when and where it was collected, how it was transported and stored, and who has had custody of or access to it at each step.
Making a copy of the hard drive should be the first action to protect the chain of evidence when a desktop computer is involved, because it ensures that the original hard drive is not altered, damaged, or destroyed during the forensic analysis, and that the copy can be used as a reliable and admissible source of evidence. Making the copy should involve using a write blocker, a device or software tool that prevents any modification or deletion of the data on the hard drive, and generating a hash value, a unique fixed-length identifier that can verify the integrity and consistency of the data on the drive.
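As a rough illustration, the following Python sketch shows how a hash value can verify that a working copy of a drive image matches the original. The file names are hypothetical, and in practice a hardware write blocker and a dedicated imaging tool would be used for the acquisition itself.

```python
# Minimal sketch: verifying a forensic copy of a drive image by comparing hash values.
# The file names below are hypothetical; a real acquisition would use a hardware
# write blocker and a dedicated imaging tool.
import hashlib

def hash_image(path: str, algorithm: str = "sha256", chunk_size: int = 1024 * 1024) -> str:
    """Compute a hash of a disk image, reading it in chunks to limit memory use."""
    digest = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

original_hash = hash_image("original_drive.img")   # hash taken at acquisition time
copy_hash = hash_image("working_copy.img")         # hash of the analyst's working copy

# Matching hashes indicate the copy is bit-for-bit identical to the original evidence.
print("Copy verified" if original_hash == copy_hash else "Hash mismatch - copy is not identical")
```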
The other options are not the first actions to protect the chain of evidence when a desktop computer is involved, but rather actions that should be done after, or alongside, making a copy of the hard drive. Taking the computer to a forensic lab should happen after the copy is made, so that the computer is transported and stored in a secure and controlled environment and the forensic analysis is conducted by qualified and authorized personnel. Documentation should begin alongside making the copy, so that the chain of evidence is maintained and recorded throughout the forensic process and the evidence can be traced and verified. Turning off the computer should also happen after the copy is made, so that the computer can be powered down, disconnected from any network or device, and protected from further damage or tampering.
Which of the following types of business continuity tests includes assessment of resilience to internal and external risks without endangering live operations?
Walkthrough
Simulation
Parallel
White box
Simulation is the type of business continuity test that includes assessment of resilience to internal and external risks without endangering live operations. Business continuity is the ability of an organization to maintain or resume its critical functions and operations in the event of a disruption or disaster. Business continuity testing is the process of evaluating and validating the effectiveness and readiness of the business continuity plan (BCP) and the disaster recovery plan (DRP) through various methods and scenarios. Business continuity testing can provide several benefits, such as identifying gaps and weaknesses in the plans, validating recovery objectives and procedures, and familiarizing staff with their roles and responsibilities during a disruption.
There are different types of business continuity tests, depending on the scope, purpose, and complexity of the test. The common types are walkthrough (tabletop), simulation, parallel, and full interruption tests.
Simulation is the type of business continuity test that includes assessment of resilience to internal and external risks without endangering live operations, because it can simulate various types of risks, such as natural, human, or technical, and assess how the organization and its systems can cope and recover from them, without actually causing any harm or disruption to the live operations. Simulation can also help to identify and mitigate any potential risks that might affect the live operations, and to improve the resilience and preparedness of the organization and its systems.
The other options are not types of business continuity tests that assess resilience to internal and external risks without endangering live operations. A walkthrough is a review and discussion of the BCP and DRP, without any actual testing or practice, so it does not assess resilience to risks. A parallel test does not endanger live operations, because it keeps them running while the alternate site or system is activated and operated, but it exercises recovery capability rather than simulating internal and external risk scenarios. White box is not a business continuity test type at all; it is a software testing approach in which the tester has full knowledge of the internal structure of the system under test. A full interruption test, by contrast, does endanger live operations, because it shuts them down and transfers them to the alternate site or system.
Recovery strategies of a Disaster Recovery Plan (DRP) MUST be aligned with which of the following?
Hardware and software compatibility issues
Applications’ criticality and downtime tolerance
Budget constraints and requirements
Cost/benefit analysis and business objectives
Recovery strategies of a Disaster Recovery Plan (DRP) must be aligned with the cost/benefit analysis and business objectives. A DRP is the part of business continuity planning that focuses on restoring the normal operation of the organization’s IT systems and infrastructure after a disruption or disaster. A DRP should include components such as recovery objectives (for example, the Recovery Time Objective and Recovery Point Objective), recovery strategies, roles and responsibilities, recovery procedures, and plan testing and maintenance.
Recovery strategies of a DRP must be aligned with the cost/benefit analysis and business objectives, because this ensures that the DRP is feasible and suitable, and that it can achieve the desired outcomes in a cost-effective and efficient manner. A cost/benefit analysis is a technique that compares the costs and benefits of different recovery strategies and determines the optimal one that provides the best value for money. A business objective is a goal or target that the organization wants to achieve through its IT systems and infrastructure, such as increasing productivity, profitability, or customer satisfaction. A recovery strategy that is aligned with the cost/benefit analysis and business objectives can help to prioritize the recovery of the most critical systems, allocate resources where they deliver the most value, and meet the organization’s recovery time and recovery point objectives without overspending.
The other options are not the factors that the recovery strategies of a DRP must be aligned with, but rather factors that should be considered or addressed when developing or implementing the recovery strategies of a DRP. Hardware and software compatibility issues are factors that should be considered when developing the recovery strategies of a DRP, because they can affect the functionality and interoperability of the IT systems and infrastructure, and may require additional resources or adjustments to resolve them. Applications’ criticality and downtime tolerance are factors that should be addressed when implementing the recovery strategies of a DRP, because they can determine the priority and urgency of the recovery for different applications, and may require different levels of recovery objectives and resources. Budget constraints and requirements are factors that should be considered when developing the recovery strategies of a DRP, because they can limit the availability and affordability of the IT resources and funds for the recovery, and may require trade-offs or compromises to balance them.
A Business Continuity Plan/Disaster Recovery Plan (BCP/DRP) will provide which of the following?
Guaranteed recovery of all business functions
Minimization of the need for decision making during a crisis
Insurance against litigation following a disaster
Protection from loss of organization resources
Minimization of the need for decision making during a crisis is the main benefit that a Business Continuity Plan/Disaster Recovery Plan (BCP/DRP) will provide. A BCP/DRP is a set of policies, procedures, and resources that enable an organization to continue or resume its critical functions and operations in the event of a disruption or disaster. A BCP/DRP can provide several benefits, such as reducing downtime and financial losses, protecting critical functions and resources, providing clear guidance and procedures during a crisis, and helping meet legal, regulatory, and contractual obligations.
Minimization of the need for decision making during a crisis is the main benefit that a BCP/DRP will provide, because it can ensure that the organization and its staff have a clear and consistent guidance and direction on how to respond and act during a disruption or disaster, and avoid any confusion, uncertainty, or inconsistency that might worsen the situation or impact. A BCP/DRP can also help to reduce the stress and pressure on the organization and its staff during a crisis, and increase their confidence and competence in executing the plans.
The other options are not benefits that a BCP/DRP will provide, but rather unrealistic or incorrect expectations of one. Guaranteed recovery of all business functions is not a benefit, because it is not feasible to recover every business function after a disruption or disaster, especially a severe or prolonged one; a BCP/DRP prioritizes and recovers the most critical functions and may suspend or defer the less critical ones. Insurance against litigation following a disaster is not a benefit, because a BCP/DRP does not shield the organization from legal or regulatory consequences, especially if the disruption was caused by the organization’s negligence or misconduct; it can only help mitigate legal and regulatory risks and support required reporting to the relevant authorities. Protection from loss of organization resources is not a benefit, because a BCP/DRP cannot prevent damage to or destruction of assets during a physical or natural disaster; it can only help restore or replace lost or damaged resources, and some costs or losses will still be incurred.
A Virtual Machine (VM) environment has five guest Operating Systems (OS) and provides strong isolation. What MUST an administrator review to audit a user’s access to data files?
Host VM monitor audit logs
Guest OS access controls
Host VM access controls
Guest OS audit logs
Guest OS audit logs are what an administrator must review to audit a user’s access to data files in a VM environment that has five guest OS and provides strong isolation. A VM environment is a system that allows multiple virtual machines (VMs) to run on a single physical machine, each with its own OS and applications. A VM environment can provide several benefits, such as better utilization of hardware resources, isolation between workloads, and easier provisioning, migration, and recovery of systems.
A guest OS is the OS that runs on a VM, which is different from the host OS that runs on the physical machine. A guest OS can have its own security controls and mechanisms, such as access controls, encryption, authentication, and audit logs. Audit logs are records that capture and store the information about the events and activities that occur within a system or a network, such as the access and usage of the data files. Audit logs can provide a reactive and detective layer of security by enabling the monitoring and analysis of the system or network behavior, and facilitating the investigation and response of the incidents.
Guest OS audit logs are what an administrator must review to audit a user’s access to data files in a VM environment that has five guest OS and provides strong isolation, because they can provide the most accurate and relevant information about the user’s actions and interactions with the data files on the VM. Guest OS audit logs can also help the administrator to identify and report any unauthorized or suspicious access or disclosure of the data files, and to recommend or implement any corrective or preventive actions.
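As a simple illustration of what reviewing such logs might look like, the sketch below filters audit log entries for one user’s file accesses. The log path and record format are assumptions made for the example; real guest OS audit formats (for example, Linux auditd records or Windows Security event logs) differ.

```python
# Minimal sketch: filtering guest OS audit log entries for one user's file accesses.
# The log format, path, and user name are hypothetical.
import re

LOG_PATH = "/var/log/guest_os_audit.log"          # assumed location
PATTERN = re.compile(r"user=(?P<user>\S+)\s+action=(?P<action>\S+)\s+file=(?P<file>\S+)")

def access_events(path: str, user: str):
    """Yield (action, file) tuples for every audited file access by the given user."""
    with open(path, encoding="utf-8") as log:
        for line in log:
            match = PATTERN.search(line)
            if match and match.group("user") == user:
                yield match.group("action"), match.group("file")

for action, filename in access_events(LOG_PATH, "jsmith"):
    print(f"{action:10} {filename}")
```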
The other options are not what an administrator must review to audit a user’s access to data files in a VM environment that has five guest OS and provides strong isolation, but rather what an administrator might review for other purposes or aspects. Host VM monitor audit logs are records that capture and store the information about the events and activities that occur on the host VM monitor, which is the software or hardware component that manages and controls the VMs on the physical machine. Host VM monitor audit logs can provide information about the performance, status, and configuration of the VMs, but they cannot provide information about the user’s access to data files on the VMs. Guest OS access controls are rules and mechanisms that regulate and restrict the access and permissions of the users and processes to the resources and services on the guest OS. Guest OS access controls can provide a proactive and preventive layer of security by enforcing the principles of least privilege, separation of duties, and need to know. However, guest OS access controls are not what an administrator must review to audit a user’s access to data files, but rather what an administrator must configure and implement to protect the data files. Host VM access controls are rules and mechanisms that regulate and restrict the access and permissions of the users and processes to the VMs on the physical machine. Host VM access controls can provide a granular and dynamic layer of security by defining and assigning the roles and permissions according to the organizational structure and policies. However, host VM access controls are not what an administrator must review to audit a user’s access to data files, but rather what an administrator must configure and implement to protect the VMs.
Which of the following is of GREATEST assistance to auditors when reviewing system configurations?
Change management processes
User administration procedures
Operating System (OS) baselines
System backup documentation
Operating System (OS) baselines are of greatest assistance to auditors when reviewing system configurations. OS baselines are standard or reference configurations that define the desired and secure state of an OS, including the settings, parameters, patches, and updates. OS baselines can provide several benefits, such as a consistent and repeatable standard for configuring systems, simpler hardening and patch management, and faster detection of unauthorized or accidental configuration changes.
OS baselines are of greatest assistance to auditors when reviewing system configurations, because they can enable the auditors to evaluate and verify the current and actual state of the OS against the desired and secure state of the OS. OS baselines can also help the auditors to identify and report any gaps, issues, or risks in the OS configurations, and to recommend or implement any corrective or preventive actions.
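To illustrate the idea, the following sketch compares a system’s current settings against a baseline and reports any drift. The setting names and values are invented for the example; a real baseline (for instance, a CIS Benchmark) contains far more parameters.

```python
# Minimal sketch: comparing a system's current settings against an OS baseline.
# Setting names and values are hypothetical.
baseline = {
    "password_min_length": 14,
    "account_lockout_threshold": 5,
    "smbv1_enabled": False,
    "audit_logon_events": True,
}

current = {
    "password_min_length": 8,
    "account_lockout_threshold": 5,
    "smbv1_enabled": True,
    "audit_logon_events": True,
}

# Report every setting that drifts from the desired, secure state.
deviations = {
    key: (current.get(key), expected)
    for key, expected in baseline.items()
    if current.get(key) != expected
}

for key, (actual, expected) in deviations.items():
    print(f"{key}: expected {expected!r}, found {actual!r}")
```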
The other options are not of greatest assistance to auditors when reviewing system configurations, but rather of assistance for other purposes or aspects. Change management processes are processes that ensure that any changes to the system configurations are planned, approved, implemented, and documented in a controlled and consistent manner. Change management processes can improve the security and reliability of the system configurations by preventing or reducing the errors, conflicts, or disruptions that might occur due to the changes. However, change management processes are not of greatest assistance to auditors when reviewing system configurations, because they do not define the desired and secure state of the system configurations, but rather the procedures and controls for managing the changes. User administration procedures are procedures that define the roles, responsibilities, and activities for creating, modifying, deleting, and managing the user accounts and access rights. User administration procedures can enhance the security and accountability of the user accounts and access rights by enforcing the principles of least privilege, separation of duties, and need to know. However, user administration procedures are not of greatest assistance to auditors when reviewing system configurations, because they do not define the desired and secure state of the system configurations, but rather the rules and tasks for administering the users. System backup documentation is documentation that records the information and details about the system backup processes, such as the backup frequency, type, location, retention, and recovery. System backup documentation can increase the availability and resilience of the system by ensuring that the system data and configurations can be restored in case of a loss or damage. However, system backup documentation is not of greatest assistance to auditors when reviewing system configurations, because it does not define the desired and secure state of the system configurations, but rather the backup and recovery of the system configurations.
Which of the following is a PRIMARY benefit of using a formalized security testing report format and structure?
Executive audiences will understand the outcomes of testing and most appropriate next steps for corrective actions to be taken
Technical teams will understand the testing objectives, testing strategies applied, and business risk associated with each vulnerability
Management teams will understand the testing objectives and reputational risk to the organization
Technical and management teams will better understand the testing objectives, results of each test phase, and potential impact levels
Technical and management teams will better understand the testing objectives, results of each test phase, and potential impact levels is the primary benefit of using a formalized security testing report format and structure. Security testing is a process that involves evaluating and verifying the security posture, vulnerabilities, and threats of a system or a network, using various methods and techniques, such as vulnerability assessment, penetration testing, code review, and compliance checks. Security testing can provide several benefits, such as identifying and prioritizing vulnerabilities before attackers can exploit them, verifying that security controls work as intended, and supporting compliance with security requirements and standards.
A security testing report is a document that summarizes and communicates the findings and recommendations of the security testing process to the relevant stakeholders, such as the technical and management teams. A security testing report can have various formats and structures, depending on the scope, purpose, and audience of the report. However, a formalized security testing report format and structure is one that follows a standard and consistent template, such as the one proposed by the National Institute of Standards and Technology (NIST) in Special Publication 800-115, Technical Guide to Information Security Testing and Assessment. A formalized security testing report typically includes components such as an executive summary, an introduction covering the scope and objectives, the testing methodology, the results of each test phase with associated risk or impact ratings, recommendations for remediation, and a conclusion.
Technical and management teams will better understand the testing objectives, results of each test phase, and potential impact levels is the primary benefit of using a formalized security testing report format and structure, because it can ensure that the security testing report is clear, comprehensive, and consistent, and that it provides the relevant and useful information for the technical and management teams to make informed and effective decisions and actions regarding the system or network security.
The other options are not the primary benefits of using a formalized security testing report format and structure, but rather secondary or specific benefits for different audiences or purposes. Executive audiences will understand the outcomes of testing and most appropriate next steps for corrective actions to be taken is a benefit of using a formalized security testing report format and structure, but it is not the primary benefit, because it is more relevant for the executive summary component of the report, which is a brief and high-level overview of the report, rather than the entire report. Technical teams will understand the testing objectives, testing strategies applied, and business risk associated with each vulnerability is a benefit of using a formalized security testing report format and structure, but it is not the primary benefit, because it is more relevant for the methodology and results components of the report, which are more technical and detailed parts of the report, rather than the entire report. Management teams will understand the testing objectives and reputational risk to the organization is a benefit of using a formalized security testing report format and structure, but it is not the primary benefit, because it is more relevant for the introduction and conclusion components of the report, which are more contextual and strategic parts of the report, rather than the entire report.
Which of the following could cause a Denial of Service (DoS) against an authentication system?
Encryption of audit logs
No archiving of audit logs
Hashing of audit logs
Remote access audit logs
Remote access audit logs could cause a Denial of Service (DoS) against an authentication system. A DoS attack is a type of attack that aims to disrupt or degrade the availability or performance of a system or a network by overwhelming it with excessive or malicious traffic or requests. An authentication system is a system that verifies the identity and credentials of the users or entities that want to access the system or network resources or services. An authentication system can use various methods or factors to authenticate the users or entities, such as passwords, tokens, certificates, biometrics, or behavioral patterns.
Remote access audit logs are records that capture and store the information about the events and activities that occur when the users or entities access the system or network remotely, such as via the internet, VPN, or dial-up. Remote access audit logs can provide a reactive and detective layer of security by enabling the monitoring and analysis of the remote access behavior, and facilitating the investigation and response of the incidents.
Remote access audit logs could cause a DoS against an authentication system, because they could consume a large amount of disk space, memory, or bandwidth on the authentication system, especially if the remote access is frequent, intensive, or malicious. This could affect the performance or functionality of the authentication system, and prevent or delay the legitimate users or entities from accessing the system or network resources or services. For example, an attacker could launch a DoS attack against an authentication system by sending a large number of fake or invalid remote access requests, and generating a large amount of remote access audit logs that fill up the disk space or memory of the authentication system, and cause it to crash or slow down.
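As a simple illustration of how such resource exhaustion might be caught early, the sketch below checks how much disk space a remote access log directory consumes so runaway growth can be detected before it exhausts the authentication server’s storage. The directory path and threshold are assumptions for the example.

```python
# Minimal sketch: watching how much disk space remote access audit logs consume,
# so runaway log growth can be caught before it exhausts the authentication
# server's storage. The directory path and threshold are assumptions.
import os

LOG_DIR = "/var/log/remote_access"     # hypothetical log directory
THRESHOLD_BYTES = 5 * 1024 ** 3        # alert once logs exceed 5 GiB

def directory_size(path: str) -> int:
    """Total size in bytes of all files under the directory."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total

used = directory_size(LOG_DIR)
if used > THRESHOLD_BYTES:
    print(f"WARNING: remote access logs use {used / 1024 ** 3:.1f} GiB; rotate or archive them")
```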
The other options are not the factors that could cause a DoS against an authentication system, but rather the factors that could improve or protect the authentication system. Encryption of audit logs is a technique that involves using a cryptographic algorithm and a key to transform the audit logs into an unreadable or unintelligible format, that can only be reversed or decrypted by authorized parties. Encryption of audit logs can enhance the security and confidentiality of the audit logs by preventing unauthorized access or disclosure of the sensitive information in the audit logs. However, encryption of audit logs could not cause a DoS against an authentication system, because it does not affect the availability or performance of the authentication system, but rather the integrity or privacy of the audit logs. No archiving of audit logs is a practice that involves not storing or transferring the audit logs to a separate or external storage device or location, such as a tape, disk, or cloud. No archiving of audit logs can reduce the security and availability of the audit logs by increasing the risk of loss or damage of the audit logs, and limiting the access or retrieval of the audit logs. However, no archiving of audit logs could not cause a DoS against an authentication system, because it does not affect the availability or performance of the authentication system, but rather the availability or preservation of the audit logs. Hashing of audit logs is a technique that involves using a hash function, such as MD5 or SHA, to generate a fixed-length and unique value, called a hash or a digest, that represents the audit logs. Hashing of audit logs can improve the security and integrity of the audit logs by verifying the authenticity or consistency of the audit logs, and detecting any modification or tampering of the audit logs. However, hashing of audit logs could not cause a DoS against an authentication system, because it does not affect the availability or performance of the authentication system, but rather the integrity or verification of the audit logs.
In which of the following programs is it MOST important to include the collection of security process data?
Quarterly access reviews
Security continuous monitoring
Business continuity testing
Annual security training
Security continuous monitoring is the program in which it is most important to include the collection of security process data. Security process data is the data that reflects the performance, effectiveness, and compliance of the security processes, such as the security policies, standards, procedures, and guidelines. Security process data can include metrics, indicators, logs, reports, and assessments. Security process data can provide several benefits, such as measuring the performance and effectiveness of the security controls and processes, identifying gaps, weaknesses, or deviations, and supporting continuous improvement, decision making, and compliance reporting.
Security continuous monitoring is the program in which it is most important to include the collection of security process data, because it is the program that involves maintaining the ongoing awareness of the security status, events, and activities of the system. Security continuous monitoring can enable the system to detect and respond to any security issues or incidents in a timely and effective manner, and to adjust and improve the security controls and processes accordingly. Security continuous monitoring can also help the system to comply with the security requirements and standards from the internal or external authorities or frameworks.
The other options are not the programs in which it is most important to include the collection of security process data, but rather programs that have other objectives or scopes. Quarterly access reviews are programs that involve reviewing and verifying the user accounts and access rights on a quarterly basis. Quarterly access reviews can ensure that the user accounts and access rights are valid, authorized, and up to date, and that any inactive, expired, or unauthorized accounts or rights are removed or revoked. However, quarterly access reviews are not the programs in which it is most important to include the collection of security process data, because they are not focused on the security status, events, and activities of the system, but rather on the user accounts and access rights. Business continuity testing is a program that involves testing and validating the business continuity plan (BCP) and the disaster recovery plan (DRP) of the system. Business continuity testing can ensure that the system can continue or resume its critical functions and operations in case of a disruption or disaster, and that the system can meet the recovery objectives and requirements. However, business continuity testing is not the program in which it is most important to include the collection of security process data, because it is not focused on the security status, events, and activities of the system, but rather on the continuity and recovery of the system. Annual security training is a program that involves providing and updating the security knowledge and skills of the system users and staff on an annual basis. Annual security training can increase the security awareness and competence of the system users and staff, and reduce the human errors or risks that might compromise the system security. However, annual security training is not the program in which it is most important to include the collection of security process data, because it is not focused on the security status, events, and activities of the system, but rather on the security education and training of the system users and staff.
Which one of the following affects the classification of data?
Assigned security label
Multilevel Security (MLS) architecture
Minimum query size
Passage of time
The passage of time is one of the factors that affects the classification of data. Data classification is the process of assigning a level of sensitivity or criticality to data based on its value, impact, and legal requirements. Data classification helps to determine the appropriate security controls and handling procedures for the data. However, data classification is not static, but dynamic, meaning that it can change over time depending on various factors. One of these factors is the passage of time, which can affect the relevance, usefulness, or sensitivity of the data. For example, data that is classified as confidential or secret at one point in time may become obsolete, outdated, or declassified at a later point in time, and thus require a lower level of protection. Conversely, data that is classified as public or unclassified at one point in time may become more valuable, sensitive, or regulated at a later point in time, and thus require a higher level of protection. Therefore, data classification should be reviewed and updated periodically to reflect the changes in the data over time.
The other options are not factors that affect the classification of data, but rather the outcomes or components of data classification. Assigned security label is the result of data classification, which indicates the level of sensitivity or criticality of the data. Multilevel Security (MLS) architecture is a system that supports data classification, which allows different levels of access to data based on the clearance and need-to-know of the users. Minimum query size is a parameter that can be used to enforce data classification, which limits the amount of data that can be retrieved or displayed at a time.
Which of the following is an effective control in preventing electronic cloning of Radio Frequency Identification (RFID) based access cards?
Personal Identity Verification (PIV)
Cardholder Unique Identifier (CHUID) authentication
Physical Access Control System (PACS) repeated attempt detection
Asymmetric Card Authentication Key (CAK) challenge-response
Asymmetric Card Authentication Key (CAK) challenge-response is an effective control in preventing electronic cloning of RFID based access cards. RFID based access cards are contactless cards that use radio frequency identification (RFID) technology to communicate with a reader and grant access to a physical or logical resource. RFID based access cards are vulnerable to electronic cloning, which is the process of copying the data and identity of a legitimate card to a counterfeit card, and using it to impersonate the original cardholder and gain unauthorized access. Asymmetric CAK challenge-response is a cryptographic technique that prevents electronic cloning by using public key cryptography and digital signatures to verify the authenticity and integrity of the card and the reader. Asymmetric CAK challenge-response works as follows: the reader sends a fresh random challenge (nonce) to the card; the card signs the challenge with its private Card Authentication Key and returns the signature together with its certificate; the reader validates the certificate and verifies the signature with the card’s public key, proving that the card holds the genuine private key. In a mutual variant, the card can likewise challenge the reader so that each side proves possession of its own private key.
Asymmetric CAK challenge-response prevents electronic cloning because the private keys of the card and the reader are never transmitted or exposed, and the signatures are unique and non-reusable for each transaction. Therefore, a cloned card cannot produce a valid signature without knowing the private key of the original card, and a rogue reader cannot impersonate a legitimate reader without knowing its private key.
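The following sketch illustrates the underlying asymmetric challenge-response idea, with an ECDSA key pair standing in for the Card Authentication Key. Generating the key inside the script is purely for illustration; on a real card the private key is generated on-card and never leaves the chip. The example assumes the Python cryptography package is installed.

```python
# Minimal sketch of an asymmetric challenge-response exchange, using an ECDSA key
# pair to stand in for the Card Authentication Key. Requires the 'cryptography'
# package. Key generation in the script is for illustration only.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

# Card side: a private key that never leaves the card, and a public key the
# reader learns from the card's certificate.
card_private_key = ec.generate_private_key(ec.SECP256R1())
card_public_key = card_private_key.public_key()

# Reader side: send a fresh random challenge (nonce) to the card.
challenge = os.urandom(16)

# Card side: sign the challenge with the private key.
signature = card_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# Reader side: verify the signature with the card's public key. A cloned card
# without the private key cannot produce a valid signature for a new challenge.
try:
    card_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("Card authenticated")
except InvalidSignature:
    print("Authentication failed - possible cloned card")
```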
The other options are not as effective as asymmetric CAK challenge-response in preventing electronic cloning of RFID based access cards. Personal Identity Verification (PIV) is a standard for federal employees and contractors to use smart cards for physical and logical access, but it does not specify the cryptographic technique for RFID based access cards. Cardholder Unique Identifier (CHUID) authentication is a technique that uses a unique number and a digital certificate to identify the card and the cardholder, but it does not prevent replay attacks or verify the reader’s identity. Physical Access Control System (PACS) repeated attempt detection is a technique that monitors and alerts on multiple failed or suspicious attempts to access a resource, but it does not prevent the cloning of the card or the impersonation of the reader.
An organization has doubled in size due to a rapid market share increase. The size of the Information Technology (IT) staff has maintained pace with this growth. The organization hires several contractors whose onsite time is limited. The IT department has pushed its limits building servers and rolling out workstations and has a backlog of account management requests.
Which contract is BEST in offloading the task from the IT staff?
Platform as a Service (PaaS)
Identity as a Service (IDaaS)
Desktop as a Service (DaaS)
Software as a Service (SaaS)
Identity as a Service (IDaaS) is the best contract in offloading the task of account management from the IT staff. IDaaS is a cloud-based service that provides identity and access management (IAM) functions, such as user authentication, authorization, provisioning, deprovisioning, password management, single sign-on (SSO), and multifactor authentication (MFA). IDaaS can help the organization to streamline and automate the account management process, reduce the workload and costs of the IT staff, and improve the security and compliance of the user accounts. IDaaS can also support the contractors who have limited onsite time, as they can access the organization’s resources remotely and securely through the IDaaS provider.
The other options are not as effective as IDaaS in offloading the task of account management from the IT staff, as they do not provide IAM functions. Platform as a Service (PaaS) is a cloud-based service that provides a platform for developing, testing, and deploying applications, but it does not manage the user accounts for the applications. Desktop as a Service (DaaS) is a cloud-based service that provides virtual desktops for users to access applications and data, but it does not manage the user accounts for the virtual desktops. Software as a Service (SaaS) is a cloud-based service that provides software applications for users to use, but it does not manage the user accounts for the software applications.
Which of the following BEST describes the responsibilities of a data owner?
Ensuring quality and validation through periodic audits for ongoing data integrity
Maintaining fundamental data availability, including data storage and archiving
Ensuring accessibility to appropriate users, maintaining appropriate levels of data security
Determining the impact the information has on the mission of the organization
The best description of the responsibilities of a data owner is determining the impact the information has on the mission of the organization. A data owner is a person or entity that has the authority and accountability for the creation, collection, processing, and disposal of a set of data. A data owner is also responsible for defining the purpose, value, and classification of the data, as well as the security requirements and controls for the data. A data owner should be able to determine the impact the information has on the mission of the organization, which means assessing the potential consequences of losing, compromising, or disclosing the data. The impact of the information on the mission of the organization is one of the main criteria for data classification, which helps to establish the appropriate level of protection and handling for the data.
The other options are not the best descriptions of the responsibilities of a data owner, but rather the responsibilities of other roles or functions related to data management. Ensuring quality and validation through periodic audits for ongoing data integrity is a responsibility of a data steward, who is a person or entity that oversees the quality, consistency, and usability of the data. Maintaining fundamental data availability, including data storage and archiving is a responsibility of a data custodian, who is a person or entity that implements and maintains the technical and physical security of the data. Ensuring accessibility to appropriate users, maintaining appropriate levels of data security is a responsibility of a data controller, who is a person or entity that determines the purposes and means of processing the data.
Which of the following is MOST important when assigning ownership of an asset to a department?
The department should report to the business owner
Ownership of the asset should be periodically reviewed
Individual accountability should be ensured
All members should be trained on their responsibilities
When assigning ownership of an asset to a department, the most important factor is to ensure individual accountability for the asset. Individual accountability means that each person who has access to or uses the asset is responsible for its protection and proper handling. Individual accountability also implies that each person who causes or contributes to a security breach or incident involving the asset can be identified and held liable. Individual accountability can be achieved by implementing security controls such as authentication, authorization, auditing, and logging.
The other options are not as important as ensuring individual accountability, as they do not directly address the security risks associated with the asset. The department should report to the business owner is a management issue, not a security issue. Ownership of the asset should be periodically reviewed is a good practice, but it does not prevent misuse or abuse of the asset. All members should be trained on their responsibilities is a preventive measure, but it does not guarantee compliance or enforcement of the responsibilities.
In a data classification scheme, the data is owned by the
system security managers
business managers
Information Technology (IT) managers
end users
In a data classification scheme, the data is owned by the business managers. Business managers are the persons or entities that have the authority and accountability for the creation, collection, processing, and disposal of a set of data. Business managers are also responsible for defining the purpose, value, and classification of the data, as well as the security requirements and controls for the data. Business managers should be able to determine the impact the information has on the mission of the organization, which means assessing the potential consequences of losing, compromising, or disclosing the data. The impact of the information on the mission of the organization is one of the main criteria for data classification, which helps to establish the appropriate level of protection and handling for the data.
The other options are not the data owners in a data classification scheme, but rather the other roles or functions related to data management. System security managers are the persons or entities that oversee the security of the information systems and networks that store, process, and transmit the data. They are responsible for implementing and maintaining the technical and physical security of the data, as well as monitoring and auditing the security performance and incidents. Information Technology (IT) managers are the persons or entities that manage the IT resources and services that support the business processes and functions that use the data. They are responsible for ensuring the availability, reliability, and scalability of the IT infrastructure and applications, as well as providing technical support and guidance to the users and stakeholders. End users are the persons or entities that access and use the data for their legitimate purposes and needs. They are responsible for complying with the security policies and procedures for the data, as well as reporting any security issues or violations.
Which of the following is an initial consideration when developing an information security management system?
Identify the contractual security obligations that apply to the organizations
Understand the value of the information assets
Identify the level of residual risk that is tolerable to management
Identify relevant legislative and regulatory compliance requirements
When developing an information security management system (ISMS), an initial consideration is to understand the value of the information assets that the organization owns or processes. An information asset is any data, information, or knowledge that has value to the organization and supports its mission, objectives, and operations. Understanding the value of the information assets helps to determine the appropriate level of protection and investment for them, as well as the potential impact and consequences of losing, compromising, or disclosing them. Understanding the value of the information assets also helps to identify the stakeholders, owners, and custodians of the information assets, and their roles and responsibilities in the ISMS.
The other options are not initial considerations, but rather subsequent or concurrent considerations when developing an ISMS. Identifying the contractual security obligations that apply to the organizations is a consideration that depends on the nature, scope, and context of the information assets, as well as the relationships and agreements with the external parties. Identifying the level of residual risk that is tolerable to management is a consideration that depends on the risk appetite and tolerance of the organization, as well as the risk assessment and analysis of the information assets. Identifying relevant legislative and regulatory compliance requirements is a consideration that depends on the legal and ethical obligations and expectations of the organization, as well as the jurisdiction and industry of the information assets.
When implementing a data classification program, why is it important to avoid too much granularity?
The process will require too many resources
It will be difficult to apply to both hardware and software
It will be difficult to assign ownership to the data
The process will be perceived as having value
When implementing a data classification program, it is important to avoid too much granularity, because the process will require too many resources. Data classification is the process of assigning a level of sensitivity or criticality to data based on its value, impact, and legal requirements. Data classification helps to determine the appropriate security controls and handling procedures for the data. However, data classification is not a simple or straightforward process, as it involves many factors, such as the nature, context, and scope of the data, the stakeholders, the regulations, and the standards. If the data classification program has too many levels or categories of data, it will increase the complexity, cost, and time of the process, and reduce the efficiency and effectiveness of the data protection. Therefore, data classification should be done with a balance between granularity and simplicity, and follow the principle of proportionality, which means that the level of protection should be proportional to the level of risk.
The other options are not the main reasons to avoid too much granularity in data classification, but rather the potential challenges or benefits of data classification. It will be difficult to apply to both hardware and software is a challenge of data classification, as it requires consistent and compatible methods and tools for labeling and protecting data across different types of media and devices. It will be difficult to assign ownership to the data is a challenge of data classification, as it requires clear and accountable roles and responsibilities for the creation, collection, processing, and disposal of data. The process will be perceived as having value is a benefit of data classification, as it demonstrates the commitment and awareness of the organization to protect its data assets and comply with its obligations.
An organization has outsourced its financial transaction processing to a Cloud Service Provider (CSP) who will provide them with Software as a Service (SaaS). If there was a data breach who is responsible for monetary losses?
The Data Protection Authority (DPA)
The Cloud Service Provider (CSP)
The application developers
The data owner
The data owner is the person who has the authority and responsibility for the data stored, processed, or transmitted by an Information System (IS). The data owner is responsible for the monetary losses if there was a data breach, as the data owner is accountable for the security, quality, and integrity of the data, as well as for defining the classification, sensitivity, retention, and disposal of the data. The Data Protection Authority (DPA) is not responsible for the monetary losses, but for the enforcement of the data protection laws and regulations. The Cloud Service Provider (CSP) is not responsible for the monetary losses, but for the provision of the cloud services and the protection of the cloud infrastructure. The application developers are not responsible for the monetary losses, but for the development and maintenance of the software applications. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, page 48; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 1: Security and Risk Management, page 40.
Which of the following is the BEST Identity-as-a-Service (IDaaS) solution for validating users?
Single Sign-On (SSO)
Security Assertion Markup Language (SAML)
Lightweight Directory Access Protocol (LDAP)
Open Authentication (OAuth)
The best Identity-as-a-Service (IDaaS) solution for validating users is Security Assertion Markup Language (SAML). IDaaS is a cloud-based service that provides identity and access management functions, such as authentication, authorization, and provisioning, to its customers. SAML is a standard protocol that enables the exchange of authentication and authorization information between different parties, such as the identity provider, the service provider, and the user. SAML can help to validate users in an IDaaS solution, as it allows users to access multiple cloud services with a single sign-on and provides service providers with the necessary identity and attribute assertions about the users. Single Sign-On (SSO), Lightweight Directory Access Protocol (LDAP), and Open Authentication (OAuth) are not themselves IDaaS solutions, but technologies or protocols that an IDaaS solution can use or support. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5: Security Engineering, page 654; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 3: Security Architecture and Engineering, page 437.
What does a Synchronous (SYN) flood attack do?
Forces Transmission Control Protocol /Internet Protocol (TCP/IP) connections into a reset state
Establishes many new Transmission Control Protocol / Internet Protocol (TCP/IP) connections
Empties the queue of pending Transmission Control Protocol /Internet Protocol (TCP/IP) requests
Exceeds the limits for new Transmission Control Protocol /Internet Protocol (TCP/IP) connections
A SYN flood attack does exceed the limits for new TCP/IP connections. A SYN flood attack is a type of denial-of-service attack that sends a large number of SYN packets to a server, without completing the TCP three-way handshake. The server allocates resources for each SYN packet and waits for the final ACK packet, which never arrives. This consumes the server’s memory and processing power, and prevents it from accepting new legitimate connections. The other options are not accurate descriptions of what a SYN flood attack does. References: SYN flood - Wikipedia; SYN flood DDoS attack | Cloudflare.
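As a simple illustration, the sketch below simulates how half-open connections fill a server’s pending-connection (SYN backlog) queue until new connection attempts are refused. The backlog size and counts are illustrative assumptions, not measurements of any real TCP stack.

```python
# Minimal sketch: simulating how half-open connections exhaust a server's SYN
# backlog during a SYN flood. Backlog size and arrival counts are assumptions.
from collections import deque

BACKLOG_LIMIT = 128                      # assumed size of the pending-connection queue
pending = deque()                        # half-open connections awaiting the final ACK
dropped = 0

# Attacker sends many SYNs from spoofed addresses and never completes the handshake.
for i in range(1000):
    if len(pending) < BACKLOG_LIMIT:
        pending.append(f"half-open connection {i}")   # server allocates state and waits
    else:
        dropped += 1                     # queue full: new (including legitimate) SYNs are refused

print(f"Half-open connections held: {len(pending)}")
print(f"New connection attempts refused: {dropped}")
```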
Extensible Authentication Protocol-Message Digest 5 (EAP-MD5) only provides which of the following?
Mutual authentication
Server authentication
User authentication
Streaming ciphertext data
Extensible Authentication Protocol-Message Digest 5 (EAP-MD5) is a type of EAP method that uses the MD5 hashing algorithm to provide user authentication. EAP is a framework that allows different authentication methods to be used in network access scenarios, such as wireless, VPN, or dial-up. EAP-MD5 only provides user authentication, which means that it verifies the identity of the user who is requesting access to the network, but not the identity of the network server who is granting access. Therefore, EAP-MD5 does not provide mutual authentication, server authentication, or streaming ciphertext data. EAP-MD5 is considered insecure and vulnerable to various attacks, such as offline dictionary attacks, man-in-the-middle attacks, or replay attacks, and should not be used in modern networks.
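For illustration, the sketch below shows the MD5 challenge-response computation that EAP-MD5 borrows from CHAP (RFC 1994), in which the response is the MD5 hash of the packet identifier, the shared secret, and the challenge. The identifier, secret, and challenge values are made up for the example.

```python
# Minimal sketch of the CHAP-style MD5 challenge-response used by EAP-MD5:
# response = MD5(identifier || shared secret || challenge).
# All values below are invented for illustration.
import hashlib
import os

identifier = bytes([0x01])            # EAP packet identifier (one octet)
shared_secret = b"user-password"      # secret known to the user and the server
challenge = os.urandom(16)            # random challenge sent by the authenticator

# Peer computes the response; the server performs the same computation and compares.
response = hashlib.md5(identifier + shared_secret + challenge).hexdigest()
print("EAP-MD5 response:", response)

# Only the user proves knowledge of the secret; the server is never authenticated,
# which is why EAP-MD5 provides user authentication but not mutual authentication.
```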
Match the name of access control model with its associated restriction.
Drag each access control model to its appropriate restriction access on the right.
Which security model is MOST commonly used in a commercial environment because it protects the integrity of financial and accounting data?
Biba
Graham-Denning
Clark-Wilson
Bell-LaPadula
The security model most commonly used in a commercial environment to protect the integrity of financial and accounting data is Clark-Wilson. A security model is a formal framework that defines the rules and principles for implementing and enforcing security policies and controls on a system or network, and can be oriented toward confidentiality, integrity, availability, or accountability. Clark-Wilson focuses on the integrity of data and transactions and is designed to prevent unauthorized or improper modification of data. It is built on separation of duties, which assigns different roles or functions to different parties so that no single party can perform all the steps of a transaction, and on well-formed transactions, which require that all operations on data are consistent, complete, and verifiable, and that they preserve the validity of the data. These properties make Clark-Wilson well suited to commercial environments, where the accuracy and reliability of financial and accounting data are critical: it helps ensure that the data reflect a true and fair view of the organization’s financial position, supports audit and compliance activities, and helps prevent or detect fraud such as embezzlement, falsification, or manipulation, which could cause financial losses or legal liabilities. Biba, Graham-Denning, and Bell-LaPadula are not the best answers. Biba is also an integrity model, based on the rules of no read down and no write up (a subject may only read data of equal or higher integrity and only write data of equal or lower integrity), but it does not address separation of duties or well-formed transactions, which are central to commercial transaction processing. Graham-Denning defines a set of rights and rules for securely creating, deleting, and transferring subjects and objects, rather than protecting transaction integrity. Bell-LaPadula is a confidentiality model, oriented toward preventing unauthorized disclosure, not an integrity model.
In an organization where Network Access Control (NAC) has been deployed, a device trying to connect to the network is being placed into an isolated domain. What could be done on this device in order to obtain proper connectivity?
Connect the device to another network jack
Apply remediations according to security requirements
Apply Operating System (OS) patches
Change the Message Authentication Code (MAC) address of the network interface
Network Access Control (NAC) is a technology that enforces security policies and controls on the devices that attempt to access a network. NAC can verify the identity and compliance of the devices, and grant or deny access based on predefined rules and criteria. NAC can also place the devices into different domains or segments, depending on their security posture and role. One of the domains that NAC can create is the isolated domain, a restricted network segment that isolates devices that do not meet the security requirements or pose a potential threat to the network. Devices in the isolated domain have limited or no access to network resources and are subject to remediation actions. Remediation is the process of fixing or improving the security status of a device by applying the necessary updates, patches, configurations, or software, and it can be performed automatically by the NAC system or manually by the device owner or administrator. Therefore, the best action for a device that has been placed into an isolated domain by NAC is to apply remediations according to the security requirements, which restores the device’s compliance and enables it to access the network normally.
After following the processes defined within the change management plan, a super user has upgraded a
device within an Information system.
What step would be taken to ensure that the upgrade did NOT affect the network security posture?
Conduct an Assessment and Authorization (A&A)
Conduct a security impact analysis
Review the results of the most recent vulnerability scan
Conduct a gap analysis with the baseline configuration
A security impact analysis is a process of assessing the potential effects of a change on the security posture of a system. It helps to identify and mitigate any security risks that may arise from the change, such as new vulnerabilities, configuration errors, or compliance issues. A security impact analysis should be conducted after following the change management plan and before implementing the change in the production environment. Conducting an A&A, reviewing the results of a vulnerability scan, or conducting a gap analysis with the baseline configuration are also possible steps to ensure the security of a system, but they are not specific to the change management process. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8: Software Development Security, page 961; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 8: Security Operations, page 1013.
Attack trees are MOST useful for which of the following?
Determining system security scopes
Generating attack libraries
Enumerating threats
Evaluating Denial of Service (DoS) attacks
Attack trees are most useful for enumerating threats. Attack trees are graphical models that represent the possible ways that an attacker can exploit a system or achieve a goal. Attack trees consist of nodes that represent the attacker’s actions or conditions, and branches that represent the logical relationships between the nodes. Attack trees can help to enumerate the threats that the system faces, as well as to analyze the likelihood, impact, and countermeasures of each threat. Attack trees are not useful for determining system security scopes, generating attack libraries, or evaluating DoS attacks, although they may be used as inputs or outputs for these tasks. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4: Security Operations, page 499; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 4: Communication and Network Security, page 552.
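As a rough illustration of how an attack tree supports threat enumeration, the following Python sketch walks a small, hypothetical tree and lists each root-to-leaf path; for AND nodes the listed leaves are steps of a single attack scenario.

# Hypothetical attack tree: the goal, nodes, and AND/OR structure are examples only.
ATTACK_TREE = {
    "goal": "read customer database",
    "type": "OR",  # any child path achieves the goal
    "children": [
        {"goal": "steal DBA credentials", "type": "OR", "children": [
            {"goal": "phish the DBA"},
            {"goal": "keylogger on DBA workstation"},
        ]},
        {"goal": "exploit SQL injection", "type": "AND", "children": [
            {"goal": "find injectable parameter"},
            {"goal": "bypass web application firewall"},
        ]},
    ],
}

def enumerate_paths(node, path=()):
    """Yield every root-to-leaf path in the tree."""
    path = path + (node["goal"],)
    children = node.get("children", [])
    if not children:
        yield path
        return
    for child in children:
        yield from enumerate_paths(child, path)

for threat in enumerate_paths(ATTACK_TREE):
    print(" -> ".join(threat))

Running the sketch prints four paths toward the hypothetical goal, each of which can then be assessed for likelihood, impact, and countermeasures.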
Which of the following is the MOST effective method to mitigate Cross-Site Scripting (XSS) attacks?
Use Software as a Service (SaaS)
Whitelist input validation
Require client certificates
Validate data output
The most effective method to mitigate Cross-Site Scripting (XSS) attacks is to use whitelist input validation. XSS attacks occur when an attacker injects malicious code, usually in the form of a script, into a web application that is then executed by the browser of an unsuspecting user. XSS attacks can compromise the confidentiality, integrity, and availability of the web application and the user’s data. Whitelist input validation is a technique that checks the user input against a predefined set of acceptable values or characters, and rejects any input that does not match the whitelist. Whitelist input validation can prevent XSS attacks by filtering out any malicious or unexpected input that may contain harmful scripts. Whitelist input validation should be applied at the point of entry of the user input, and should be combined with output encoding or sanitization to ensure that any input that is displayed back to the user is safe and harmless. Use Software as a Service (SaaS), require client certificates, and validate data output are not the most effective methods to mitigate XSS attacks, although they may be related or useful techniques. Use Software as a Service (SaaS) is a model that delivers software applications over the Internet, usually on a subscription or pay-per-use basis. SaaS can provide some benefits for web security, such as reducing the attack surface, outsourcing the maintenance and patching of the software, and leveraging the expertise and resources of the service provider. However, SaaS does not directly address the issue of XSS attacks, as the service provider may still have vulnerabilities or flaws in their web applications that can be exploited by XSS attackers. Require client certificates is a technique that uses digital certificates to authenticate the identity of the clients who access a web application. Client certificates are issued by a trusted certificate authority (CA), and contain the public key and other information of the client. Client certificates can provide some benefits for web security, such as enhancing the confidentiality and integrity of the communication, preventing unauthorized access, and enabling mutual authentication. However, client certificates do not directly address the issue of XSS attacks, as the client may still be vulnerable to XSS attacks if the web application does not properly validate and encode the user input. Validate data output is a technique that checks the data that is sent from the web application to the client browser, and ensures that it is correct, consistent, and safe. Validate data output can provide some benefits for web security, such as detecting and correcting any errors or anomalies in the data, preventing data leakage or corruption, and enhancing the quality and reliability of the web application. However, validate data output is not sufficient to prevent XSS attacks, as the data output may still contain malicious scripts that can be executed by the client browser. Validate data output should be complemented with output encoding or sanitization to ensure that any data output that is displayed to the user is safe and harmless.
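A minimal Python sketch of whitelist input validation combined with output encoding is shown below; the field rules and example values are hypothetical.

import html
import re

# Whitelist rules: only these exact patterns are accepted; everything else is rejected.
FIELD_RULES = {
    "username": re.compile(r"^[A-Za-z0-9_]{3,20}$"),
    "age": re.compile(r"^[0-9]{1,3}$"),
}

def validate(field, value):
    rule = FIELD_RULES.get(field)
    if rule is None or not rule.fullmatch(value):
        raise ValueError(f"rejected input for {field!r}")
    return value

def render_profile(username):
    # Output encoding as a second layer: anything reflected back is HTML-escaped.
    return f"<p>Welcome, {html.escape(username)}</p>"

print(render_profile(validate("username", "alice_01")))   # accepted and safely rendered
try:
    validate("username", "<script>alert(1)</script>")      # rejected by the whitelist
except ValueError as err:
    print(err)

The whitelist stops malicious input at the point of entry, and the escaping step ensures that anything displayed back to the user cannot execute as script.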
Which of the following MUST be scalable to address security concerns raised by the integration of third-party
identity services?
Mandatory Access Controls (MAC)
Enterprise security architecture
Enterprise security procedures
Role Based Access Controls (RBAC)
Enterprise security architecture is the framework that defines the security policies, standards, guidelines, and controls that govern the security of an organization’s information systems and assets. Enterprise security architecture must be scalable to address the security concerns raised by the integration of third-party identity services, such as Identity as a Service (IDaaS) or federated identity management. Scalability means that the enterprise security architecture can accommodate the increased complexity, diversity, and volume of identity and access management transactions and interactions that result from the integration of external identity providers and consumers. Scalability also means that the enterprise security architecture can adapt to the changing security requirements and threats that may arise from the integration of third-party identity services.
A company receives an email threat warning of an imminent Distributed Denial of Service (DDoS) attack
targeting its web application, unless ransom is paid. Which of the following techniques BEST addresses that threat?
Deploying load balancers to distribute inbound traffic across multiple data centers
Setting up Web Application Firewalls (WAFs) to filter out malicious traffic
Implementing reverse web-proxies to validate each new inbound connection
Coordinating with and utilizing capabilities within the Internet Service Provider (ISP)
The best technique to address the threat of an imminent DDoS attack targeting a web application is to coordinate with and utilize the capabilities within the ISP. A DDoS attack is a malicious attempt to disrupt the normal traffic of a targeted server, service, or network by overwhelming the target or its surrounding infrastructure with a flood of Internet traffic. A DDoS attack can cause severe damage to the availability, performance, and reputation of the web application, as well as incur financial losses and legal liabilities. Therefore, it is important to have a DDoS mitigation strategy in place to prevent or minimize the impact of such attacks. One of the most effective ways to mitigate DDoS attacks is to leverage the capabilities of the ISP, as they have more resources, bandwidth, and expertise to handle large volumes of traffic and filter out malicious packets. The ISP can also provide additional services such as traffic monitoring, alerting, reporting, and analysis, as well as assist with the investigation and prosecution of the attackers. The ISP can also work with other ISPs and network operators to coordinate the response and share information about the attack. The other options are not the best techniques to address the threat of an imminent DDoS attack, as they may not be sufficient, timely, or scalable to handle the attack. Deploying load balancers, setting up web application firewalls, and implementing reverse web-proxies are some of the measures that can be taken at the application level to improve the resilience and security of the web application, but they may not be able to cope with the magnitude and complexity of a DDoS attack, especially if the attack targets the network layer or the infrastructure layer. Moreover, these measures may require more time, cost, and effort to implement and maintain, and may not be feasible to deploy in a short notice. References: What is a distributed denial-of-service (DDoS) attack?; What is a DDoS Attack? DDoS Meaning, Definition & Types | Fortinet; Denial-of-service attack - Wikipedia.
When is a Business Continuity Plan (BCP) considered to be valid?
When it has been validated by the Business Continuity (BC) manager
When it has been validated by the board of directors
When it has been validated by all threat scenarios
When it has been validated by realistic exercises
A Business Continuity Plan (BCP) is considered to be valid when it has been validated by realistic exercises. A BCP is a part of a BCP/DRP that focuses on ensuring the continuous operation of the organization's critical business functions and processes during and after a disruption or disaster. A BCP should include components such as a business impact analysis, recovery strategies, assigned roles and responsibilities, and provisions for testing, training, exercises, and ongoing maintenance and review.
A BCP is considered to be valid when it has been validated by realistic exercises, because it can ensure that the BCP is practical and applicable, and that it can achieve the desired outcomes and objectives in a real-life scenario. Realistic exercises are a type of testing, training, and exercises that involve performing and practicing the BCP with the relevant stakeholders, using simulated or hypothetical scenarios, such as a fire drill, a power outage, or a cyberattack. Realistic exercises can provide several benefits, such as confirming that the documented procedures actually work, familiarizing staff with their roles and responsibilities, and revealing gaps or weaknesses in the plan before a real disruption occurs.
The other options are not the criteria for considering a BCP to be valid, but rather the steps or parties that are involved in developing or approving a BCP. When it has been validated by the Business Continuity (BC) manager is not a criterion for considering a BCP to be valid, but rather a step that is involved in developing a BCP. The BC manager is the person who is responsible for overseeing and coordinating the BCP activities and processes, such as the business impact analysis, the recovery strategies, the BCP document, the testing, training, and exercises, and the maintenance and review. The BC manager can validate the BCP by reviewing and verifying the BCP components and outcomes, and ensuring that they meet the BCP standards and objectives. However, the validation by the BC manager is not enough to consider the BCP to be valid, as it does not test or demonstrate the BCP in a realistic scenario. When it has been validated by the board of directors is not a criterion for considering a BCP to be valid, but rather a party that is involved in approving a BCP. The board of directors is the group of people who are elected by the shareholders to represent their interests and to oversee the strategic direction and governance of the organization. The board of directors can approve the BCP by endorsing and supporting the BCP components and outcomes, and allocating the necessary resources and funds for the BCP. However, the approval by the board of directors is not enough to consider the BCP to be valid, as it does not test or demonstrate the BCP in a realistic scenario. When it has been validated by all threat scenarios is not a criterion for considering a BCP to be valid, but rather an unrealistic or impossible expectation for validating a BCP. A threat scenario is a description or a simulation of a possible or potential disruption or disaster that might affect the organization’s critical business functions and processes, such as a natural hazard, a human error, or a technical failure. A threat scenario can be used to test and validate the BCP by measuring and evaluating the BCP’s performance and effectiveness in responding and recovering from the disruption or disaster. However, it is not possible or feasible to validate the BCP by all threat scenarios, as there are too many or unknown threat scenarios that might occur, and some threat scenarios might be too severe or complex to simulate or test. Therefore, the BCP should be validated by the most likely or relevant threat scenarios, and not by all threat scenarios.
A continuous information security-monitoring program can BEST reduce risk through which of the following?
Collecting security events and correlating them to identify anomalies
Facilitating system-wide visibility into the activities of critical user accounts
Encompassing people, process, and technology
Logging both scheduled and unscheduled system changes
A continuous information security monitoring program can best reduce risk through encompassing people, process, and technology. A continuous information security monitoring program is a process that involves maintaining the ongoing awareness of the security status, events, and activities of a system or network, by collecting, analyzing, and reporting the security data and information, using various methods and tools. A continuous information security monitoring program can provide several benefits, such as timely detection of security incidents, ongoing visibility into the effectiveness of security controls, and better-informed, risk-based decision making.
A continuous information security monitoring program can best reduce risk through encompassing people, process, and technology, because it can ensure that the continuous information security monitoring program is holistic and comprehensive, and that it covers all the aspects and elements of the system or network security. People, process, and technology are the three pillars of a continuous information security monitoring program: people are the staff who define, operate, and act on the monitoring program; process is the set of policies and procedures that govern what is monitored, how often, and how findings are handled; and technology is the set of tools that collect, analyze, and report the security data and information.
The other options are not the best ways to reduce risk through a continuous information security monitoring program, but rather specific or partial ways that can contribute to the risk reduction. Collecting security events and correlating them to identify anomalies is a specific way to reduce risk through a continuous information security monitoring program, but it is not the best way, because it only focuses on one aspect of the security data and information, and it does not address the other aspects, such as the security objectives and requirements, the security controls and measures, and the security feedback and improvement. Facilitating system-wide visibility into the activities of critical user accounts is a partial way to reduce risk through a continuous information security monitoring program, but it is not the best way, because it only covers one element of the system or network security, and it does not cover the other elements, such as the security threats and vulnerabilities, the security incidents and impacts, and the security response and remediation. Logging both scheduled and unscheduled system changes is a specific way to reduce risk through a continuous information security monitoring program, but it is not the best way, because it only focuses on one type of the security events and activities, and it does not focus on the other types, such as the security alerts and notifications, the security analysis and correlation, and the security reporting and documentation.
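As an illustration of what collecting and correlating security events can look like in practice within the technology layer of such a program, the following Python sketch flags repeated failed logins within a short window; the event format and threshold are hypothetical.

from datetime import datetime, timedelta

events = [
    {"time": datetime(2024, 1, 1, 9, 0, 0), "type": "login_failed", "user": "svc_backup"},
    {"time": datetime(2024, 1, 1, 9, 0, 5), "type": "login_failed", "user": "svc_backup"},
    {"time": datetime(2024, 1, 1, 9, 0, 9), "type": "login_failed", "user": "svc_backup"},
    {"time": datetime(2024, 1, 1, 9, 1, 0), "type": "login_ok",     "user": "alice"},
]

def correlate_failed_logins(events, window=timedelta(minutes=1), threshold=3):
    """Flag users with `threshold` or more failed logins inside the time window."""
    failures = [e for e in events if e["type"] == "login_failed"]
    anomalies = set()
    for e in failures:
        in_window = [f for f in failures
                     if f["user"] == e["user"] and abs(f["time"] - e["time"]) <= window]
        if len(in_window) >= threshold:
            anomalies.add(e["user"])
    return anomalies

print(correlate_failed_logins(events))  # {'svc_backup'}

Correlation of this kind is only one input; the people and process elements decide what counts as an anomaly and what happens once one is flagged.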
What is the PRIMARY reason for implementing change management?
Certify and approve releases to the environment
Provide version rollbacks for system changes
Ensure that all applications are approved
Ensure accountability for changes to the environment
Ensuring accountability for changes to the environment is the primary reason for implementing change management. Change management is a process that ensures that any changes to the system or network environment, such as the hardware, software, configuration, or documentation, are planned, approved, implemented, and documented in a controlled and consistent manner. Change management can provide several benefits, such as reducing the risk of unplanned outages, preserving the integrity of configurations and documentation, and providing an audit trail of who changed what, when, and why.
Ensuring accountability for changes to the environment is the primary reason for implementing change management, because it can ensure that the changes are authorized, justified, and traceable, and that the parties involved in the changes are responsible and accountable for their actions and results. Accountability can also help to deter or detect any unauthorized or malicious changes that might compromise the system or network environment.
The other options are not the primary reasons for implementing change management, but rather secondary or specific reasons for different aspects or phases of change management. Certifying and approving releases to the environment is a reason for implementing change management, but it is more relevant for the approval phase of change management, which is the phase that involves reviewing and validating the changes and their impacts, and granting or denying the permission to proceed with the changes. Providing version rollbacks for system changes is a reason for implementing change management, but it is more relevant for the implementation phase of change management, which is the phase that involves executing and monitoring the changes and their effects, and providing the backup and recovery options for the changes. Ensuring that all applications are approved is a reason for implementing change management, but it is more relevant for the application changes, which are the changes that affect the software components or services that provide the functionality or logic of the system or network environment.
With what frequency should monitoring of a control occur when implementing Information Security Continuous Monitoring (ISCM) solutions?
Continuously without exception for all security controls
Before and after each change of the control
At a rate concurrent with the volatility of the security control
Only during system implementation and decommissioning
Monitoring of a control should occur at a rate concurrent with the volatility of the security control when implementing Information Security Continuous Monitoring (ISCM) solutions. ISCM is a process that involves maintaining the ongoing awareness of the security status, events, and activities of a system or network, by collecting, analyzing, and reporting the security data and information, using various methods and tools. ISCM can provide several benefits, such as ongoing visibility into the security posture, early detection of control failures or degradation, and timely input to risk management decisions.
A security control is a measure or mechanism that is implemented to protect the system or network from the security threats or risks, by preventing, detecting, or correcting the security incidents or impacts. A security control can have various types, such as administrative, technical, or physical, and various attributes, such as preventive, detective, or corrective. A security control can also have different levels of volatility, which is the degree or frequency of change or variation of the security control, due to various factors, such as the security requirements, the threat landscape, or the system or network environment.
Monitoring of a control should occur at a rate concurrent with the volatility of the security control when implementing ISCM solutions, because it can ensure that the ISCM solutions can capture and reflect the current and accurate state and performance of the security control, and can identify and report any issues or risks that might affect the security control. Monitoring of a control at a rate concurrent with the volatility of the security control can also help to optimize the ISCM resources and efforts, by allocating them according to the priority and urgency of the security control.
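A minimal Python sketch of volatility-driven monitoring frequency follows; the control names, volatility ratings, and interval mapping are hypothetical.

# Hypothetical controls and their volatility ratings.
CONTROLS = {
    "firewall_ruleset":     {"volatility": "high"},    # changes frequently
    "backup_configuration": {"volatility": "medium"},  # changes occasionally
    "data_center_fencing":  {"volatility": "low"},     # rarely changes
}

# Hypothetical mapping from volatility to monitoring interval (in days).
MONITORING_INTERVAL_DAYS = {"high": 1, "medium": 30, "low": 180}

def monitoring_plan(controls):
    """Map each control to a monitoring interval matching its volatility."""
    return {name: MONITORING_INTERVAL_DAYS[attrs["volatility"]]
            for name, attrs in controls.items()}

print(monitoring_plan(CONTROLS))
# {'firewall_ruleset': 1, 'backup_configuration': 30, 'data_center_fencing': 180}

Highly volatile controls get checked daily while stable physical controls are checked far less often, which keeps monitoring effort proportional to how quickly each control can drift.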
The other options are not the correct frequencies for monitoring of a control when implementing ISCM solutions, but rather incorrect or unrealistic frequencies that might cause problems or inefficiencies for the ISCM solutions. Continuously without exception for all security controls is an incorrect frequency for monitoring of a control when implementing ISCM solutions, because it is not feasible or necessary to monitor all security controls at the same and constant rate, regardless of their volatility or importance. Continuously monitoring all security controls without exception might cause the ISCM solutions to consume excessive or wasteful resources and efforts, and might overwhelm or overload the ISCM solutions with too much or irrelevant data and information. Before and after each change of the control is an incorrect frequency for monitoring of a control when implementing ISCM solutions, because it is not sufficient or timely to monitor the security control only when there is a change of the security control, and not during the normal operation of the security control. Monitoring the security control only before and after each change might cause the ISCM solutions to miss or ignore the security status, events, and activities that occur between the changes of the security control, and might delay or hinder the ISCM solutions from detecting and responding to the security issues or incidents that affect the security control. Only during system implementation and decommissioning is an incorrect frequency for monitoring of a control when implementing ISCM solutions, because it is not appropriate or effective to monitor the security control only during the initial or final stages of the system or network lifecycle, and not during the operational or maintenance stages of the system or network lifecycle. Monitoring the security control only during system implementation and decommissioning might cause the ISCM solutions to neglect or overlook the security status, events, and activities that occur during the regular or ongoing operation of the system or network, and might prevent or limit the ISCM solutions from improving and optimizing the security control.
Which of the following is the FIRST step in the incident response process?
Determine the cause of the incident
Disconnect the system involved from the network
Isolate and contain the system involved
Investigate all symptoms to confirm the incident
Investigating all symptoms to confirm the incident is the first step in the incident response process. An incident is an event that violates or threatens the security, availability, integrity, or confidentiality of the IT systems or data. An incident response is a process that involves detecting, analyzing, containing, eradicating, recovering, and learning from an incident, using various methods and tools. An incident response can provide several benefits, such as limiting the damage and cost of an incident, restoring normal operations more quickly, and capturing lessons learned to prevent recurrence.
Investigating all symptoms to confirm the incident is the first step in the incident response process, because it can ensure that the incident is verified and validated, and that the incident response is initiated and escalated. A symptom is a sign or an indication that an incident may have occurred or is occurring, such as an alert, a log, or a report. Investigating all symptoms to confirm the incident involves collecting and analyzing the relevant data and information from various sources, such as the IT systems, the network, the users, or the external parties, and determining whether an incident has actually happened or is happening, and how serious or urgent it is. Investigating all symptoms to confirm the incident can also help to avoid spending response resources on false positives, and to establish the scope and severity needed to classify, prioritize, and escalate the incident appropriately.
The other options are not the first steps in the incident response process, but rather steps that should be done after or along with investigating all symptoms to confirm the incident. Determining the cause of the incident is a step that should be done after investigating all symptoms to confirm the incident, because it can ensure that the root cause and source of the incident are identified and analyzed, and that the incident response is directed and focused. Determining the cause of the incident involves examining and testing the affected IT systems and data, and tracing and tracking the origin and path of the incident, using various techniques and tools, such as forensics, malware analysis, or reverse engineering. Determining the cause of the incident can also help to select the appropriate containment and eradication measures, and to prevent the same incident from recurring.
Disconnecting the system involved from the network is a step that should be done along with investigating all symptoms to confirm the incident, because it can ensure that the system is isolated and protected from any external or internal influences or interferences, and that the incident response is conducted in a safe and controlled environment. Disconnecting the system involved from the network can also help to stop the spread of malicious activity to other systems and to preserve the current state of the system for forensic analysis.
Isolating and containing the system involved is a step that should be done after investigating all symptoms to confirm the incident, because it can ensure that the incident is confined and restricted, and that the incident response is continued and maintained. Isolating and containing the system involved involves applying and enforcing the appropriate security measures and controls to limit or stop the activity and impact of the incident on the IT systems and data, such as firewall rules, access policies, or encryption keys. Isolating and containing the system involved can also help to limit further damage while the eradication and recovery steps are prepared and carried out.
Intellectual property rights are PRIMARILY concerned with which of the following?
Owner’s ability to realize financial gain
Owner’s ability to maintain copyright
Right of the owner to enjoy their creation
Right of the owner to control delivery method
Intellectual property rights are primarily concerned with the owner’s ability to realize financial gain from their creation. Intellectual property is a category of intangible assets that are the result of human creativity and innovation, such as inventions, designs, artworks, literature, music, software, etc. Intellectual property rights are the legal rights that grant the owner the exclusive control over the use, reproduction, distribution, and modification of their intellectual property. Intellectual property rights aim to protect the owner’s interests and incentives, and to reward them for their contribution to the society and economy.
The other options are not the primary concern of intellectual property rights, but rather the secondary or incidental benefits or aspects of them. The owner’s ability to maintain copyright is a means of enforcing intellectual property rights, but not the end goal of them. The right of the owner to enjoy their creation is a personal or moral right, but not a legal or economic one. The right of the owner to control the delivery method is a specific or technical aspect of intellectual property rights, but not a general or fundamental one.
Which of the following types of technologies would be the MOST cost-effective method to provide a reactive control for protecting personnel in public areas?
Install mantraps at the building entrances
Enclose the personnel entry area with polycarbonate plastic
Supply a duress alarm for personnel exposed to the public
Hire a guard to protect the public area
Supplying a duress alarm for personnel exposed to the public is the most cost-effective method to provide a reactive control for protecting personnel in public areas. A duress alarm is a device that allows a person to signal for help in case of an emergency, such as an attack, a robbery, or a medical condition. A duress alarm can be activated by pressing a button, pulling a cord, or speaking a code word. A duress alarm can alert security personnel, law enforcement, or other responders to the location and nature of the emergency, and initiate appropriate actions. A duress alarm is a reactive control because it responds to an incident after it has occurred, rather than preventing it from happening.
The other options are not as cost-effective as supplying a duress alarm, as they involve more expensive or complex technologies or resources. Installing mantraps at the building entrances is a preventive control that restricts the access of unauthorized persons to the facility, but it also requires more space, maintenance, and supervision. Enclosing the personnel entry area with polycarbonate plastic is a preventive control that protects the personnel from physical attacks, but it also reduces the visibility and ventilation of the area. Hiring a guard to protect the public area is a deterrent control that discourages potential attackers, but it also involves paying wages, benefits, and training costs.
What is the MOST important consideration from a data security perspective when an organization plans to relocate?
Ensure the fire prevention and detection systems are sufficient to protect personnel
Review the architectural plans to determine how many emergency exits are present
Conduct a gap analysis of the new facilities against existing security requirements
Revise the Disaster Recovery and Business Continuity (DR/BC) plan
When an organization plans to relocate, the most important consideration from a data security perspective is to conduct a gap analysis of the new facilities against the existing security requirements. A gap analysis is a process that identifies and evaluates the differences between the current state and the desired state of a system or a process. In this case, the gap analysis would compare the security controls and measures implemented in the old and new locations, and identify any gaps or weaknesses that need to be addressed. The gap analysis would also help to determine the costs and resources needed to implement the necessary security improvements in the new facilities.
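As a simple illustration, a gap analysis can be thought of as a set difference between the existing security requirements and the controls already present at the new facility; the following Python sketch uses hypothetical requirement names.

# Hypothetical security requirements carried over from the current site.
existing_requirements = {
    "badge_access_to_server_room",
    "cctv_coverage_of_entrances",
    "encrypted_offsite_backups",
    "redundant_power_for_data_closet",
}

# Hypothetical controls already in place at the new facility.
new_facility_controls = {
    "badge_access_to_server_room",
    "cctv_coverage_of_entrances",
}

gaps = existing_requirements - new_facility_controls  # requirements not yet met
print(sorted(gaps))
# ['encrypted_offsite_backups', 'redundant_power_for_data_closet']

Each item in the resulting gap list can then be costed and scheduled before the relocation proceeds.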
The other options are not as important as conducting a gap analysis, as they do not directly address the data security risks associated with relocation. Ensuring the fire prevention and detection systems are sufficient to protect personnel is a safety issue, not a data security issue. Reviewing the architectural plans to determine how many emergency exits are present is also a safety issue, not a data security issue. Revising the Disaster Recovery and Business Continuity (DR/BC) plan is a good practice, but it is not a preventive measure, rather a reactive one. A DR/BC plan is a document that outlines how an organization will recover from a disaster and resume its normal operations. A DR/BC plan should be updated regularly, not only when relocating.
All of the following items should be included in a Business Impact Analysis (BIA) questionnaire EXCEPT questions that
determine the risk of a business interruption occurring
determine the technological dependence of the business processes
identify the operational impacts of a business interruption
identify the financial impacts of a business interruption
A Business Impact Analysis (BIA) is a process that identifies and evaluates the potential effects of natural and man-made disasters on business operations. The BIA questionnaire is a tool that collects information from business process owners and stakeholders about the criticality, dependencies, recovery objectives, and resources of their processes. The BIA questionnaire should include questions that determine the technological dependence of the business processes, and that identify the operational and financial impacts of a business interruption.
The BIA questionnaire should not include questions that determine the risk of a business interruption occurring, as this is part of the risk assessment process, which is a separate activity from the BIA. The risk assessment process identifies and analyzes the threats and vulnerabilities that could cause a business interruption, and estimates the likelihood and impact of such events. The risk assessment process also evaluates the existing controls and mitigation strategies, and recommends additional measures to reduce the risk to an acceptable level.
A company whose Information Technology (IT) services are being delivered from a Tier 4 data center is preparing a companywide Business Continuity Plan (BCP). Which of the following failures should the IT manager be concerned with?
Application
Storage
Power
Network
A company whose IT services are being delivered from a Tier 4 data center should be most concerned with application failures when preparing a companywide BCP. A BCP is a document that describes how an organization will continue its critical business functions in the event of a disruption or disaster. A BCP should include a risk assessment, a business impact analysis, a recovery strategy, and a testing and maintenance plan.
A Tier 4 data center is the highest level of data center classification, according to the Uptime Institute. A Tier 4 data center has the highest level of availability, reliability, and fault tolerance, as it has multiple and independent paths for power and cooling, and redundant and backup components for all systems. A Tier 4 data center has an uptime rating of 99.995%, which means it can only experience 0.4 hours of downtime per year. Therefore, the likelihood of a power, storage, or network failure in a Tier 4 data center is very low, and the impact of such a failure would be minimal, as the data center can quickly switch to alternative sources or routes.
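As a quick back-of-the-envelope check of that availability figure:

hours_per_year = 24 * 365
allowed_downtime = hours_per_year * (1 - 0.99995)
print(round(allowed_downtime, 2))  # about 0.44 hours per year, i.e. roughly the 0.4 hours cited above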
However, a Tier 4 data center cannot prevent or mitigate application failures, which are caused by software bugs, configuration errors, or malicious attacks. Application failures can affect the functionality, performance, or security of the IT services, and cause data loss, corruption, or breach. Therefore, the IT manager should be most concerned with application failures when preparing a BCP, and ensure that the applications are properly designed, tested, updated, and monitored.
Which of the following actions will reduce risk to a laptop before traveling to a high risk area?
Examine the device for physical tampering
Implement more stringent baseline configurations
Purge or re-image the hard disk drive
Change access codes
Purging or re-imaging the hard disk drive of a laptop before traveling to a high risk area will reduce the risk of data compromise or theft in case the laptop is lost, stolen, or seized by unauthorized parties. Purging or re-imaging the hard disk drive will erase all the data and applications on the laptop, leaving only the operating system and the essential software. This will minimize the exposure of sensitive or confidential information that could be accessed by malicious actors. Purging or re-imaging the hard disk drive should be done using secure methods that prevent data recovery, such as overwriting, degaussing, or physical destruction.
The other options will not reduce the risk to the laptop as effectively as purging or re-imaging the hard disk drive. Examining the device for physical tampering will only detect if the laptop has been compromised after the fact, but will not prevent it from happening. Implementing more stringent baseline configurations will improve the security settings and policies of the laptop, but will not protect the data if the laptop is bypassed or breached. Changing access codes will make it harder for unauthorized users to log in to the laptop, but will not prevent them from accessing the data if they use other methods, such as booting from a removable media or removing the hard disk drive.
Which of the following represents the GREATEST risk to data confidentiality?
Network redundancies are not implemented
Security awareness training is not completed
Backup tapes are generated unencrypted
Users have administrative privileges
Generating backup tapes unencrypted represents the greatest risk to data confidentiality, as it exposes the data to unauthorized access or disclosure if the tapes are lost, stolen, or intercepted. Backup tapes are often stored off-site or transported to remote locations, which increases the chances of them falling into the wrong hands. If the backup tapes are unencrypted, anyone who obtains them can read the data without any difficulty. Therefore, backup tapes should always be encrypted using strong algorithms and keys, and the keys should be protected and managed separately from the tapes.
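For illustration, the following Python sketch encrypts backup content before it is written to tape, using the third-party cryptography package (pip install cryptography); the data is a stand-in, and key management is intentionally out of scope here.

from cryptography.fernet import Fernet

key = Fernet.generate_key()        # store and manage this key separately from the media
cipher = Fernet(key)

backup_bytes = b"payroll export 2024-01"     # stand-in for real backup content
encrypted = cipher.encrypt(backup_bytes)     # what actually goes onto the tape

# Restoring requires the key, so a lost or stolen tape alone reveals nothing useful.
assert cipher.decrypt(encrypted) == backup_bytes

Because the ciphertext is useless without the key, encrypting before writing to tape directly addresses the exposure described above.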
The other options do not pose as much risk to data confidentiality as generating backup tapes unencrypted. Network redundancies are not implemented will affect the availability and reliability of the network, but not necessarily the confidentiality of the data. Security awareness training is not completed will increase the likelihood of human errors or negligence that could compromise the data, but not as directly as generating backup tapes unencrypted. Users have administrative privileges will grant users more access and control over the system and the data, but not as widely as generating backup tapes unencrypted.
When assessing an organization’s security policy according to standards established by the International Organization for Standardization (ISO) 27001 and 27002, when can management responsibilities be defined?
Only when assets are clearly defined
Only when standards are defined
Only when controls are put in place
Only when procedures are defined
When assessing an organization’s security policy according to standards established by the ISO 27001 and 27002, management responsibilities can be defined only when standards are defined. Standards are the specific rules, guidelines, or procedures that support the implementation of the security policy. Standards define the minimum level of security that must be achieved by the organization, and provide the basis for measuring compliance and performance. Standards also assign roles and responsibilities to different levels of management and staff, and specify the reporting and escalation procedures.
Management responsibilities are the duties and obligations that managers have to ensure the effective and efficient execution of the security policy and standards. Management responsibilities include providing leadership, direction, support, and resources for the security program, establishing and communicating the security objectives and expectations, ensuring compliance with the legal and regulatory requirements, monitoring and reviewing the security performance and incidents, and initiating corrective and preventive actions when needed.
Management responsibilities cannot be defined without standards, as standards provide the framework and criteria for defining what managers need to do and how they need to do it. Management responsibilities also depend on the scope and complexity of the security policy and standards, which may vary depending on the size, nature, and context of the organization. Therefore, standards must be defined before management responsibilities can be defined.
The other options are not correct, as they are not prerequisites for defining management responsibilities. Assets are the resources that need to be protected by the security policy and standards, but they do not determine the management responsibilities. Controls are the measures that are implemented to reduce the security risks and achieve the security objectives, but they do not determine the management responsibilities. Procedures are the detailed instructions that describe how to perform the security tasks and activities, but they do not determine the management responsibilities.
An important principle of defense in depth is that achieving information security requires a balanced focus on which PRIMARY elements?
Development, testing, and deployment
Prevention, detection, and remediation
People, technology, and operations
Certification, accreditation, and monitoring
An important principle of defense in depth is that achieving information security requires a balanced focus on the primary elements of people, technology, and operations. People are the users, administrators, managers, and other stakeholders who are involved in the security process. They need to be aware, trained, motivated, and accountable for their security roles and responsibilities. Technology is the hardware, software, network, and other tools that are used to implement the security controls and measures. They need to be selected, configured, updated, and monitored according to the security standards and best practices. Operations are the policies, procedures, processes, and activities that are performed to achieve the security objectives and requirements. They need to be documented, reviewed, audited, and improved continuously to ensure their effectiveness and efficiency.
The other options are not the primary elements of defense in depth, but rather the phases, functions, or outcomes of the security process. Development, testing, and deployment are the phases of the security life cycle, which describes how security is integrated into the system development process. Prevention, detection, and remediation are the functions of the security management, which describes how security is maintained and improved over time. Certification, accreditation, and monitoring are the outcomes of the security evaluation, which describes how security is assessed and verified against the criteria and standards.
Refer to the information below to answer the question.
An organization has hired an information security officer to lead their security department. The officer has adequate people resources but is lacking the other necessary components to have an effective security program. There are numerous initiatives requiring security involvement.
Given the number of priorities, which of the following will MOST likely influence the selection of top initiatives?
Severity of risk
Complexity of strategy
Frequency of incidents
Ongoing awareness
The most likely factor that will influence the selection of top initiatives is the severity of risk. The severity of risk is a measure of the impact or the consequence of a threat exploiting a vulnerability, and the likelihood or the probability of that occurrence. The severity of risk can help to prioritize the security initiatives, as it can indicate the level of urgency or importance of addressing or mitigating the risk, and the potential benefit or value of implementing the initiative. The security initiatives that have the highest severity of risk should be selected as the top initiatives, as they can provide the most protection or improvement for the security program. Complexity of strategy, frequency of incidents, and ongoing awareness are not the most likely factors that will influence the selection of top initiatives, as they are related to the difficulty, the occurrence, or the education of the security program, not the prioritization or the justification of the security initiatives. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 25. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 40.
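As an illustration of prioritizing initiatives by severity of risk, the following Python sketch scores each hypothetical initiative as likelihood multiplied by impact and sorts the list.

# Hypothetical initiatives with 1-5 likelihood and impact ratings.
initiatives = [
    {"name": "patch internet-facing servers", "likelihood": 5, "impact": 5},
    {"name": "update acceptable-use policy",  "likelihood": 2, "impact": 2},
    {"name": "encrypt laptops",               "likelihood": 4, "impact": 4},
]

for item in initiatives:
    item["risk"] = item["likelihood"] * item["impact"]  # severity of risk

top_initiatives = sorted(initiatives, key=lambda i: i["risk"], reverse=True)
for item in top_initiatives:
    print(item["risk"], item["name"])

The initiatives with the highest scores rise to the top of the list, which is the prioritization behavior the explanation above describes.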
Which of the following provides effective management assurance for a Wireless Local Area Network (WLAN)?
Maintaining an inventory of authorized Access Points (AP) and connecting devices
Setting the radio frequency to the minimum range required
Establishing a Virtual Private Network (VPN) tunnel between the WLAN client device and a VPN concentrator
Verifying that all default passwords have been changed
The action that provides effective management assurance for a WLAN is establishing a VPN tunnel between the WLAN client device and a VPN concentrator. A VPN is a secure and encrypted connection that enables remote access to a private network over a public network, such as the internet. A VPN concentrator is a device that manages and authenticates the VPN connections, and provides encryption and decryption services. By establishing a VPN tunnel, the organization can protect the confidentiality, integrity, and availability of the data transmitted over the WLAN, and prevent unauthorized or malicious access to the network. The other options are not as effective as establishing a VPN tunnel, as they either do not provide sufficient security for the WLAN (maintaining an inventory of authorized access points and connecting devices, and minimizing the radio frequency range), or do not address the management assurance aspect (verifying that all default passwords have been changed). References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, page 167; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, page 177.
What do Capability Maturity Models (CMM) serve as a benchmark for in an organization?
Experience in the industry
Definition of security profiles
Human resource planning efforts
Procedures in systems development
Capability Maturity Models (CMM) are frameworks that describe the key elements of effective processes for various domains, such as software engineering, project management, or information security. CMM serve as a benchmark for an organization to assess its current level of maturity and identify the areas for improvement. CMM can help an organization to establish, standardize, measure, control, and optimize its procedures in systems development, which is the process of creating, maintaining, and enhancing information systems. Experience in the industry, definition of security profiles, and human resource planning efforts are not the main focus of CMM, although they may be influenced by the maturity level of the organization. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, Software Development Security, page 1034. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 8, Software Development Security, page 1060.
When is security personnel involvement in the Systems Development Life Cycle (SDLC) process MOST beneficial?
Testing phase
Development phase
Requirements definition phase
Operations and maintenance phase
The most beneficial phase for security personnel involvement in the Systems Development Life Cycle (SDLC) process is the requirements definition phase. This is the phase where the security personnel can identify and analyze the security needs, objectives, and constraints of the system, and define the security requirements and specifications that the system must meet. By involving security personnel in this phase, the organization can ensure that security is integrated into the system design from the beginning, and avoid costly or complex changes or fixes later in the SDLC process. The other options are not as beneficial as the requirements definition phase, because the testing, development, and operations and maintenance phases all involve security personnel after the system's security requirements and design have already been set, when changes are more costly and complex to make. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, page 459; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 8, page 551.
An organization's data policy MUST include a data retention period which is based on
application dismissal.
business procedures.
digital certificates expiration.
regulatory compliance.
An organization's data policy must include a data retention period that is based on regulatory compliance. Regulatory compliance is the adherence to the laws, regulations, and standards that apply to the organization's industry, sector, or jurisdiction. Regulatory compliance may dictate how long the organization must retain certain types of data, such as financial records, health records, or tax records, and how the data must be stored, protected, and disposed of. The organization must follow the regulatory compliance requirements for data retention to avoid legal liabilities, fines, or sanctions. The other options are not the basis for the data retention period: application dismissal and digital certificate expiration do not relate to the data policy, and business procedures do not carry the same level of authority or obligation as regulatory requirements. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 2, page 68; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 2, page 74.
Refer to the information below to answer the question.
A new employee is given a laptop computer with full administrator access. This employee does not have a personal computer at home and has a child that uses the computer to send and receive e-mail, search the web, and use instant messaging. The organization’s Information Technology (IT) department discovers that a peer-to-peer program has been installed on the computer using the employee's access.
Which of the following could have MOST likely prevented the Peer-to-Peer (P2P) program from being installed on the computer?
Removing employee's full access to the computer
Supervising their child's use of the computer
Limiting computer's access to only the employee
Ensuring employee understands their business conduct guidelines
The best way to prevent the P2P program from being installed on the computer is to remove the employee’s full access to the computer. Full access or administrator access means that the user has the highest level of privilege or permission to perform any action or operation on the computer, such as installing, modifying, or deleting any software or file. By removing the employee’s full access to the computer, and assigning them a lower level of access, such as user or guest, the organization can restrict the employee’s ability to install unauthorized or potentially harmful programs, such as P2P programs, on the computer. Supervising their child’s use of the computer, limiting computer’s access to only the employee, and ensuring employee understands their business conduct guidelines are not the best ways to prevent the P2P program from being installed on the computer, as they are related to the monitoring, control, or awareness of the computer usage, not the restriction or limitation of the computer access. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, Identity and Access Management, page 660. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, Identity and Access Management, page 676.
If an attacker in a SYN flood attack uses someone else's valid host address as the source address, the system under attack will send a large number of Synchronize/Acknowledge (SYN/ACK) packets to the
default gateway.
attacker's address.
local interface being attacked.
specified source address.
A SYN flood attack is a type of denial-of-service attack that exploits the three-way handshake mechanism of the Transmission Control Protocol (TCP). The attacker sends a large number of TCP packets with the SYN flag set, indicating a request to establish a connection, to the target system, using a spoofed source address. The target system responds with a TCP packet with the SYN and ACK flags set, indicating an acknowledgment of the request, and waits for a final TCP packet with the ACK flag set, indicating the completion of the handshake, from the source address. However, since the source address is fake, the final ACK packet never arrives, and the target system keeps the connection half-open, consuming its resources and preventing legitimate connections. Therefore, the system under attack will send a large number of SYN/ACK packets to the specified source address, which is the spoofed address used by the attacker. The default gateway, the attacker’s address, and the local interface being attacked are not the destinations of the SYN/ACK packets in a SYN flood attack. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, Communication and Network Security, page 460. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, Communication and Network Security, page 476.
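A minimal Python simulation of the half-open connection buildup described above follows; the backlog size and addresses are hypothetical.

import random

BACKLOG_LIMIT = 128          # how many half-open connections the listener can hold (hypothetical)
half_open = {}               # (source_ip, source_port) -> state

def receive_syn(src_ip, src_port):
    if len(half_open) >= BACKLOG_LIMIT:
        return "dropped"     # legitimate clients start getting refused here
    half_open[(src_ip, src_port)] = "SYN_RECEIVED"
    # The SYN/ACK is sent to the (spoofed) source address, which never replies,
    # so the entry is only removed when the connection times out much later.
    return "syn_ack_sent_to_source"

# Attacker floods with spoofed sources; the backlog fills almost immediately.
for _ in range(200):
    receive_syn(f"198.51.100.{random.randint(1, 254)}", random.randint(1024, 65535))

print(len(half_open), "half-open connections held;",
      "new connections dropped" if len(half_open) >= BACKLOG_LIMIT else "capacity remains")

The simulation shows why the SYN/ACK packets go to the spoofed address and why the victim, not the attacker, pays the resource cost.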
Which of the following methods provides the MOST protection for user credentials?
Forms-based authentication
Digest authentication
Basic authentication
Self-registration
The method that provides the most protection for user credentials is digest authentication. Digest authentication is a type of authentication that verifies the identity of a user or a device by using a cryptographic hash function to transform the user credentials, such as username and password, into a digest or a hash value, before sending them over a network, such as the internet. Digest authentication can provide more protection for user credentials than basic authentication, which sends the user credentials in plain text, or forms-based authentication, which relies on the security of the web server or the web application. Digest authentication can prevent the interception, disclosure, or modification of the user credentials by third parties, and can also prevent replay attacks by using a nonce or a random value. Self-registration is not a method of authentication, but a process of creating a user account or a profile by providing some personal information, such as name, email, or phone number. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, Identity and Access Management, page 685. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, Identity and Access Management, page 701.
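For illustration, the following Python sketch computes the classic HTTP Digest response defined in RFC 2617 (MD5, without the qop extension), showing that only a hash of the credentials crosses the network; the realm, nonce, and credentials are hypothetical, and modern deployments generally prefer stronger protections such as TLS with token- or certificate-based authentication.

import hashlib

def md5_hex(text):
    return hashlib.md5(text.encode()).hexdigest()

def digest_response(username, password, realm, method, uri, nonce):
    ha1 = md5_hex(f"{username}:{realm}:{password}")  # the secret itself is never sent
    ha2 = md5_hex(f"{method}:{uri}")
    return md5_hex(f"{ha1}:{nonce}:{ha2}")           # server recomputes this and compares

print(digest_response("alice", "s3cret", "example", "GET", "/index.html", "abc123"))

Because the nonce changes per challenge, a captured response cannot simply be replayed, which is the replay-prevention property mentioned above.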
Refer to the information below to answer the question.
A large, multinational organization has decided to outsource a portion of their Information Technology (IT) organization to a third-party provider’s facility. This provider will be responsible for the design, development, testing, and support of several critical, customer-based applications used by the organization.
What additional considerations are there if the third party is located in a different country?
The organizational structure of the third party and how it may impact timelines within the organization
The ability of the third party to respond to the organization in a timely manner and with accurate information
The effects of transborder data flows and customer expectations regarding the storage or processing of their data
The quantity of data that must be provided to the third party and how it is to be used
The additional considerations when the third party is located in a different country are the effects of transborder data flows and customer expectations regarding the storage or processing of their data. Transborder data flows are movements or transfers of data across national or regional borders, for example over the internet, through cloud services, or as part of outsourcing arrangements. They can affect the security, privacy, compliance, and sovereignty of the data, because the laws, regulations, standards, and cultural norms of the countries or regions involved may differ. Customer expectations are the beliefs or assumptions customers hold about the quality, performance, and handling of the products or services they use, including where and how their data is stored and processed; these expectations vary with customers' needs, preferences, and values, and they influence the organization's reputation, customer loyalty, and profitability. The organization should therefore consider the legal, contractual, ethical, and cultural implications of transborder data flows and customer expectations, and should communicate, negotiate, and align with the third party and with its customers accordingly. The organizational structure of the third party and its impact on timelines, the third party's ability to respond in a timely manner with accurate information, and the quantity of data to be provided and how it will be used are management and communication considerations that apply to any outsourcing arrangement, not additional considerations specific to a third party located in a different country. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 59. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 74.
What is the PRIMARY advantage of using automated application security testing tools?
The application can be protected in the production environment.
Large amounts of code can be tested using fewer resources.
The application will fail less when tested using these tools.
Detailed testing of code functions can be performed.
Automated application security testing tools are software tools that can scan, analyze, and test the code of an application for vulnerabilities, errors, or flaws. The primary advantage of using these tools is that they can test large amounts of code using fewer resources, such as time, money, and human effort, than manual testing. This can improve the efficiency, effectiveness, and coverage of the testing process. The application can be protected in the production environment, the application will fail less when tested using these tools, and detailed testing of code functions can be performed are all possible outcomes of using automated application security testing tools, but they are not the primary advantage of using them. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, Software Development Security, page 1017. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 8, Software Development Security, page 1039.
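As a toy illustration of automated scanning covering large amounts of code with little manual effort, the following Python sketch uses the standard ast module to flag calls to eval; real tools perform far more extensive checks across entire codebases.

import ast

SOURCE = """
user_data = input()
result = eval(user_data)   # dangerous: executes arbitrary expressions
"""

def find_eval_calls(source, filename="<example>"):
    """Walk the syntax tree and report every call to eval()."""
    findings = []
    tree = ast.parse(source, filename)
    for node in ast.walk(tree):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "eval"):
            findings.append(f"{filename}:{node.lineno}: use of eval()")
    return findings

print(find_eval_calls(SOURCE))  # ['<example>:3: use of eval()']

The same function could be run over thousands of files unattended, which is the resource advantage the explanation above describes.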
For a service provider, which of the following MOST effectively addresses confidentiality concerns for customers using cloud computing?
Hash functions
Data segregation
File system permissions
Non-repudiation controls
For a service provider, data segregation is the most effective way to address confidentiality concerns for customers using cloud computing. Data segregation is the process of separating the data of different customers or tenants in a shared cloud environment, so that they cannot access or interfere with each other’s data. Data segregation can be achieved by using encryption, access control, virtualization, or other techniques. Data segregation can help to protect the confidentiality, integrity, and availability of the customer’s data, as well as to comply with the privacy and regulatory requirements. Hash functions, file system permissions, and non-repudiation controls are not the most effective ways to address confidentiality concerns for customers using cloud computing, as they do not provide the same level of isolation and protection as data segregation. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 3, Security Architecture and Engineering, page 337. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 3, Security Architecture and Engineering, page 353.
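As an illustration of the concept, the following minimal Python sketch shows one way logical data segregation can be enforced in a multi-tenant store: every record is encrypted with the owning tenant's own key, and every read is scoped to that tenant. It assumes the third-party cryptography package is available; the class and tenant names are purely illustrative, not a reference to any particular cloud provider's implementation.

```python
# Minimal sketch of logical data segregation in a multi-tenant store: each
# tenant's records are encrypted with that tenant's own key and every read is
# scoped to the requesting tenant. Assumes the third-party "cryptography"
# package; class and variable names are illustrative only.
from cryptography.fernet import Fernet

class TenantStore:
    def __init__(self):
        self._keys = {}    # tenant_id -> encryption key (one per tenant)
        self._rows = {}    # tenant_id -> list of encrypted records

    def onboard(self, tenant_id: str) -> None:
        self._keys[tenant_id] = Fernet.generate_key()
        self._rows[tenant_id] = []

    def write(self, tenant_id: str, record: bytes) -> None:
        f = Fernet(self._keys[tenant_id])
        self._rows[tenant_id].append(f.encrypt(record))

    def read_all(self, tenant_id: str) -> list[bytes]:
        # Access is scoped by tenant_id; tenant A's key cannot decrypt B's rows.
        f = Fernet(self._keys[tenant_id])
        return [f.decrypt(row) for row in self._rows[tenant_id]]

store = TenantStore()
store.onboard("tenant-a")
store.onboard("tenant-b")
store.write("tenant-a", b"customer ledger for A")
store.write("tenant-b", b"customer ledger for B")
print(store.read_all("tenant-a"))   # only tenant A's plaintext is returned
```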
Which of the following is the MOST crucial for a successful audit plan?
Defining the scope of the audit to be performed
Identifying the security controls to be implemented
Working with the system owner on new controls
Acquiring evidence of systems that are not compliant
An audit is an independent and objective examination of an organization’s activities, systems, processes, or controls to evaluate their adequacy, effectiveness, efficiency, and compliance with applicable standards, policies, laws, or regulations. An audit plan is a document that outlines the objectives, scope, methodology, criteria, schedule, and resources of an audit. The most crucial element of a successful audit plan is defining the scope of the audit to be performed, which is the extent and boundaries of the audit, such as the subject matter, the time period, the locations, the departments, the functions, the systems, or the processes to be audited. The scope of the audit determines what will be included or excluded from the audit, and it helps to ensure that the audit objectives are met and the audit resources are used efficiently and effectively. Identifying the security controls to be implemented, working with the system owner on new controls, and acquiring evidence of systems that are not compliant are all important tasks in an audit, but they are not the most crucial for a successful audit plan, as they depend on the scope of the audit to be defined first. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 54. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 69.
Refer to the information below to answer the question.
An organization experiencing a negative financial impact is forced to reduce budgets and the number of Information Technology (IT) operations staff performing basic logical access security administration functions. Security processes have been tightly integrated into normal IT operations and are not separate and distinct roles.
Which of the following will be the PRIMARY security concern as staff is released from the organization?
Inadequate IT support
Loss of data and separation of duties
Undocumented security controls
Additional responsibilities for remaining staff
The primary security concern as staff is released from the organization is the loss of data and separation of duties. The loss of data is the event or the situation where the data is deleted, corrupted, stolen, or leaked by the staff who are leaving the organization, either intentionally or unintentionally, and where the data is no longer available or recoverable by the organization. The loss of data can compromise the confidentiality, the integrity, and the availability of the data, and can cause damage or harm to the organization’s operations, reputation, or objectives. The separation of duties is the principle or the practice of dividing the tasks or the responsibilities among different staff or roles, to prevent or reduce the conflicts of interest, the collusion, the fraud, or the errors. The separation of duties can be compromised when the staff is released from the organization, as it can create the gaps or the overlaps in the tasks or the responsibilities, and it can increase the risk of the unauthorized or the malicious access or activity. Inadequate IT support, undocumented security controls, and additional responsibilities for remaining staff are not the primary security concerns as staff is released from the organization, as they are related to the quality, the transparency, or the workload of the IT operations, not the loss of data or the separation of duties. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 29. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 44.
An organization publishes and periodically updates its employee policies in a file on their intranet. Which of the following is a PRIMARY security concern?
Availability
Confidentiality
Integrity
Ownership
The primary security concern for an organization that publishes and periodically updates its employee policies in a file on their intranet is integrity. Integrity is the property that ensures that the data or the information is accurate, complete, consistent, and authentic, and that it has not been modified, altered, or corrupted by unauthorized or malicious parties. Integrity is a primary security concern for the employee policies file on the intranet, as it can affect the compliance, trust, and reputation of the organization, and the rights and responsibilities of the employees. The employee policies file must reflect the current and valid policies of the organization, and must not be changed or tampered with by anyone who is not authorized or qualified to do so. Availability, confidentiality, and ownership are not the primary security concerns for the employee policies file on the intranet, as they are related to the accessibility, protection, or attribution of the data or the information, not the accuracy or the authenticity of the data or the information. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 20. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 33.
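As a small illustration of the integrity concern, the sketch below recomputes the published file's SHA-256 digest and compares it with the digest recorded when the policy document was approved; the file name and stored digest are hypothetical examples.

```python
# Minimal integrity check for a published policy file: compare the file's
# current SHA-256 digest against the digest recorded at approval time.
# The file path and the stored digest are hypothetical examples.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    h.update(path.read_bytes())
    return h.hexdigest()

approved_digest = "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b"
policy_file = Path("employee_policies.pdf")

if sha256_of(policy_file) != approved_digest:
    print("WARNING: policy file differs from the approved version")
else:
    print("Policy file matches the approved version")
```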
Place the following information classification steps in sequential order.
The following information classification steps should be placed in sequential order as follows:
Information classification is a process or a method of categorizing the information assets based on their sensitivity, criticality, or value, and applying the appropriate security controls or measures to protect them. Information classification can help to ensure the confidentiality, the integrity, and the availability of the information assets, and to support the security, the compliance, or the business objectives of the organization. The information classification steps are the activities or the tasks that are involved in the information classification process, and they should be performed in a sequential order, as follows:
What is the MOST important reason to configure unique user IDs?
Supporting accountability
Reducing authentication errors
Preventing password compromise
Supporting Single Sign On (SSO)
Unique user IDs are essential for supporting accountability, which is the ability to trace actions or events to their source. Accountability is a key principle of security and helps to deter, detect, and correct unauthorized or malicious activities. Without unique user IDs, it would be difficult or impossible to identify who performed what action on a system or network. Reducing authentication errors, preventing password compromise, and supporting Single Sign On (SSO) are all possible benefits of using unique user IDs, but they are not the most important reason for configuring them. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 25. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 38.
What is a common challenge when implementing Security Assertion Markup Language (SAML) for identity integration between on-premise environment and an external identity provider service?
Some users are not provisioned into the service.
SAML tokens are provided by the on-premise identity provider.
Single users cannot be revoked from the service.
SAML tokens contain user information.
A common challenge when implementing SAML for identity integration between on-premise environment and an external identity provider service is that some users are not provisioned into the service. Provisioning is a process of creating, updating, or deleting the user accounts or profiles in a service or an application, based on the user identity or credentials. When implementing SAML for identity integration, the on-premise environment acts as the identity provider, which authenticates the user and issues the SAML assertion, and the external service acts as the service provider, which receives the SAML assertion and grants access to the user. However, if the user account or profile is not provisioned or synchronized in the external service, the user may not be able to access the service, even if they have a valid SAML assertion. Therefore, a common challenge when implementing SAML for identity integration is to ensure that the user provisioning is consistent and accurate between the on-premise environment and the external service. SAML tokens are provided by the on-premise identity provider, single users can be revoked from the service, and SAML tokens contain user information are not common challenges when implementing SAML for identity integration, as they are related to the functionality, granularity, or content of the SAML protocol, not the provisioning of the user accounts or profiles. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, Identity and Access Management, page 693. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, Identity and Access Management, page 709.
Secure startup mechanisms are PRIMARILY designed to thwart which of the following attacks?
Timing
Cold boot
Side channel
Acoustic cryptanalysis
Side channel attacks are a type of attack that exploit the physical characteristics of a system, such as power consumption, electromagnetic radiation, timing, sound, or temperature, to extract sensitive information. Secure startup mechanisms, such as secure boot or trusted boot, are primarily designed to thwart these types of attacks by verifying the integrity and authenticity of the system components before loading them into memory. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4: Security Architecture and Design, p. 201; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 3: Security Architecture and Engineering, p. 331.
Which of the following is the BEST reason to review audit logs periodically?
Verify they are operating properly
Monitor employee productivity
Identify anomalies in use patterns
Meet compliance regulations
The best reason to review audit logs periodically is to identify anomalies in use patterns that may indicate unauthorized or malicious activities, such as intrusion attempts, data breaches, policy violations, or system errors. Audit logs record the events and actions that occur on a system or network, and can provide valuable information for security analysis, investigation, and response. The other options are not as good as identifying anomalies, as they either do not relate to security (B), or are not the primary purpose of audit logs (A and D). References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7, page 405; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 7, page 465.
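A minimal, illustrative sketch of the kind of anomaly check described above: counting failed logins per account in an authentication log and flagging outliers. The log format and threshold are assumptions, not a reference to any specific product.

```python
# Toy anomaly check over an authentication log: flag accounts with an unusual
# number of failed logins. The log format and threshold are hypothetical.
from collections import Counter

log_lines = [
    "2024-05-01T09:01:12 alice LOGIN_FAILURE",
    "2024-05-01T09:01:15 alice LOGIN_SUCCESS",
    "2024-05-01T02:13:01 svc_backup LOGIN_FAILURE",
    "2024-05-01T02:13:02 svc_backup LOGIN_FAILURE",
    "2024-05-01T02:13:03 svc_backup LOGIN_FAILURE",
    "2024-05-01T02:13:04 svc_backup LOGIN_FAILURE",
]

failures = Counter(
    line.split()[1] for line in log_lines if line.endswith("LOGIN_FAILURE")
)
THRESHOLD = 3
for user, count in failures.items():
    if count >= THRESHOLD:
        print(f"Anomaly: {user} had {count} failed logins")
```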
Refer to the information below to answer the question.
An organization experiencing a negative financial impact is forced to reduce budgets and the number of Information Technology (IT) operations staff performing basic logical access security administration functions. Security processes have been tightly integrated into normal IT operations and are not separate and distinct roles.
Which of the following will indicate where the IT budget is BEST allocated during this time?
Policies
Frameworks
Metrics
Guidelines
The best indicator of where the IT budget is best allocated during this time is the metrics. The metrics are the measurements or the indicators of the performance, the effectiveness, the efficiency, or the quality of the IT processes, activities, or outcomes. The metrics can help to allocate the IT budget in a rational, objective, and evidence-based manner, as they can show the value, the impact, or the return of the IT investments, and they can identify the gaps, the risks, or the opportunities for the IT improvement or enhancement. The metrics can also help to justify, communicate, or report the IT budget allocation to the senior management or the stakeholders, and to align the IT budget allocation with the business needs and requirements. Policies, frameworks, and guidelines are not the best indicators of where the IT budget is best allocated during this time, as they are related to the documents or the models that define, guide, or standardize the IT processes, activities, or outcomes, not the measurements or the indicators of the IT performance, effectiveness, efficiency, or quality. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 38. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 53.
Which of the following is a MAJOR consideration in implementing a Voice over IP (VoIP) network?
Use of a unified messaging.
Use of separation for the voice network.
Use of Network Access Control (NAC) on switches.
Use of Request for Comments (RFC) 1918 addressing.
The use of Network Access Control (NAC) on switches is a major consideration in implementing a Voice over IP (VoIP) network. NAC is a mechanism that enforces security policies on the network devices, such as switches, routers, firewalls, and servers. NAC can prevent unauthorized or compromised devices from accessing the network, or limit their access to specific segments or resources. NAC can also monitor and remediate the devices for compliance with the security policies, such as patch level, antivirus status, or configuration settings. NAC can enhance the security and performance of a VoIP network, as well as reduce the operational costs and risks. References: Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 4: Communication and Network Security, p. 473; CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6: Communication and Network Security, p. 353.
A risk assessment report recommends upgrading all perimeter firewalls to mitigate a particular finding. Which of the following BEST supports this recommendation?
The inherent risk is greater than the residual risk.
The Annualized Loss Expectancy (ALE) approaches zero.
The expected loss from the risk exceeds mitigation costs.
The infrastructure budget can easily cover the upgrade costs.
The best factor that supports the recommendation of upgrading all perimeter firewalls to mitigate a particular finding is that the expected loss from the risk exceeds mitigation costs. The expected loss from the risk is the product of the probability of occurrence and the impact of the risk, which can be measured by the Annualized Loss Expectancy (ALE). The mitigation costs are the expenses associated with implementing the security controls or countermeasures to reduce the risk. If the expected loss from the risk exceeds the mitigation costs, then it is economically justified to invest in the mitigation strategy, such as upgrading the firewalls. The inherent risk is greater than the residual risk, the ALE approaches zero, and the infrastructure budget can easily cover the upgrade costs are not the best factors that support the recommendation, as they do not indicate the cost-benefit analysis or the return on investment of the mitigation strategy. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 47. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 62.
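A short worked example of this cost-benefit comparison, using the standard ALE = SLE x ARO relationship; all dollar figures are illustrative assumptions.

```python
# Worked example of the cost-benefit check described above, using the standard
# ALE = SLE x ARO formula. All dollar figures are illustrative assumptions.
single_loss_expectancy = 200_000      # SLE: expected loss per incident ($)
annual_rate_of_occurrence = 0.25      # ARO: incidents expected per year

ale_before = single_loss_expectancy * annual_rate_of_occurrence   # $50,000/yr
ale_after = 20_000                    # residual ALE assumed after the upgrade
annual_mitigation_cost = 15_000       # amortized yearly cost of the upgrade

net_benefit = (ale_before - ale_after) - annual_mitigation_cost
print(f"ALE before upgrade: ${ale_before:,.0f}")
print(f"Risk reduction: ${ale_before - ale_after:,.0f} vs cost ${annual_mitigation_cost:,.0f}")
print("Upgrade is justified" if net_benefit > 0 else "Upgrade is not justified")
```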
Which of the following is a process within a Systems Engineering Life Cycle (SELC) stage?
Requirements Analysis
Development and Deployment
Production Operations
Utilization Support
Requirements analysis is a process within the Systems Engineering Life Cycle (SELC) stage of Concept Development. It involves defining the problem, identifying the stakeholders, eliciting the requirements, analyzing the requirements, and validating the requirements. Requirements analysis is essential for ensuring that the system meets the needs and expectations of the users and customers. References: Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 3: Security Architecture and Engineering, p. 295; CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4: Security Architecture and Design, p. 149.
An organization decides to implement a partial Public Key Infrastructure (PKI) with only the servers having digital certificates. What is the security benefit of this implementation?
Clients can authenticate themselves to the servers.
Mutual authentication is available between the clients and servers.
Servers are able to issue digital certificates to the client.
Servers can authenticate themselves to the client.
A Public Key Infrastructure (PKI) is a system that provides the services and mechanisms for creating, managing, distributing, using, storing, and revoking digital certificates, which are electronic documents that bind a public key to an identity. A digital certificate can be used to authenticate the identity of an entity, such as a person, a device, or a server, that possesses the corresponding private key. An organization can implement a partial PKI with only the servers having digital certificates, which means that only the servers can prove their identity to the clients, but not vice versa. The security benefit of this implementation is that servers can authenticate themselves to the client, which can prevent impersonation, spoofing, or man-in-the-middle attacks by malicious servers. Clients can authenticate themselves to the servers, mutual authentication is available between the clients and servers, and servers are able to issue digital certificates to the client are not the security benefits of this implementation, as they require the clients to have digital certificates as well. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, Cryptography and Symmetric Key Algorithms, page 615. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, Cryptography and Symmetric Key Algorithms, page 631.
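The sketch below illustrates this one-way authentication with the Python standard library: the client verifies the server's certificate and hostname over TLS, but presents no certificate of its own, so only the server proves its identity. The hostname is an arbitrary example.

```python
# Sketch of one-way (server-only) certificate authentication with TLS: the
# client verifies the server's certificate and hostname, but presents no
# certificate of its own, so the server cannot cryptographically verify the
# client. Uses only the Python standard library.
import socket
import ssl

hostname = "www.example.com"              # any TLS server; illustrative choice
context = ssl.create_default_context()    # loads trusted CA roots, checks hostname

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()
        print("Server authenticated; certificate subject:", cert.get("subject"))
        # No client certificate was configured, so only the server proved its identity.
```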
Which of the following violates identity and access management best practices?
User accounts
System accounts
Generic accounts
Privileged accounts
The type of accounts that violates identity and access management best practices is generic accounts. Generic accounts are accounts that are shared by multiple users or devices, and do not have a specific or unique identity associated with them. Generic accounts are often used for convenience, compatibility, or legacy reasons, but they pose a serious security risk, as they can compromise the accountability, traceability, and auditability of the actions and activities performed by the users or devices. Generic accounts can also enable unauthorized or malicious access, as they may have weak or default passwords, or may not have proper access control or monitoring mechanisms. User accounts, system accounts, and privileged accounts are not the types of accounts that violate identity and access management best practices, as they are accounts that have a specific or unique identity associated with them, and can be subject to proper authentication, authorization, and auditing measures. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, Identity and Access Management, page 660. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, Identity and Access Management, page 676.
What is the BEST first step for determining if the appropriate security controls are in place for protecting data at rest?
Identify regulatory requirements
Conduct a risk assessment
Determine business drivers
Review the security baseline configuration
A risk assessment is the best first step for determining if the appropriate security controls are in place for protecting data at rest. A risk assessment involves identifying the assets, threats, vulnerabilities, and impacts related to the data, as well as the likelihood and severity of potential breaches. Based on the risk assessment, the appropriate security controls can be selected and implemented to mitigate the risks to an acceptable level. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, p. 35; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 1: Security and Risk Management, p. 41.
According to best practice, which of the following groups is the MOST effective in performing an information security compliance audit?
In-house security administrators
In-house Network Team
Disaster Recovery (DR) Team
External consultants
According to best practice, the most effective group in performing an information security compliance audit is external consultants. External consultants are independent and objective third parties that can provide unbiased and impartial assessment of the organization’s compliance with the security policies, standards, and regulations. External consultants can also bring expertise, experience, and best practices from other organizations and industries, and offer recommendations for improvement. The other options are not as effective as external consultants, as they either have a conflict of interest or lack of independence (A and B), or do not have the primary role or responsibility of conducting compliance audits (C). References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, page 240; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, page 302.
Refer to the information below to answer the question.
An organization experiencing a negative financial impact is forced to reduce budgets and the number of Information Technology (IT) operations staff performing basic logical access security administration functions. Security processes have been tightly integrated into normal IT operations and are not separate and distinct roles.
When determining appropriate resource allocation, which of the following is MOST important to monitor?
Number of system compromises
Number of audit findings
Number of staff reductions
Number of additional assets
The most important factor to monitor when determining appropriate resource allocation is the number of system compromises. The number of system compromises is the count or the frequency of the security incidents or breaches that affect the confidentiality, the integrity, or the availability of the system data or functionality, and that are caused by the unauthorized or the malicious access or activity. The number of system compromises can help to determine appropriate resource allocation, as it can indicate the level of security risk or threat that the system faces, and the level of security protection or improvement that the system needs. The number of system compromises can also help to evaluate the effectiveness or the efficiency of the current resource allocation, and to identify the areas or the domains that require more or less resources. Number of audit findings, number of staff reductions, and number of additional assets are not the most important factors to monitor when determining appropriate resource allocation, as they are related to the results or the outcomes of the audit process, the changes or the impacts of the staff size, or the additions or the expansions of the system resources, not the security incidents or breaches that affect the system data or functionality. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7, Security Operations, page 863. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 7, Security Operations, page 879.
Which component of a web application that stores the session state in a cookie can an attacker bypass?
An initialization check
An identification check
An authentication check
An authorization check
An authorization check is a component of a web application that stores the session state in a cookie that can be bypassed by an attacker. An authorization check verifies that the user has the appropriate permissions to access the requested resources or perform the desired actions. However, if the session state is stored in a cookie, an attacker can manipulate the cookie to change the user’s role or privileges, and bypass the authorization check. Therefore, it is recommended to store the session state on the server side, or use encryption and integrity protection for the cookie. References: Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 8: Software Development Security, p. 1015; CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8: Software Development Security, p. 503.
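A minimal sketch of the integrity-protection idea mentioned above: the session state kept in the cookie is signed with a server-side HMAC key, so a cookie edited to claim a higher role fails verification instead of bypassing the authorization check. The secret and field names are illustrative.

```python
# Why client-held session state needs integrity protection: an attacker can
# edit a plain cookie (e.g., change role=user to role=admin), but an HMAC
# computed with a server-side secret detects the change. Names are illustrative.
import hmac
import hashlib

SERVER_SECRET = b"server-side-secret-key"   # never sent to the client

def sign(value: str) -> str:
    mac = hmac.new(SERVER_SECRET, value.encode(), hashlib.sha256).hexdigest()
    return f"{value}|{mac}"

def verify(cookie: str):
    value, _, mac = cookie.rpartition("|")
    expected = hmac.new(SERVER_SECRET, value.encode(), hashlib.sha256).hexdigest()
    return value if hmac.compare_digest(mac, expected) else None

cookie = sign("user=alice;role=user")
tampered = cookie.replace("role=user", "role=admin")

print(verify(cookie))     # "user=alice;role=user" -> accepted
print(verify(tampered))   # None -> rejected; the authorization check holds
```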
Refer to the information below to answer the question.
A security practitioner detects client-based attacks on the organization’s network. A plan will be necessary to address these concerns.
What MUST the plan include in order to reduce client-side exploitation?
Approved web browsers
Network firewall procedures
Proxy configuration
Employee education
The plan must include employee education in order to reduce client-side exploitation. Employee education is a process of providing the employees with the necessary knowledge, skills, and awareness to follow the security policies and procedures, and to prevent or avoid the common security threats or risks, such as client-side exploitation. Client-side exploitation is a type of attack that targets the vulnerabilities or weaknesses of the client applications or systems, such as web browsers, email clients, or media players, and that can compromise the client data or functionality, or allow the attacker to gain access to the network or the server. Employee education can help to reduce client-side exploitation by teaching the employees how to recognize and avoid the malicious or suspicious links, attachments, or downloads, how to update and patch their client applications or systems, how to use the security tools or features, such as antivirus or firewall, and how to report or respond to any security incidents or breaches. Approved web browsers, network firewall procedures, and proxy configuration are not the plan components that must be included in order to reduce client-side exploitation, as they are related to the technical or administrative controls or measures, not the human or behavioral factors, that can affect the client-side security. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 47. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 62.
Refer to the information below to answer the question.
Desktop computers in an organization were sanitized for re-use in an equivalent security environment. The data was destroyed in accordance with organizational policy and all marking and other external indications of the sensitivity of the data that was formerly stored on the magnetic drives were removed.
Organizational policy requires the deletion of user data from Personal Digital Assistant (PDA) devices before disposal. It may not be possible to delete the user data if the device is malfunctioning. Which destruction method below provides the BEST assurance that the data has been removed?
Knurling
Grinding
Shredding
Degaussing
The best destruction method that provides the assurance that the data has been removed from a malfunctioning PDA device is shredding. Shredding is a method of physically destroying the media, such as flash memory cards, by cutting or tearing them into small pieces that make the data unrecoverable. Shredding can be effective in removing the data from a PDA device that cannot be deleted by software or firmware methods, as it does not depend on the functionality of the device or the media. Shredding can also prevent the reuse or the recycling of the media or the device, as it renders them unusable. Knurling, grinding, and degaussing are not the best destruction methods that provide the assurance that the data has been removed from a malfunctioning PDA device, as they are related to the methods of altering the surface, the shape, or the magnetic field of the media, not the methods of cutting or tearing the media into small pieces. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7, Security Operations, page 889. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 7, Security Operations, page 905.
Which of the following is the MOST effective attack against cryptographic hardware modules?
Plaintext
Brute force
Power analysis
Man-in-the-middle (MITM)
The most effective attack against cryptographic hardware modules is power analysis. Power analysis is a type of side-channel attack that exploits the physical characteristics or behavior of a cryptographic device, such as a smart card, a hardware security module, or a cryptographic processor, to extract secret information, such as keys, passwords, or algorithms. Power analysis measures the power consumption or the electromagnetic radiation of the device, and analyzes the variations or patterns that correspond to the cryptographic operations or the data being processed. Power analysis can reveal the internal state or the logic of the device, and can bypass the security mechanisms or the tamper resistance of the device. Power analysis can be performed with low-cost and widely available equipment, and can be very difficult to detect or prevent. Plaintext, brute force, and man-in-the-middle (MITM) are not the most effective attacks against cryptographic hardware modules, as they are related to the encryption or transmission of the data, not the physical properties or behavior of the device. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, Cryptography and Symmetric Key Algorithms, page 628. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, Cryptography and Symmetric Key Algorithms, page 644.
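The toy simulation below illustrates the principle behind correlation power analysis using the common Hamming-weight leakage model: simulated power samples follow the Hamming weight of the data being processed, and correlating key guesses against those samples recovers the key byte. It is purely a simulation of the leakage model, not an attack on real hardware.

```python
# Toy correlation power analysis: simulate power traces whose amplitude follows
# the Hamming weight of (plaintext XOR key_byte), then recover the key byte by
# correlating each guess against the traces. Simulation only; no device is measured.
import random

def hamming_weight(x: int) -> int:
    return bin(x).count("1")

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

secret_key_byte = 0x5A
plaintexts = [random.randrange(256) for _ in range(500)]
traces = [hamming_weight(p ^ secret_key_byte) + random.gauss(0, 0.5)
          for p in plaintexts]   # simulated power samples with noise

best_guess = max(
    range(256),
    key=lambda g: pearson([hamming_weight(p ^ g) for p in plaintexts], traces),
)
print(f"Recovered key byte: {best_guess:#04x}")   # prints 0x5a with these parameters
```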
Refer to the information below to answer the question.
During the investigation of a security incident, it is determined that an unauthorized individual accessed a system which hosts a database containing financial information.
Aside from the potential records which may have been viewed, which of the following should be the PRIMARY concern regarding the database information?
Unauthorized database changes
Integrity of security logs
Availability of the database
Confidentiality of the incident
The primary concern regarding the database information, aside from the potential records which may have been viewed, is the unauthorized database changes. The unauthorized database changes are the modifications or the alterations of the database information or structure, such as the data values, the data types, the data formats, the data relationships, or the data schemas, by an unauthorized individual or a malicious actor, such as the one who accessed the system hosting the database. The unauthorized database changes can compromise the integrity, the accuracy, the consistency, and the reliability of the database information, and can cause serious damage or harm to the organization’s operations, decisions, or reputation. The unauthorized database changes can also affect the availability, the performance, or the functionality of the database, and can create or exploit the vulnerabilities or the weaknesses of the database. Integrity of security logs, availability of the database, and confidentiality of the incident are not the primary concerns regarding the database information, aside from the potential records which may have been viewed, as they are related to the evidence, the accessibility, or the secrecy of the security incident, not the modification or the alteration of the database information. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7, Security Operations, page 865. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 7, Security Operations, page 881.
Which of the following is the PRIMARY benefit of a formalized information classification program?
It drives audit processes.
It supports risk assessment.
It reduces asset vulnerabilities.
It minimizes system logging requirements.
A formalized information classification program is a set of policies and procedures that define the categories, criteria, and responsibilities for classifying information assets according to their value, sensitivity, and criticality. The primary benefit of such a program is that it supports risk assessment, which is the process of identifying, analyzing, and evaluating the risks to the information assets and the organization. By classifying information assets, the organization can prioritize the protection of the most important and vulnerable assets, determine the appropriate security controls and measures, and allocate the necessary resources and budget. It drives audit processes, it reduces asset vulnerabilities, and it minimizes system logging requirements are all possible benefits of a formalized information classification program, but they are not the primary benefit of doing so. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 39. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 52.
Which of the following is a BEST practice when traveling internationally with laptops containing Personally Identifiable Information (PII)?
Use a thumb drive to transfer information from a foreign computer.
Do not take unnecessary information, including sensitive information.
Connect the laptop only to well-known networks like the hotel or public Internet cafes.
Request international points of contact help scan the laptop on arrival to ensure it is protected.
The best practice when traveling internationally with laptops containing Personally Identifiable Information (PII) is to do not take unnecessary information, including sensitive information. PII is any information that can be used to identify, contact, or locate a specific individual, such as name, address, phone number, email, social security number, or biometric data. PII is subject to various privacy and security laws and regulations, and must be protected from unauthorized access, use, disclosure, or theft. When traveling internationally with laptops containing PII, the best practice is to minimize the amount and type of PII that is stored or processed on the laptop, and to take only the information that is absolutely necessary for the business purpose. This can reduce the risk of losing, exposing, or compromising the PII, and the potential legal or reputational consequences. Using a thumb drive to transfer information from a foreign computer, connecting the laptop only to well-known networks like the hotel or public Internet cafes, and requesting international points of contact help scan the laptop on arrival to ensure it is protected are not the best practices when traveling internationally with laptops containing PII, as they may still expose the PII to various threats, such as malware, interception, or tampering, and may not comply with the privacy and security requirements of different countries or regions. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 43. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 56.
Why is a system's criticality classification important in large organizations?
It provides for proper prioritization and scheduling of security and maintenance tasks.
It reduces critical system support workload and reduces the time required to apply patches.
It allows for clear systems status communications to executive management.
It provides for easier determination of ownership, reducing confusion as to the status of the asset.
A system’s criticality classification is important in large organizations because it provides for proper prioritization and scheduling of security and maintenance tasks. A system’s criticality classification is the level of importance or impact that a system has on the organization’s mission, objectives, operations, or functions. A system’s criticality classification may depend on factors such as the system’s availability, integrity, confidentiality, functionality, performance, or reliability. A system’s criticality classification helps the organization to allocate resources, implement controls, perform audits, apply patches, conduct backups, and respond to incidents according to the system’s priority and risk. A system’s criticality classification does not necessarily reduce critical system support workload or the time required to apply patches, as these may depend on other factors such as the system’s complexity, configuration, or vulnerability. A system’s criticality classification may allow for clear systems status communications to executive management, but this is not the primary reason for its importance. A system’s criticality classification may provide for easier determination of ownership, but this is not the main benefit of its importance.
Which of the following is a potential risk when a program runs in privileged mode?
It may serve to create unnecessary code complexity
It may not enforce job separation duties
It may create unnecessary application hardening
It may allow malicious code to be inserted
A potential risk when a program runs in privileged mode is that it may allow malicious code to be inserted. Privileged mode, also known as kernel mode or supervisor mode, is a mode of operation that grants the program full access and control over the hardware and software resources of the system, such as memory, disk, CPU, and devices. A program that runs in privileged mode can perform any action or instruction without any restriction or protection. This can be exploited by an attacker who can inject malicious code into the program, such as a rootkit, a backdoor, or a keylogger, and gain unauthorized access or control over the system. References: What is Privileged Mode?; Privilege Escalation - OWASP Cheat Sheet Series
Why must all users be positively identified prior to using multi-user computers?
To provide access to system privileges
To provide access to the operating system
To ensure that unauthorized persons cannot access the computers
To ensure that management knows what users are currently logged on
The main reason why all users must be positively identified prior to using multi-user computers is to ensure that unauthorized persons cannot access the computers. Positive identification is the process of verifying the identity of a user or a device before granting access to a system or a resource2. Positive identification can be achieved by using one or more factors of authentication, such as something the user knows, has, or is. Positive identification can enhance the security and accountability of the system, and prevent unauthorized or malicious access. Providing access to system privileges, providing access to the operating system, and ensuring that management knows what users are currently logged on are not the primary reasons why all users must be positively identified prior to using multi-user computers, as they are more related to the functionality or administration of the system, rather than the security. References: 2: CISSP For Dummies, 7th Edition, Chapter 4, page 89.
Which of the following is an essential element of a privileged identity lifecycle management?
Regularly perform account re-validation and approval
Account provisioning based on multi-factor authentication
Frequently review performed activities and request justification
Account information to be provided by supervisor or line manager
A privileged identity lifecycle management is a process of managing the access rights and activities of users who have elevated permissions to access sensitive data or resources in an organization2. An essential element of a privileged identity lifecycle management is to regularly perform account re-validation and approval, which means verifying that the privileged users still need their access rights and have them approved by the appropriate authority. This can help prevent unauthorized or excessive access, reduce the risk of insider threats, and ensure compliance with policies and regulations. Account provisioning based on multi-factor authentication, frequently review performed activities and request justification, and account information to be provided by supervisor or line manager are also important aspects of a privileged identity lifecycle management, but they are not as essential as account re-validation and approval. References: 2: Official (ISC)2 CISSP CBK Reference, 5th Edition, Chapter 5, page 283.
Copyright provides protection for which of the following?
Ideas expressed in literary works
A particular expression of an idea
New and non-obvious inventions
Discoveries of natural phenomena
Copyright is a form of intellectual property that grants the author or creator of an original work the exclusive right to reproduce, distribute, perform, display, or license the work. Copyright does not protect ideas, concepts, facts, discoveries, or methods, but only the particular expression of an idea in a tangible medium, such as a book, a song, a painting, or a software program. References: 1: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 3, page 287. 2: CISSP For Dummies, 7th Edition, Chapter 3, page 87.
Which of the following is considered best practice for preventing e-mail spoofing?
Spam filtering
Cryptographic signature
Uniform Resource Locator (URL) filtering
Reverse Domain Name Service (DNS) lookup
The best practice for preventing e-mail spoofing is to use cryptographic signatures. E-mail spoofing is a technique that involves forging the sender’s address or identity in an e-mail message, usually to trick the recipient into opening a malicious attachment, clicking on a phishing link, or disclosing sensitive information. Cryptographic signatures are digital signatures created by signing the e-mail message (or a hash of it) with the sender’s private key and attaching the signature to the message. Cryptographic signatures can be used to verify the authenticity and integrity of the sender and the message, and to prevent e-mail spoofing. References: What is Email Spoofing?; How to Prevent Email Spoofing
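As a hedged illustration of the signing-and-verification idea (the mechanism underlying S/MIME, PGP, and DKIM), the sketch below signs a message with an RSA private key and shows that verification fails once the sender field is altered. It assumes the third-party cryptography package; the addresses are fictitious.

```python
# Sketch of signing and verifying a message with an RSA key pair, the kind of
# cryptographic signature that lets a recipient detect a spoofed sender.
# Assumes the third-party "cryptography" package is installed.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.exceptions import InvalidSignature

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()   # published, e.g., in DNS for DKIM

message = b"From: alice@example.com\r\nSubject: Invoice\r\n\r\nPlease pay..."
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)
signature = private_key.sign(message, pss, hashes.SHA256())

tampered = message.replace(b"alice@example.com", b"mallory@example.com")
try:
    public_key.verify(signature, tampered, pss, hashes.SHA256())
    print("Signature valid")
except InvalidSignature:
    print("Signature invalid: message or sender was altered")
```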
Which of the following actions should be performed when implementing a change to a database schema in a production system?
Test in development, determine dates, notify users, and implement in production
Apply change to production, run in parallel, finalize change in production, and develop a back-out strategy
Perform user acceptance testing in production, have users sign off, and finalize change
Change in development, perform user acceptance testing, develop a back-out strategy, and implement change
The best practice for implementing a change to a database schema in a production system is to follow a change management process that includes the following steps: Change in development, perform user acceptance testing, develop a back-out strategy, and implement change. This ensures that the change is properly tested, approved, documented, and communicated, and that there is a contingency plan in case of failure or unexpected results. References: 1: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, page 823. 2: CISSP For Dummies, 7th Edition, Chapter 8, page 263.
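A minimal sketch of that workflow, assuming SQLite as the example database: each schema change is paired with a back-out script, rehearsed against a development copy, and only then scheduled for production. Table and column names are illustrative, and SQLite's DROP COLUMN requires version 3.35 or later.

```python
# Sketch of the change-management flow above: every schema change ships with a
# back-out script and is rehearsed against a development copy before it ever
# touches production. Names are illustrative; SQLite DROP COLUMN needs 3.35+.
import sqlite3

change = {
    "up":   "ALTER TABLE customers ADD COLUMN loyalty_tier TEXT DEFAULT 'standard'",
    "down": "ALTER TABLE customers DROP COLUMN loyalty_tier",   # back-out strategy
}

def apply_change(db_path: str, sql: str) -> None:
    with sqlite3.connect(db_path) as conn:
        conn.execute(sql)

# 1. Rehearse in development and run user acceptance tests there.
dev = sqlite3.connect(":memory:")
dev.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
dev.execute(change["up"])
dev.execute(change["down"])   # prove the back-out works before go-live
dev.close()

# 2. Only after sign-off, schedule the same "up" script for production,
#    keeping "down" ready in case the change must be reverted.
# apply_change("production.db", change["up"])
```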
Alternate encoding such as hexadecimal representations is MOST often observed in which of the following forms of attack?
Smurf
Rootkit exploit
Denial of Service (DoS)
Cross site scripting (XSS)
Alternate encoding such as hexadecimal representations is most often observed in cross site scripting (XSS) attacks. XSS is a type of web application attack that involves injecting malicious code or scripts into a web page or a web application, usually through user input fields or parameters. The malicious code or script is then executed by the victim’s browser, and can perform various actions, such as stealing cookies, session tokens, or credentials, redirecting to malicious sites, or displaying fake content. Alternate encoding is a technique that is used by attackers to bypass input validation or filtering mechanisms, and to conceal or obfuscate the malicious code or script. Alternate encoding can use hexadecimal, decimal, octal, binary, or Unicode representations of the characters or symbols in the code or script. References: What is Cross-Site Scripting (XSS)?; XSS Filter Evasion Cheat Sheet
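The short sketch below shows why alternate encodings matter: a filter that looks only for the literal <script> string misses the hex- and entity-encoded forms of the same payload, so input must be canonicalized (decoded) before validation. It uses only the Python standard library.

```python
# The same XSS payload in three encodings: a filter that only looks for the
# literal string "<script>" misses the encoded forms, which is why input must
# be canonicalized (decoded) before validation and output must be encoded.
import html
import urllib.parse

payload = "<script>alert(1)</script>"
url_hex_encoded = "%3Cscript%3Ealert(1)%3C%2Fscript%3E"        # %3C is hex for '<'
html_entity_encoded = "&#x3C;script&#x3E;alert(1)&#x3C;/script&#x3E;"

naive_filter = lambda s: "<script>" not in s.lower()
print(naive_filter(url_hex_encoded))        # True  -> encoded payload slips through
print(naive_filter(html_entity_encoded))    # True  -> encoded payload slips through

# Canonicalize first, then validate.
print(naive_filter(urllib.parse.unquote(url_hex_encoded)))   # False -> caught
print(naive_filter(html.unescape(html_entity_encoded)))      # False -> caught
```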
As one component of a physical security system, an Electronic Access Control (EAC) token is BEST known for its ability to
overcome the problems of key assignments.
monitor the opening of windows and doors.
trigger alarms when intruders are detected.
lock down a facility during an emergency.
An Electronic Access Control (EAC) token is best known for its ability to overcome the problems of key assignments in a physical security system. An EAC token is a device that can be used to authenticate a user or grant access to a physical area or resource, such as a door, a gate, or a locker2. An EAC token can be a smart card, a magnetic stripe card, a proximity card, a key fob, or a biometric device. An EAC token can overcome the problems of key assignments, which are the issues or challenges of managing and distributing physical keys to authorized users, such as lost, stolen, duplicated, or unreturned keys. An EAC token can provide more security, convenience, and flexibility than a physical key, as it can be easily activated, deactivated, or replaced, and it can also store additional information or perform other functions. Monitoring the opening of windows and doors, triggering alarms when intruders are detected, and locking down a facility during an emergency are not the abilities that an EAC token is best known for, as they are more related to the functions of other components of a physical security system, such as sensors, alarms, or locks. References: 2: CISSP For Dummies, 7th Edition, Chapter 9, page 253.
Which one of the following is a threat related to the use of web-based client side input validation?
Users would be able to alter the input after validation has occurred
The web server would not be able to validate the input after transmission
The client system could receive invalid input from the web server
The web server would not be able to receive invalid input from the client
A threat related to the use of web-based client side input validation is that users would be able to alter the input after validation has occurred. Client side input validation is performed on the user’s browser using JavaScript or other scripting languages. It can provide faster and more user-friendly feedback to the user, but it can also be easily bypassed or manipulated by an attacker who disables JavaScript, uses a web proxy, or modifies the source code of the web page. Therefore, client side input validation should not be relied upon as the sole or primary method of preventing malicious or malformed input from reaching the web server. Server side input validation is also necessary to ensure the security and integrity of the web application. References: Input Validation - OWASP Cheat Sheet Series; Input Validation vulnerabilities and how to fix them
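A minimal sketch of the server-side counterpart: the server re-validates the input against an allow-list pattern regardless of what the browser-side JavaScript did. The pattern and field name are illustrative.

```python
# Server-side validation sketch: the server re-checks the input against an
# allow-list pattern even if the browser's JavaScript already "validated" it,
# because the client-side check can be disabled or bypassed with a proxy.
import re

USERNAME_PATTERN = re.compile(r"^[A-Za-z0-9_]{3,32}$")   # allow-list, not block-list

def handle_signup(form: dict) -> str:
    username = form.get("username", "")
    if not USERNAME_PATTERN.fullmatch(username):
        return "400 Bad Request: invalid username"
    return f"201 Created: {username}"

print(handle_signup({"username": "alice_01"}))                    # accepted
print(handle_signup({"username": "<script>alert(1)</script>"}))   # rejected server-side
```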
An engineer in a software company has created a virus creation tool. The tool can generate thousands of polymorphic viruses. The engineer is planning to use the tool in a controlled environment to test the company's next generation virus scanning software. Which would BEST describe the behavior of the engineer and why?
The behavior is ethical because the tool will be used to create a better virus scanner.
The behavior is ethical because any experienced programmer could create such a tool.
The behavior is not ethical because creating any kind of virus is bad.
The behavior is not ethical because such a tool could be leaked on the Internet.
Creating a virus creation tool that can generate thousands of polymorphic viruses is not ethical, even if the intention is to use it in a controlled environment to test the company’s next generation virus scanning software. Such a tool could be leaked on the Internet, either intentionally or accidentally, and fall into the hands of malicious actors who could use it to create and spread harmful viruses that could compromise the security and privacy of millions of users and systems. The engineer should follow the (ISC)2 Code of Ethics, which states that members and certificate holders shall protect society, the common good, necessary public trust and confidence, and the infrastructure. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, page 18. CISSP For Dummies, 7th Edition, Chapter 1, page 11.
Which one of the following considerations has the LEAST impact when considering transmission security?
Network availability
Data integrity
Network bandwidth
Node locations
Network bandwidth is the least important consideration when considering transmission security, as it is more related to the performance or efficiency of the network, rather than the security or protection of the data. Network bandwidth is the amount of data that can be transmitted or received over a network in a given time period, and it can affect the speed or quality of the communication1. However, network bandwidth does not directly impact the confidentiality, integrity, or availability of the data, which are the main goals of transmission security. Network availability, data integrity, and node locations are more important considerations when considering transmission security, as they can affect the ability to access, verify, or protect the data from unauthorized or malicious parties. References: 1: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7, page 402.
Which of the following is an appropriate source for test data?
Production data that is secured and maintained only in the production environment.
Test data that has no similarities to production data.
Test data that is mirrored and kept up-to-date with production data.
Production data that has been sanitized before loading into a test environment.
The most appropriate source for test data is production data that has been sanitized before loading into a test environment. Sanitization is the process of removing or modifying sensitive or confidential information from the data, such as personal identifiers, financial records, or trade secrets. Sanitized data preserves the characteristics and structure of the original data, but reduces the risk of exposing or compromising the data in the test environment. Production data that is secured and maintained only in the production environment is not a suitable source for test data, as it may not be accessible or available for testing purposes. Test data that has no similarities to production data is not a realistic or reliable source for test data, as it may not reflect the actual scenarios or conditions that the system will encounter in the production environment. Test data that is mirrored and kept up-to-date with production data is not a secure or ethical source for test data, as it may violate the privacy or confidentiality of the data owners or subjects, and expose the data to unauthorized access or modification in the test environment. References: 4: Data Sanitization: What It Is and How to Implement It. 5: Test Data Management: Best Practices and Methodologies
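An illustrative sketch of sanitizing production rows before they are loaded into a test environment: direct identifiers are replaced with irreversible pseudonyms and sensitive fields are masked while the record structure stays realistic. Field names and the salt are assumptions.

```python
# Sketch of sanitizing production records before they are copied into a test
# environment: direct identifiers are replaced with irreversible pseudonyms and
# sensitive fields are masked, while the record structure stays realistic.
import hashlib

SALT = b"rotate-this-salt-per-extract"   # illustrative; rotate per extract

def pseudonym(value: str) -> str:
    return hashlib.sha256(SALT + value.encode()).hexdigest()[:12]

def sanitize(record: dict) -> dict:
    return {
        "customer_id": pseudonym(record["customer_id"]),
        "name": "Customer " + pseudonym(record["name"])[:6],
        "ssn": "***-**-" + record["ssn"][-4:],    # partial mask only
        "balance": record["balance"],             # non-identifying, kept as-is
    }

production_row = {"customer_id": "C-1001", "name": "Jane Doe",
                  "ssn": "123-45-6789", "balance": 2500.00}
print(sanitize(production_row))
```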
Including a Trusted Platform Module (TPM) in the design of a computer system is an example of a technique to what?
Interface with the Public Key Infrastructure (PKI)
Improve the quality of security software
Prevent Denial of Service (DoS) attacks
Establish a secure initial state
Including a Trusted Platform Module (TPM) in the design of a computer system is an example of a technique to establish a secure initial state. A TPM is a hardware device that provides cryptographic functions and secure storage for keys, certificates, passwords, and other sensitive data. A TPM can also measure and verify the integrity of the system components, such as the BIOS, boot loader, operating system, and applications, before they are executed. This process is known as trusted boot or measured boot, and it ensures that the system is in a known and trusted state before allowing access to the user or network. A TPM can also enable features such as disk encryption, remote attestation, and platform authentication. References: 1: What is a Trusted Platform Module (TPM)? 2: Trusted Platform Module (TPM) Fundamentals
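A toy simulation of the measurement chain described above: each boot component's hash is folded into a Platform Configuration Register as PCR = SHA-256(PCR || hash(component)), so changing any early component changes the final value and the platform no longer matches its expected, trusted state. This models the extend operation only; it does not talk to real TPM hardware.

```python
# Toy simulation of a TPM Platform Configuration Register (PCR) extend
# operation used by measured boot: each boot component's hash is folded into
# the register, so tampering with any component yields a final value that
# differs from the expected "golden" measurement.
import hashlib

def extend(pcr: bytes, component: bytes) -> bytes:
    measurement = hashlib.sha256(component).digest()
    return hashlib.sha256(pcr + measurement).digest()

def measure_boot(components: list) -> bytes:
    pcr = b"\x00" * 32                      # PCRs start at all zeros
    for c in components:
        pcr = extend(pcr, c)
    return pcr

good_chain = [b"BIOS v1.2", b"bootloader v3", b"kernel 6.1"]
golden = measure_boot(good_chain)

tampered_chain = [b"BIOS v1.2", b"bootloader v3 (modified)", b"kernel 6.1"]
print(measure_boot(tampered_chain) == golden)   # False -> system not in trusted state
```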
Why MUST a Kerberos server be well protected from unauthorized access?
It contains the keys of all clients.
It always operates at root privilege.
It contains all the tickets for services.
It contains the Internet Protocol (IP) address of all network entities.
A Kerberos server must be well protected from unauthorized access because it contains the keys of all clients. Kerberos is a network authentication protocol that uses symmetric cryptography and a trusted third party, called the Key Distribution Center (KDC), to provide secure and mutual authentication between clients and servers2. The KDC consists of two components: the Authentication Server (AS) and the Ticket Granting Server (TGS). The AS issues a Ticket Granting Ticket (TGT) to the client after verifying its identity and password. The TGS issues a service ticket to the client after validating its TGT and the requested service. The client then uses the service ticket to access the service. The KDC stores the keys of all clients and services in its database, and uses them to encrypt and decrypt the tickets. If an attacker gains access to the KDC, they can compromise the keys and the tickets, and impersonate any client or service on the network. References: 2: CISSP For Dummies, 7th Edition, Chapter 4, page 91.
The use of strong authentication, the encryption of Personally Identifiable Information (PII) on database servers, application security reviews, and the encryption of data transmitted across networks provide
data integrity.
defense in depth.
data availability.
non-repudiation.
Defense in depth is a security strategy that involves applying multiple layers of protection to a system or network to prevent or mitigate attacks. The use of strong authentication, the encryption of Personally Identifiable Information (PII) on database servers, application security reviews, and the encryption of data transmitted across networks are examples of defense in depth measures that can enhance the security of the system or network.
A, C, and D are incorrect because they are not the best terms to describe the security strategy. Data integrity is a property of data that ensures its accuracy, consistency, and validity. Data availability is a property of data that ensures its accessibility and usability. Non-repudiation is a property of data that ensures its authenticity and accountability. While these properties are important for security, they are not the same as defense in depth.
The three PRIMARY requirements for a penetration test are
A defined goal, limited time period, and approval of management
A general objective, unlimited time, and approval of the network administrator
An objective statement, disclosed methodology, and fixed cost
A stated objective, liability waiver, and disclosed methodology
The three primary requirements for a penetration test are a defined goal, a limited time period, and an approval of management. A penetration test is a type of security assessment that simulates a malicious attack on an information system or network, with the permission of the owner, to identify and exploit vulnerabilities and evaluate the security posture of the system or network. A penetration test requires a defined goal, which is the specific objective or scope of the test, such as testing a particular system, network, application, or function. A penetration test also requires a limited time period, which is the duration or deadline of the test, such as a few hours, days, or weeks. A penetration test also requires an approval of management, which is the formal authorization and consent from the senior management of the organization that owns the system or network to be tested, as well as the management of the organization that conducts the test. A general objective, unlimited time, and approval of the network administrator are not the primary requirements for a penetration test, as they may not provide a clear and realistic direction, scope, and authorization for the test.
To prevent inadvertent disclosure of restricted information, which of the following would be the LEAST effective process for eliminating data prior to the media being discarded?
Multiple-pass overwriting
Degaussing
High-level formatting
Physical destruction
The least effective process for eliminating data prior to the media being discarded is high-level formatting. High-level formatting is the process of preparing a storage device, such as a hard disk or a flash drive, for data storage by creating a file system and marking the bad sectors. However, high-level formatting does not erase the data that was previously stored on the device. The data can still be recovered using data recovery tools or forensic techniques. To prevent inadvertent disclosure of restricted information, more secure methods of data sanitization should be used, such as multiple-pass overwriting, degaussing, or physical destruction. References: 3: Delete Sensitive Data before Discarding Your Media; 4: Best Practices for Media Destruction
The stringency of an Information Technology (IT) security assessment will be determined by the
system's past security record.
size of the system's database.
sensitivity of the system's data.
age of the system.
The stringency of an Information Technology (IT) security assessment will be determined by the sensitivity of the system’s data, as this reflects the level of risk and impact that a security breach could have on the organization and its stakeholders. The more sensitive the data, the more stringent the security assessment should be, as it should cover more aspects of the system, use more rigorous methods and tools, and provide more detailed and accurate results and recommendations. The system’s past security record, size of the system’s database, and age of the system are not the main factors that determine the stringency of the security assessment, as they do not directly relate to the value and importance of the data that the system processes, stores, or transmits. References: 3: Common Criteria for Information Technology Security Evaluation; 4: Information technology security assessment - Wikipedia
In a basic SYN flood attack, what is the attacker attempting to achieve?
Exceed the threshold limit of the connection queue for a given service
Set the threshold to zero for a given service
Cause the buffer to overflow, allowing root access
Flush the register stack, allowing hijacking of the root account
A SYN flood attack is a type of denial-of-service attack that exploits the TCP three-way handshake process. The attacker sends a large number of SYN packets to the target server, often with spoofed IP addresses, and does not complete the handshake by sending the final ACK packet. This causes the server to allocate resources for half-open connections, which eventually exceed the threshold of the connection queue for the service and prevent legitimate traffic from reaching the server.
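The effect on the connection queue can be illustrated with a harmless simulation (no packets are sent); the backlog size, the lack of a timeout, and the address range below are made-up values for illustration only.
```python
# Illustrative simulation of a SYN backlog being exhausted by half-open
# connections; nothing is transmitted on the network.
from collections import deque
import itertools
import random

BACKLOG_SIZE = 128          # server's half-open (SYN_RECEIVED) queue limit
half_open = deque()         # connections still waiting for the final ACK

def receive_syn(src_ip):
    """Server allocates a backlog slot when a SYN arrives."""
    if len(half_open) >= BACKLOG_SIZE:
        return False        # queue full: further SYNs (including legitimate ones) are dropped
    half_open.append(src_ip)
    return True

# Attacker sends SYNs from spoofed addresses and never completes the handshake.
for i in itertools.count():
    spoofed = f"198.51.100.{random.randint(1, 254)}"
    if not receive_syn(spoofed):
        print(f"Backlog exhausted after {i} spoofed SYNs; new connections refused.")
        break
```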
A practice that permits the owner of a data object to grant other users access to that object would usually provide
Mandatory Access Control (MAC).
owner-administered control.
owner-dependent access control.
Discretionary Access Control (DAC).
A practice that permits the owner of a data object to grant other users access to that object would usually provide Discretionary Access Control (DAC). DAC is a type of access control that allows the data owner or creator to decide who can access or modify the data object, based on their identity or membership in a group. DAC is implemented using access control lists (ACLs), which specify the permissions or rights of each user or group for each data object. DAC is flexible and easy to implement, but it can also pose a security risk if the data owner grants excessive or inappropriate access to unauthorized or malicious users. Mandatory Access Control (MAC), owner-administered control, and owner-dependent access control are not types of access control that permit the owner of a data object to grant other users access to that object, as they are either based on predefined rules or policies, or not related to access control at all. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6, page 354.
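A minimal sketch of the DAC idea, assuming a hypothetical DacObject class in Python: the owner edits the object's ACL directly and grants rights at their own discretion.
```python
# Minimal sketch of Discretionary Access Control: the object's owner edits the
# ACL directly. Names and permission strings are illustrative only.
class DacObject:
    def __init__(self, owner):
        self.owner = owner
        self.acl = {owner: {"read", "write"}}   # owner starts with full rights

    def grant(self, requester, user, perms):
        # Only the owner may change the ACL -- the essence of DAC.
        if requester != self.owner:
            raise PermissionError("only the owner may grant access")
        self.acl.setdefault(user, set()).update(perms)

    def can(self, user, perm):
        return perm in self.acl.get(user, set())

doc = DacObject(owner="alice")
doc.grant("alice", "bob", {"read"})     # the owner shares the object at her discretion
print(doc.can("bob", "read"))           # True
print(doc.can("bob", "write"))          # False
```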
Which of the following BEST represents the principle of open design?
Disassembly, analysis, or reverse engineering will reveal the security functionality of the computer system.
Algorithms must be protected to ensure the security and interoperability of the designed system.
A knowledgeable user should have limited privileges on the system to prevent their ability to compromise security capabilities.
The security of a mechanism should not depend on the secrecy of its design or implementation.
This is the principle of open design, which states that the security of a system or mechanism should rely on the strength of its key or algorithm, rather than on the obscurity of its design or implementation. This principle is based on the assumption that the adversary has full knowledge of the system or mechanism, and that the security should still hold even if that is the case. The other options are not consistent with the principle of open design, as they either imply that the security depends on hiding or protecting the design or implementation (A and B), or that the user’s knowledge or privileges affect the security (C). References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 3, page 105; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 3, page 109.
Multi-threaded applications are more at risk than single-threaded applications to
race conditions.
virus infection.
packet sniffing.
database injection.
Multi-threaded applications are more at risk than single-threaded applications to race conditions. A race condition is a type of concurrency error that occurs when two or more threads access or modify the same shared resource without proper synchronization or coordination. This may result in inconsistent, unpredictable, or erroneous outcomes, as the final result depends on the timing and order of the thread execution. Race conditions can compromise the security, reliability, and functionality of the application, and can lead to data corruption, memory leaks, deadlock, or privilege escalation. References: 1: What is a Race Condition?; 2: Race Conditions - OWASP Cheat Sheet Series
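A classic illustration in Python: several threads increment a shared counter without synchronization, so updates can be lost; the iteration counts and the lock-based fix are illustrative.
```python
# Unsynchronized read-modify-write on a shared counter: a race condition.
import threading

counter = 0

def worker(n):
    global counter
    for _ in range(n):
        counter += 1        # read-modify-write is not atomic

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # can be less than 400000 because interleaved updates are lost

# Fix: serialize access to the shared resource with a lock.
lock = threading.Lock()

def safe_worker(n):
    global counter
    for _ in range(n):
        with lock:          # only one thread updates the counter at a time
            counter += 1
```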
The overall goal of a penetration test is to determine a system's
ability to withstand an attack.
capacity management.
error recovery capabilities.
reliability under stress.
A penetration test is a simulated attack on a system or network, performed by authorized testers, to evaluate the security posture and identify vulnerabilities that could be exploited by malicious actors. The overall goal of a penetration test is to determine the system’s ability to withstand an attack, and to provide recommendations for improving the security controls and mitigating the risks. References: 1: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7, page 757; 2: CISSP For Dummies, 7th Edition, Chapter 7, page 233.
Which of the following is the MAIN reason that system re-certification and re-accreditation are needed?
To assist data owners in making future sensitivity and criticality determinations
To assure the software development team that all security issues have been addressed
To verify that security protection remains acceptable to the organizational security policy
To help the security team accept or reject new systems for implementation and production
The main reason that system re-certification and re-accreditation are needed is to verify that the security protection of the system remains acceptable to the organizational security policy, especially after significant changes or updates to the system. Re-certification is the process of reviewing and testing the security controls of the system to ensure that they are still effective and compliant with the security policy. Re-accreditation is the process of authorizing the system to operate based on the results of the re-certification. The other options are not the main reason for system re-certification and re-accreditation, as they either do not relate to the security protection of the system (A and D), or do not involve re-certification and re-accreditation (B). References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 10, page 633; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 10, page 695.
The goal of software assurance in application development is to
enable the development of High Availability (HA) systems.
facilitate the creation of Trusted Computing Base (TCB) systems.
prevent the creation of vulnerable applications.
encourage the development of open source applications.
The goal of software assurance in application development is to prevent the creation of vulnerable applications. Software assurance is the process of ensuring that the software is designed, developed, and maintained in a secure, reliable, and trustworthy manner. Software assurance involves applying security principles, standards, and best practices throughout the software development life cycle, such as security requirements, design, coding, testing, deployment, and maintenance. Software assurance aims to prevent or reduce the introduction of vulnerabilities, defects, or errors in the software that could compromise its security, functionality, or quality. References: Software Assurance; Software Assurance - OWASP Cheat Sheet Series
The BEST method of demonstrating a company's security level to potential customers is
a report from an external auditor.
responding to a customer's security questionnaire.
a formal report from an internal auditor.
a site visit by a customer's security team.
The best method of demonstrating a company’s security level to potential customers is a report from an external auditor, who is an independent and qualified third party that evaluates the company’s security policies, procedures, controls, and practices against a set of standards or criteria, such as ISO 27001, NIST, or COBIT. A report from an external auditor provides an objective and credible assessment of the company’s security posture, and may also include recommendations for improvement or certification. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, page 47; CISSP For Dummies, 7th Edition, Chapter 1, page 29.
Which of the following does Temporal Key Integrity Protocol (TKIP) support?
Multicast and broadcast messages
Coordination of IEEE 802.11 protocols
Wired Equivalent Privacy (WEP) systems
Synchronization of multiple devices
Temporal Key Integrity Protocol (TKIP) supports multicast and broadcast messages by using a group temporal key that is shared by all the devices in the same wireless network. This key is used to encrypt and decrypt the messages that are sent to multiple recipients at once. TKIP also supports unicast messages by using a pairwise temporal key that is unique for each device and session. TKIP does not support coordination of IEEE 802.11 protocols, as it is a protocol itself that was designed to replace WEP. TKIP was designed to run on legacy WEP-capable hardware, but it does not support WEP systems as such, as it provides more security features than WEP. TKIP does not support synchronization of multiple devices, as it does not provide any clock or time synchronization mechanism. References: 1: Temporal Key Integrity Protocol - Wikipedia; 2: Wi-Fi Security: Should You Use WPA2-AES, WPA2-TKIP, or Both? - How-To Geek
Which of the following is a limitation of the Common Vulnerability Scoring System (CVSS) as it relates to conducting code review?
It has normalized severity ratings.
It has many worksheets and practices to implement.
It aims to calculate the risk of published vulnerabilities.
It requires a robust risk management framework to be put in place.
The Common Vulnerability Scoring System (CVSS) is a framework that provides a standardized and consistent way of measuring and communicating the severity and risk of published vulnerabilities. CVSS assigns a numerical score and a vector string to each vulnerability, based on various metrics and formulas. CVSS is a useful tool for prioritizing the remediation of vulnerabilities, but it has some limitations as it relates to conducting code review. One of the limitations is that CVSS aims to calculate the risk of published vulnerabilities, which means that it does not cover the vulnerabilities that are not yet discovered or disclosed. Code review, on the other hand, is a process of examining the source code of a software to identify and fix any errors, bugs, or vulnerabilities that may exist in the code. Code review can help find vulnerabilities that are not yet published, and therefore not scored by CVSS. References: CISSP For Dummies, 7th Edition, Chapter 8, page 222; Official (ISC)2 CISSP CBK Reference, 5th Edition, Chapter 8, page 465.
An advantage of link encryption in a communications network is that it
makes key management and distribution easier.
protects data from start to finish through the entire network.
improves the efficiency of the transmission.
encrypts all information, including headers and routing information.
An advantage of link encryption in a communications network is that it encrypts all information, including headers and routing information. Link encryption is a type of encryption that is applied at the data link layer of the OSI model, and encrypts the entire packet or frame as it travels from one node to another1. Link encryption can protect the confidentiality and integrity of the data, as well as the identity and location of the nodes. Link encryption does not make key management and distribution easier, as it requires each node to have a separate key for each link. Link encryption does not protect data from start to finish through the entire network, as it only encrypts the data while it is in transit, and decrypts it at each node. Link encryption does not improve the efficiency of the transmission, as it adds overhead and latency to the communication. References: 1: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7, page 419.
Which of the following is the BEST way to verify the integrity of a software patch?
Cryptographic checksums
Version numbering
Automatic updates
Vendor assurance
The best way to verify the integrity of a software patch is to use cryptographic checksums. Cryptographic checksums are mathematical values that are computed from the data in the software patch using a hash function or an algorithm. Cryptographic checksums can be used to compare the original and the downloaded or installed version of the software patch, and to detect any alteration, corruption, or tampering of the data. Cryptographic checksums are also known as hashes, digests, or fingerprints, and they are often provided by the software vendor along with the software patch. References: 1: What is a Checksum and How to Calculate a Checksum; 2: How to Verify File Integrity Using Hashes
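A minimal sketch of checksum verification in Python; the patch file name and the vendor-published SHA-256 value are placeholders, not real artifacts.
```python
# Verify a downloaded patch against a vendor-published SHA-256 digest.
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)                    # hash the file in chunks
    return h.hexdigest()

expected = "0123abcd..."                       # value published by the vendor (placeholder)
actual = sha256_of("patch-1.2.3.bin")          # hypothetical downloaded file
print("integrity OK" if actual == expected else "checksum mismatch - do not install")
```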
An auditor carrying out a compliance audit requests passwords that are encrypted in the system to verify that the passwords are compliant with policy. Which of the following is the BEST response to the auditor?
Provide the encrypted passwords and analysis tools to the auditor for analysis.
Analyze the encrypted passwords for the auditor and show them the results.
Demonstrate that non-compliant passwords cannot be created in the system.
Demonstrate that non-compliant passwords cannot be encrypted in the system.
The best response to the auditor is to demonstrate that the system enforces the password policy and does not allow non-compliant passwords to be created. This way, the auditor can verify the compliance without compromising the confidentiality or integrity of the encrypted passwords. Providing the encrypted passwords and analysis tools to the auditor (A) may expose the passwords to unauthorized access or modification. Analyzing the encrypted passwords for the auditor and showing them the results (B) may not be sufficient to convince the auditor of the compliance, as the results could be manipulated or falsified. Demonstrating that non-compliant passwords cannot be encrypted in the system (D) is not a valid response, as encryption does not depend on the compliance of the passwords. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, page 241; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, page 303.
An organization is designing a large enterprise-wide document repository system. They plan to have several different classification level areas with increasing levels of controls. The BEST way to ensure document confidentiality in the repository is to
encrypt the contents of the repository and document any exceptions to that requirement.
utilize an Intrusion Detection System (IDS) set to drop connections if too many requests for documents are detected.
keep individuals with access to high security areas from saving those documents into lower security areas.
require individuals with access to the system to sign Non-Disclosure Agreements (NDA).
The best way to ensure document confidentiality in the repository is to encrypt the contents of the repository and document any exceptions to that requirement. Encryption is the process of transforming the information into an unreadable form using a secret key or algorithm. Encryption protects the confidentiality of the information by preventing unauthorized access or disclosure, even if the repository is compromised or breached. Encryption also provides integrity and authenticity of the information by ensuring that it has not been modified or tampered with. Documenting any exceptions to the encryption requirement is also important to justify the reasons and risks for not encrypting certain information, and to apply alternative controls if needed. References: 9: What Is a Document Repository and What Are the Benefits of Using One; 3: What is a document repository and why you should have one
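A minimal sketch of encrypting repository contents at rest using symmetric (Fernet) encryption from the Python cryptography package; key management, key rotation, and the exception register are out of scope, and the sample document is illustrative.
```python
# Encrypt document contents before they are written to the repository.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice, generated and held by a key manager
f = Fernet(key)

plaintext = b"Quarterly salary report - RESTRICTED"
ciphertext = f.encrypt(plaintext)  # what actually gets stored in the repository

# Only holders of the key can read the stored document back:
assert f.decrypt(ciphertext) == plaintext
```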
What is the ultimate objective of information classification?
To assign responsibility for mitigating the risk to vulnerable systems
To ensure that information assets receive an appropriate level of protection
To recognize that the value of any item of information may change over time
To recognize the optimal number of classification categories and the benefits to be gained from their use
The ultimate objective of information classification is to ensure that information assets receive an appropriate level of protection in accordance with their importance and sensitivity to the organization. Information classification is the process of assigning labels or categories to information based on criteria such as confidentiality, integrity, availability, and value. Information classification helps the organization to identify the risks and threats to the information, and to apply the necessary controls and safeguards to protect it. Information classification also helps the organization to comply with the legal, regulatory, and contractual obligations related to the information. References: 1: Information Classification - Why it matters?; 2: ISO 27001 & Information Classification: Free 4-Step Guide
A software scanner identifies a region within a binary image having high entropy. What does this MOST likely indicate?
Encryption routines
Random number generator
Obfuscated code
Botnet command and control
Obfuscated code is a type of code that is deliberately written or modified to make it difficult to understand or reverse engineer. Obfuscation techniques can include changing variable names, removing comments, adding irrelevant code, or encrypting parts of the code. Obfuscated code can have high entropy, which means that it has a high degree of randomness or unpredictability. A software scanner can identify a region within a binary image having high entropy as a possible indication of obfuscated code. Encryption routines, random number generators, and botnet command and control are not necessarily related to obfuscated code, and may not have high entropy. References: 3: Official (ISC)2 CISSP CBK Reference, 5th Edition, Chapter 8, page 467; 4: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, page 508.
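A sketch of how such a scanner might flag high-entropy regions, using Shannon entropy over fixed-size windows; the window size and threshold are arbitrary choices for illustration.
```python
# Shannon entropy close to 8 bits/byte suggests encrypted, compressed,
# or packed/obfuscated content.
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def high_entropy_regions(image: bytes, window=1024, threshold=7.5):
    for offset in range(0, len(image) - window + 1, window):
        region = image[offset:offset + window]
        e = shannon_entropy(region)
        if e >= threshold:
            yield offset, e       # candidate obfuscated/encrypted region

# Random bytes score near 8 bits/byte; plain ASCII text scores much lower.
print(shannon_entropy(os.urandom(4096)))                 # roughly 7.9
print(shannon_entropy(b"all work and no play " * 200))   # roughly 3-4
```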
Which security service is served by the process of encrypting plaintext with the sender’s private key and decrypting ciphertext with the sender’s public key?
Confidentiality
Integrity
Identification
Availability
The security service that is served by the process of encrypting plaintext with the sender’s private key and decrypting ciphertext with the sender’s public key is identification. Identification is the process of verifying the identity of a person or entity that claims to be who or what it is. Identification can be achieved by using public key cryptography and digital signatures, which are based on the process of encrypting plaintext with the sender’s private key and decrypting ciphertext with the sender’s public key. This process works as follows: the sender encrypts the plaintext (in practice, a hash of the message) with their private key; the receiver decrypts the result with the sender’s public key; if the decryption succeeds and matches the message, the receiver knows the message was produced by the holder of that private key.
The process of encrypting plaintext with the sender’s private key and decrypting ciphertext with the sender’s public key serves identification because it ensures that only the sender can produce a valid ciphertext that can be decrypted by the receiver, and that the receiver can verify the sender’s identity by using the sender’s public key. This process also provides non-repudiation, which means that the sender cannot deny sending the message or the receiver cannot deny receiving the message, as the ciphertext serves as a proof of origin and delivery.
The other options are not the security services that are served by the process of encrypting plaintext with the sender’s private key and decrypting ciphertext with the sender’s public key. Confidentiality is the process of ensuring that the message is only readable by the intended parties, and it is achieved by encrypting plaintext with the receiver’s public key and decrypting ciphertext with the receiver’s private key. Integrity is the process of ensuring that the message is not modified or corrupted during transmission, and it is achieved by using hash functions and message authentication codes. Availability is the process of ensuring that the message is accessible and usable by the authorized parties, and it is achieved by using redundancy, backup, and recovery mechanisms.
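In practice, "encrypting with the private key" is realized as a digital signature. A minimal sketch with the Python cryptography package, using common but assumed parameter choices (RSA-2048, PSS padding, SHA-256):
```python
# Sign with the private key; anyone with the public key can verify the origin.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"wire transfer approved"
signature = private_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

try:
    public_key.verify(
        signature,
        message,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )
    print("signature valid: message came from the private-key holder")
except InvalidSignature:
    print("signature invalid")
```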
The use of private and public encryption keys is fundamental in the implementation of which of the following?
Diffie-Hellman algorithm
Secure Sockets Layer (SSL)
Advanced Encryption Standard (AES)
Message Digest 5 (MD5)
The use of private and public encryption keys is fundamental in the implementation of Secure Sockets Layer (SSL). SSL is a protocol that provides secure communication over the Internet by using public key cryptography and digital certificates. SSL works as follows: the client and the server exchange hello messages to agree on the protocol version and cipher suite; the server presents its digital certificate containing its public key; the client validates the certificate and uses the server’s public key to protect the key exchange that establishes a shared session key; the rest of the session is then encrypted with that symmetric session key.
The use of private and public encryption keys is fundamental in the implementation of SSL because it enables the authentication of the parties, the establishment of the shared secret key, and the protection of the data from eavesdropping, tampering, and replay attacks.
The other options are not protocols or algorithms that use private and public encryption keys in their implementation. Diffie-Hellman algorithm is a method for generating a shared secret key between two parties, but it does not use private and public encryption keys, but rather public and private parameters. Advanced Encryption Standard (AES) is a symmetric encryption algorithm that uses the same key for encryption and decryption, but it does not use private and public encryption keys, but rather a single secret key. Message Digest 5 (MD5) is a hash function that produces a fixed-length output from a variable-length input, but it does not use private and public encryption keys, but rather a one-way mathematical function.
Which of the following mobile code security models relies only on trust?
Code signing
Class authentication
Sandboxing
Type safety
Code signing is the mobile code security model that relies only on trust. Mobile code is a type of software that can be transferred from one system to another and executed without installation or compilation. Mobile code can be used for various purposes, such as web applications, applets, scripts, macros, etc. Mobile code can also pose various security risks, such as malicious code, unauthorized access, data leakage, etc. Mobile code security models are the techniques that are used to protect the systems and users from the threats of mobile code. Code signing is a mobile code security model that relies only on trust, which means that the security of the mobile code depends on the reputation and credibility of the code provider. Code signing works as follows: the code provider hashes the mobile code and signs the hash with its private key, attaching the signature and its certificate to the code; the code consumer verifies the signature with the provider’s public key and validates the certificate chain; if both are valid, the consumer decides whether to trust the provider and run the code.
Code signing relies only on trust because it does not enforce any security restrictions or controls on the mobile code, but rather leaves the decision to the code consumer. Code signing also does not guarantee the quality or functionality of the mobile code, but rather the authenticity and integrity of the code provider. Code signing can be effective if the code consumer knows and trusts the code provider, and if the code provider follows the security standards and best practices. However, code signing can also be ineffective if the code consumer is unaware or careless of the code provider, or if the code provider is compromised or malicious.
The other options are not mobile code security models that rely only on trust, but rather on other techniques that limit or isolate the mobile code. Class authentication is a mobile code security model that verifies the permissions and capabilities of the mobile code based on its class or type, and allows or denies the execution of the mobile code accordingly. Sandboxing is a mobile code security model that executes the mobile code in a separate and restricted environment, and prevents the mobile code from accessing or affecting the system resources or data. Type safety is a mobile code security model that checks the validity and consistency of the mobile code, and prevents the mobile code from performing illegal or unsafe operations.
Which component of the Security Content Automation Protocol (SCAP) specification contains the data required to estimate the severity of vulnerabilities identified automated vulnerability assessments?
Common Vulnerabilities and Exposures (CVE)
Common Vulnerability Scoring System (CVSS)
Asset Reporting Format (ARF)
Open Vulnerability and Assessment Language (OVAL)
The component of the Security Content Automation Protocol (SCAP) specification that contains the data required to estimate the severity of vulnerabilities identified by automated vulnerability assessments is the Common Vulnerability Scoring System (CVSS). CVSS is a framework that provides a standardized and objective way to measure and communicate the characteristics and impacts of vulnerabilities. CVSS consists of three metric groups: base, temporal, and environmental. The base metric group captures the intrinsic and fundamental properties of a vulnerability that are constant over time and across user environments. The temporal metric group captures the characteristics of a vulnerability that change over time, such as the availability and effectiveness of exploits, patches, and workarounds. The environmental metric group captures the characteristics of a vulnerability that are relevant and unique to a user’s environment, such as the configuration and importance of the affected system. Each metric group has a set of metrics that are assigned values based on the vulnerability’s attributes. The values are then combined using a formula to produce a numerical score that ranges from 0 to 10, where 0 means no impact and 10 means critical impact. The score can also be translated into a qualitative rating that ranges from none to low, medium, high, and critical. CVSS provides a consistent and comprehensive way to estimate the severity of vulnerabilities and prioritize their remediation.
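For illustration, the CVSS v3.x qualitative scale that translates a numerical base score into a severity rating can be expressed as a small lookup; the sample scores passed in are arbitrary.
```python
# CVSS v3.x qualitative severity ratings by score range.
def cvss_v3_rating(score: float) -> str:
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

for s in (0.0, 3.1, 5.3, 7.5, 9.8):
    print(s, cvss_v3_rating(s))
```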
The other options are not components of the SCAP specification that contain the data required to estimate the severity of vulnerabilities identified by automated vulnerability assessments, but rather components that serve other purposes. Common Vulnerabilities and Exposures (CVE) is a component that provides a standardized and unique identifier and description for each publicly known vulnerability. CVE facilitates the sharing and comparison of vulnerability information across different sources and tools. Asset Reporting Format (ARF) is a component that provides a standardized and extensible format for expressing the information about the assets and their characteristics, such as configuration, vulnerabilities, and compliance. ARF enables the aggregation and correlation of asset information from different sources and tools. Open Vulnerability and Assessment Language (OVAL) is a component that provides a standardized and expressive language for defining and testing the state of a system for the presence of vulnerabilities, configuration issues, patches, and other aspects. OVAL enables the automation and interoperability of vulnerability assessment and management.
Which technique can be used to make an encryption scheme more resistant to a known plaintext attack?
Hashing the data before encryption
Hashing the data after encryption
Compressing the data after encryption
Compressing the data before encryption
Compressing the data before encryption is a technique that can be used to make an encryption scheme more resistant to a known plaintext attack. A known plaintext attack is a type of cryptanalysis where the attacker has access to some pairs of plaintext and ciphertext encrypted with the same key, and tries to recover the key or decrypt other ciphertexts. A known plaintext attack can exploit the statistical properties or patterns of the plaintext or the ciphertext to reduce the search space or guess the key. Compressing the data before encryption can reduce the redundancy and increase the entropy of the plaintext, making it harder for the attacker to find any correlations or similarities between the plaintext and the ciphertext. Compressing the data before encryption can also reduce the size of the plaintext, making it more difficult for the attacker to obtain enough plaintext-ciphertext pairs for a successful attack.
The other options are not techniques that can be used to make an encryption scheme more resistant to a known plaintext attack, but rather techniques that can introduce other security issues or inefficiencies. Hashing the data before encryption is not a useful technique, as hashing is a one-way function that cannot be reversed, and the encrypted hash cannot be decrypted to recover the original data. Hashing the data after encryption is also not a useful technique, as hashing does not add any security to the encryption, and the hash can be easily computed by anyone who has access to the ciphertext. Compressing the data after encryption is not a recommended technique, as compression algorithms usually work better on uncompressed data, and compressing the ciphertext can introduce errors or vulnerabilities that can compromise the encryption.
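A minimal compress-then-encrypt sketch in Python using zlib and Fernet; the sample plaintext is deliberately redundant. As a caveat, compressing attacker-influenced data before encryption can enable compression side-channel attacks (for example CRIME/BREACH) in interactive protocols, so the technique suits static data better.
```python
# Compress redundant plaintext, then encrypt the compressed output.
import zlib
from cryptography.fernet import Fernet

plaintext = b"ACCOUNT=0000;BALANCE=0000;" * 500   # very redundant, low entropy
compressed = zlib.compress(plaintext, level=9)    # removes patterns, shrinks the data

key = Fernet.generate_key()
f = Fernet(key)
ciphertext = f.encrypt(compressed)

print(len(plaintext), len(compressed), len(ciphertext))

# Decryption reverses both steps:
recovered = zlib.decompress(f.decrypt(ciphertext))
assert recovered == plaintext
```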
Who in the organization is accountable for classification of data information assets?
Data owner
Data architect
Chief Information Security Officer (CISO)
Chief Information Officer (CIO)
The person in the organization who is accountable for the classification of data information assets is the data owner. The data owner is the person or entity that has the authority and responsibility for the creation, collection, processing, and disposal of a set of data. The data owner is also responsible for defining the purpose, value, and classification of the data, as well as the security requirements and controls for the data. The data owner should be able to determine the impact of the data on the mission of the organization, which means assessing the potential consequences of losing, compromising, or disclosing the data. The impact of the data on the mission of the organization is one of the main criteria for data classification, which helps to establish the appropriate level of protection and handling for the data. The data owner should also ensure that the data is properly labeled, stored, accessed, shared, and destroyed according to the data classification policy and procedures.
The other options are not the persons in the organization who are accountable for the classification of data information assets, but rather persons who have other roles or functions related to data management. The data architect is the person or entity that designs and models the structure, format, and relationships of the data, as well as the data standards, specifications, and lifecycle. The data architect supports the data owner by providing technical guidance and expertise on the data architecture and quality. The Chief Information Security Officer (CISO) is the person or entity that oversees the security strategy, policies, and programs of the organization, as well as the security performance and incidents. The CISO supports the data owner by providing security leadership and governance, as well as ensuring the compliance and alignment of the data security with the organizational objectives and regulations. The Chief Information Officer (CIO) is the person or entity that manages the information technology (IT) resources and services of the organization, as well as the IT strategy and innovation. The CIO supports the data owner by providing IT management and direction, as well as ensuring the availability, reliability, and scalability of the IT infrastructure and applications.
What is the second phase of Public Key Infrastructure (PKI) key/certificate life-cycle management?
Implementation Phase
Initialization Phase
Cancellation Phase
Issued Phase
The second phase of Public Key Infrastructure (PKI) key/certificate life-cycle management is the initialization phase. PKI is a system that uses public key cryptography and digital certificates to provide authentication, confidentiality, integrity, and non-repudiation for electronic transactions. PKI key/certificate life-cycle management is the process of managing the creation, distribution, usage, storage, revocation, and expiration of keys and certificates in a PKI system. The key/certificate life-cycle management consists of six phases: pre-certification, initialization, certification, operational, suspension, and termination. The initialization phase is the second phase, where the key pair and the certificate request are generated by the end entity or the registration authority (RA). The initialization phase involves generating the key pair on the end entity’s system or token, collecting the identity information and attributes required for the certificate, and constructing the certificate request that will be submitted to the certificate authority (CA) in the certification phase.
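A minimal sketch of what the initialization phase produces: a key pair and a certificate signing request (CSR), generated here with the Python cryptography package; the subject name and organization are illustrative.
```python
# Generate a key pair and a CSR to submit to the CA in the certification phase.
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, "alice@example.org"),
        x509.NameAttribute(NameOID.ORGANIZATION_NAME, "Example Corp"),
    ]))
    .sign(key, hashes.SHA256())
)

# The CSR (not the private key) is what gets sent to the CA;
# the private key never leaves the end entity.
print(csr.public_bytes(serialization.Encoding.PEM).decode())
```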
The other options are not the second phase of PKI key/certificate life-cycle management, but rather other phases. The implementation phase is not a phase of PKI key/certificate life-cycle management, but rather a phase of PKI system deployment, where the PKI components and policies are installed and configured. The cancellation phase is not a phase of PKI key/certificate life-cycle management, but rather a possible outcome of the termination phase, where the key pair and the certificate are permanently revoked and deleted. The issued phase is not a phase of PKI key/certificate life-cycle management, but rather a possible outcome of the certification phase, where the CA verifies and approves the certificate request and issues the certificate to the end entity or the RA.
A manufacturing organization wants to establish a Federated Identity Management (FIM) system with its 20 different supplier companies. Which of the following is the BEST solution for the manufacturing organization?
Trusted third-party certification
Lightweight Directory Access Protocol (LDAP)
Security Assertion Markup language (SAML)
Cross-certification
Security Assertion Markup Language (SAML) is the best solution for the manufacturing organization that wants to establish a Federated Identity Management (FIM) system with its 20 different supplier companies. FIM is a process that allows the sharing and recognition of identities across different organizations that have a trust relationship. FIM enables the users of one organization to access the resources or services of another organization without having to create or maintain multiple accounts or credentials. FIM can provide several benefits, such as single sign-on across organizational boundaries, reduced administrative overhead for account and credential management, an improved user experience, and consistent enforcement of access policies across the participating organizations.
SAML is a standard protocol that supports FIM by allowing the exchange of authentication and authorization information between different parties. SAML uses XML-based messages, called assertions, to convey the identity, attributes, and entitlements of a user to a service provider. SAML defines three roles for the parties involved in FIM: the principal (the user requesting access), the identity provider (IdP), which authenticates the user and issues assertions about them, and the service provider (SP), which consumes those assertions and grants or denies access to its resources.
SAML works as follows: the user attempts to access a resource at the service provider; the service provider redirects the user to the identity provider with an authentication request; the identity provider authenticates the user and returns a digitally signed assertion containing the user’s identity, attributes, and entitlements; the service provider validates the assertion and grants the user access without requiring a separate login.
SAML is the best solution for the manufacturing organization that wants to establish a FIM system with its 20 different supplier companies, because it can enable the seamless and secure access to the resources or services across the different organizations, without requiring the users to create or maintain multiple accounts or credentials. SAML can also provide interoperability and compatibility between different platforms and technologies, as it is based on a standard and open protocol.
The other options are not the best solutions for the manufacturing organization that wants to establish a FIM system with its 20 different supplier companies, but rather solutions that have other limitations or drawbacks. Trusted third-party certification is a process that involves a third party, such as a certificate authority (CA), that issues and verifies digital certificates that contain the public key and identity information of a user or an entity. Trusted third-party certification can provide authentication and encryption for the communication between different parties, but it does not provide authorization or entitlement information for the access to the resources or services. Lightweight Directory Access Protocol (LDAP) is a protocol that allows the access and management of directory services, such as Active Directory, that store the identity and attribute information of users and entities. LDAP can provide a centralized and standardized way to store and retrieve identity and attribute information, but it does not provide a mechanism to exchange or federate the information across different organizations. Cross-certification is a process that involves two or more CAs that establish a trust relationship and recognize each other’s certificates. Cross-certification can extend the trust and validity of the certificates across different domains or organizations, but it does not provide a mechanism to exchange or federate the identity, attribute, or entitlement information.
Which of the following BEST describes an access control method utilizing cryptographic keys derived from a smart card private key that is embedded within mobile devices?
Derived credential
Temporary security credential
Mobile device credentialing service
Digest authentication
Derived credential is the best description of an access control method utilizing cryptographic keys derived from a smart card private key that is embedded within mobile devices. A smart card is a device that contains a microchip that stores a private key and a digital certificate that are used for authentication and encryption. A smart card is typically inserted into a reader that is attached to a computer or a terminal, and the user enters a personal identification number (PIN) to unlock the smart card and access the private key and the certificate. A smart card can provide a high level of security and convenience for the user, as it implements a two-factor authentication method that combines something the user has (the smart card) and something the user knows (the PIN).
However, a smart card may not be compatible or convenient for use with mobile devices, such as smartphones or tablets, that do not have a smart card reader or a USB port. To address this issue, a derived credential is a solution that allows the user to use a mobile device as an alternative to a smart card for authentication and encryption. A derived credential is a cryptographic key and a certificate that are derived from the smart card private key and certificate, and that are stored on the mobile device. A derived credential works as follows: the user first authenticates to an enrollment or credential management service with the smart card; a new key pair is then generated on (or provisioned to) the mobile device, and a certificate bound to the user’s identity is issued for it; the derived key and certificate are stored in a protected area of the device, such as a secure element or keystore, and unlocked with a PIN or a biometric; the mobile device can then be used in place of the smart card for authentication and encryption.
A derived credential can provide a secure and convenient way to use a mobile device as an alternative to a smart card for authentication and encryption, as it implements a two-factor authentication method that combines something the user has (the mobile device) and something the user is (the biometric feature). A derived credential can also comply with the standards and policies for the use of smart cards, such as the Personal Identity Verification (PIV) or the Common Access Card (CAC) programs.
The other options are not the best descriptions of an access control method utilizing cryptographic keys derived from a smart card private key that is embedded within mobile devices, but rather descriptions of other methods or concepts. Temporary security credential is a method that involves issuing a short-lived credential, such as a token or a password, that can be used for a limited time or a specific purpose. Temporary security credential can provide a flexible and dynamic way to grant access to the users or entities, but it does not involve deriving a cryptographic key from a smart card private key. Mobile device credentialing service is a concept that involves providing a service that can issue, manage, or revoke credentials for mobile devices, such as certificates, tokens, or passwords. Mobile device credentialing service can provide a centralized and standardized way to control the access of mobile devices, but it does not involve deriving a cryptographic key from a smart card private key. Digest authentication is a method that involves using a hash function, such as MD5, to generate a digest or a fingerprint of the user’s credentials, such as the username and password, and sending it to the server for verification. Digest authentication can provide a more secure way to authenticate the user than the basic authentication, which sends the credentials in plain text, but it does not involve deriving a cryptographic key from a smart card private key.
Users require access rights that allow them to view the average salary of groups of employees. Which control would prevent the users from obtaining an individual employee’s salary?
Limit access to predefined queries
Segregate the database into a small number of partitions each with a separate security level
Implement Role Based Access Control (RBAC)
Reduce the number of people who have access to the system for statistical purposes
Limiting access to predefined queries is the control that would prevent the users from obtaining an individual employee’s salary, if they only require access rights that allow them to view the average salary of groups of employees. A query is a request for information from a database, which can be expressed in a structured query language (SQL) or a graphical user interface (GUI). A query can specify the criteria, conditions, and operations for selecting, filtering, sorting, grouping, and aggregating the data from the database. A predefined query is a query that has been created and stored in advance by the database administrator or the data owner, and that can be executed by the authorized users without any modification. A predefined query can provide several benefits, such as enforcing consistent and validated query logic, restricting users to only the data and operations that the query exposes, preventing ad hoc queries that could reveal sensitive details, and simplifying the auditing of what information users can retrieve.
Limiting access to predefined queries is the control that would prevent the users from obtaining an individual employee’s salary, if they only require access rights that allow them to view the average salary of groups of employees, because it can ensure that the users can only access the data that is relevant and necessary for their tasks, and that they cannot access or manipulate the data that is beyond their scope or authority. For example, a predefined query can be created and stored that calculates and displays the average salary of groups of employees based on certain criteria, such as department, position, or experience. The users who need to view this information can execute this predefined query, but they cannot modify it or create their own queries that might reveal the individual employee’s salary or other sensitive data.
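A minimal sketch of the idea using SQLite: reporting users are limited to a predefined view that exposes only departmental averages, with a minimum group size to resist inference about any one person; the table, data, and threshold are illustrative.
```python
# Expose only a predefined aggregate query (a view), never the base table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employee (name TEXT, department TEXT, salary INTEGER);
    INSERT INTO employee VALUES
        ('Alice', 'Engineering', 120000),
        ('Bob',   'Engineering', 110000),
        ('Carol', 'Engineering',  95000),
        ('Dave',  'Finance',      90000),
        ('Erin',  'Finance',      88000),
        ('Frank', 'Finance',      86000);

    -- The only object reporting users may query: averages per department,
    -- and only for groups large enough to resist inference about one person.
    CREATE VIEW avg_salary_by_department AS
        SELECT department, AVG(salary) AS avg_salary, COUNT(*) AS headcount
        FROM employee
        GROUP BY department
        HAVING COUNT(*) >= 3;
""")

for row in conn.execute("SELECT * FROM avg_salary_by_department"):
    print(row)   # e.g. ('Engineering', 108333.33..., 3)
```
In a full database system, permissions would be granted on the view only, so users cannot write their own queries against the underlying salary table.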
The other options are not the controls that would prevent the users from obtaining an individual employee’s salary, if they only require access rights that allow them to view the average salary of groups of employees, but rather controls that have other purposes or effects. Segregating the database into a small number of partitions each with a separate security level is a control that would improve the performance and security of the database by dividing it into smaller and manageable segments that can be accessed and processed independently and concurrently. However, this control would not prevent the users from obtaining an individual employee’s salary, if they have access to the partition that contains the salary data, and if they can create or modify their own queries. Implementing Role Based Access Control (RBAC) is a control that would enforce the access rights and permissions of the users based on their roles or functions within the organization, rather than their identities or attributes. However, this control would not prevent the users from obtaining an individual employee’s salary, if their roles or functions require them to access the salary data, and if they can create or modify their own queries. Reducing the number of people who have access to the system for statistical purposes is a control that would reduce the risk and impact of unauthorized access or disclosure of the sensitive data by minimizing the exposure and distribution of the data. However, this control would not prevent the users from obtaining an individual employee’s salary, if they are among the people who have access to the system, and if they can create or modify their own queries.
What is the BEST approach for controlling access to highly sensitive information when employees have the same level of security clearance?
Audit logs
Role-Based Access Control (RBAC)
Two-factor authentication
Application of least privilege
Applying the principle of least privilege is the best approach for controlling access to highly sensitive information when employees have the same level of security clearance. The principle of least privilege is a security concept that states that every user or process should have the minimum amount of access rights and permissions that are necessary to perform their tasks or functions, and nothing more. The principle of least privilege can provide several benefits, such as reducing the attack surface, limiting the damage that a compromised or misused account can cause, supporting need-to-know access to sensitive information, and simplifying the review and audit of access rights.
Applying the principle of least privilege is the best approach for controlling access to highly sensitive information when employees have the same level of security clearance, because it can ensure that the employees can only access the information that is relevant and necessary for their tasks or functions, and that they cannot access or manipulate the information that is beyond their scope or authority. For example, if the highly sensitive information is related to a specific project or department, then only the employees who are involved in that project or department should have access to that information, and not the employees who have the same level of security clearance but are not involved in that project or department.
The other options are not the best approaches for controlling access to highly sensitive information when employees have the same level of security clearance, but rather approaches that have other purposes or effects. Audit logs are records that capture and store the information about the events and activities that occur within a system or a network, such as the access and usage of the sensitive data. Audit logs can provide a reactive and detective layer of security by enabling the monitoring and analysis of the system or network behavior, and facilitating the investigation and response of the incidents. However, audit logs cannot prevent or reduce the access or disclosure of the sensitive information, but rather provide evidence or clues after the fact. Role-Based Access Control (RBAC) is a method that enforces the access rights and permissions of the users based on their roles or functions within the organization, rather than their identities or attributes. RBAC can provide a granular and dynamic layer of security by defining and assigning the roles and permissions according to the organizational structure and policies. However, RBAC cannot control the access to highly sensitive information when employees have the same level of security clearance and the same role or function within the organization, but rather rely on other criteria or mechanisms. Two-factor authentication is a technique that verifies the identity of the users by requiring them to provide two pieces of evidence or factors, such as something they know (e.g., password, PIN), something they have (e.g., token, smart card), or something they are (e.g., fingerprint, face). Two-factor authentication can provide a strong and preventive layer of security by preventing unauthorized access to the system or network by the users who do not have both factors. However, two-factor authentication cannot control the access to highly sensitive information when employees have the same level of security clearance and the same two factors, but rather rely on other criteria or mechanisms.
An organization plans to purchase a custom software product developed by a small vendor to support its business model. Which unique consideration should be made part of the contractual agreement to address the potential long-term risks associated with creating this dependency?
A source code escrow clause
Right to request an independent review of the software source code
Due diligence form requesting statements of compliance with security requirements
Access to the technical documentation
A source code escrow clause is a unique consideration that should be made part of the contractual agreement when purchasing a custom software product developed by a small vendor to support the business model. A source code escrow clause is a provision that requires the vendor to deposit the source code of the software product with a trusted third party, who will release it to the customer under certain conditions, such as the vendor’s bankruptcy, insolvency, or failure to provide maintenance or support. A source code escrow clause can help to mitigate the potential long-term risks associated with creating a dependency on a small vendor, such as losing access to the software product, being unable to fix bugs or vulnerabilities, or being unable to modify or update the software product. A right to request an independent review of the software source code, a due diligence form requesting statements of compliance with security requirements, and access to the technical documentation are not unique considerations, but common ones that should be included in any software acquisition contract. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, page 65; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 1: Security and Risk Management, page 57.
Which of the following is a responsibility of a data steward?
Ensure alignment of the data governance effort to the organization.
Conduct data governance interviews with the organization.
Document data governance requirements.
Ensure that data decisions and impacts are communicated to the organization.
A responsibility of a data steward is to ensure that data decisions and impacts are communicated to the organization. A data steward is a role or a function that is responsible for managing and maintaining the quality and the usability of the data within a specific data domain or a business area, such as finance, marketing, or human resources. A data steward can provide some benefits for data governance, which is the process of establishing and enforcing the policies and standards for the collection, use, storage, and protection of data, such as enhancing the accuracy and the reliability of the data, preventing or detecting errors or inconsistencies, and supporting the audit and the compliance activities. A data steward can perform various tasks or duties, such as defining and applying data quality rules and standards, resolving data issues and inconsistencies, maintaining metadata and data definitions, coordinating with data owners, data custodians, and data users, and communicating data decisions and impacts to the organization.
Ensuring that data decisions and impacts are communicated to the organization is a responsibility of a data steward, as it can help to ensure the transparency and the accountability of the data governance process, as well as to facilitate the coordination and the cooperation of the data governance stakeholders, such as the data owners, the data custodians, the data users, and the data governance team. Ensuring alignment of the data governance effort to the organization, conducting data governance interviews with the organization, and documenting data governance requirements are not responsibilities of a data steward, although they may be related or possible tasks or duties. Ensuring alignment of the data governance effort to the organization is a responsibility of the data governance team, which is a group of experts or advisors who are responsible for defining and implementing the data governance policies and standards, as well as for overseeing and evaluating the data governance process and performance. Conducting data governance interviews with the organization is a task or a technique that can be used by the data governance team, the data steward, or the data auditor, to collect and analyze the information and the feedback about the data governance process and performance, from the data governance stakeholders, such as the data owners, the data custodians, the data users, or the data consumers. Documenting data governance requirements is a task or a technique that can be used by the data governance team, the data owner, or the data user, to specify and describe the needs and the expectations of the data governance process and performance, such as the data quality, the data security, or the data compliance.
What is the second step in the identity and access provisioning lifecycle?
Provisioning
Review
Approval
Revocation
The identity and access provisioning lifecycle is the process of managing the creation, modification, and termination of user accounts and access rights in an organization. The second step in this lifecycle is approval, which means that the identity and access requests must be authorized by the appropriate managers or administrators before they are implemented. Approval ensures that the principle of least privilege is followed and that only authorized users have access to the required resources.
Which of the following is a benefit in implementing an enterprise Identity and Access Management (IAM) solution?
Password requirements are simplified.
Risk associated with orphan accounts is reduced.
Segregation of duties is automatically enforced.
Data confidentiality is increased.
A benefit in implementing an enterprise Identity and Access Management (IAM) solution is that the risk associated with orphan accounts is reduced. An orphan account is an account that belongs to a user who has left the organization or changed roles, but the account has not been deactivated or deleted. An orphan account poses a security risk, as it can be exploited by unauthorized users or attackers to gain access to the system or data. An enterprise IAM solution is a system that manages the identification, authentication, authorization, and provisioning of users and devices across the organization. An enterprise IAM solution can help to reduce the risk associated with orphan accounts by automating the account lifecycle management, such as creating, updating, suspending, or deleting accounts based on the user status, role, or policy. An enterprise IAM solution can also help to monitor and audit the account activity, and to detect and remediate any orphan accounts. Password requirements are simplified, segregation of duties is automatically enforced, and data confidentiality is increased are all possible benefits or features of an enterprise IAM solution, but they are not the best answer to the question. Password requirements are simplified by an enterprise IAM solution that supports single sign-on (SSO) or federated identity management (FIM), which allow the user to access multiple systems or applications with one set of credentials. Segregation of duties is automatically enforced by an enterprise IAM solution that implements role-based access control (RBAC) or attribute-based access control (ABAC), which grant or deny access to resources based on the user role or attributes. Data confidentiality is increased by an enterprise IAM solution that encrypts or masks the sensitive data, or applies data loss prevention (DLP) or digital rights management (DRM) policies to the data.
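A minimal sketch of the orphan-account reconciliation that an IAM solution automates: compare the authoritative HR roster with the provisioned accounts and flag the difference; the data sets are illustrative.
```python
# Flag accounts that exist in the directory but not in the HR system of record.
hr_active_employees = {"alice", "bob", "carol"}            # authoritative roster
provisioned_accounts = {"alice", "bob", "carol", "dave"}   # accounts in the directory

orphan_accounts = provisioned_accounts - hr_active_employees
for account in sorted(orphan_accounts):
    print(f"orphan account detected, schedule for disable/delete: {account}")
```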
Which of the following is the BEST way to reduce the impact of an externally sourced flood attack?
Have the service provider block the source address.
Have the source service provider block the address.
Block the source address at the firewall.
Block all inbound traffic until the flood ends.
The best way to reduce the impact of an externally sourced flood attack is to have the service provider block the source address. A flood attack is a type of denial-of-service attack that aims to overwhelm the target system or network with a large amount of traffic, such as SYN packets, ICMP packets, or UDP packets. An externally sourced flood attack is a flood attack that originates from outside the target’s network, such as from the internet. Having the service provider block the source address can help to reduce the impact of an externally sourced flood attack, as it can prevent the malicious traffic from reaching the target’s network, and thus conserve the network bandwidth and resources. Having the source service provider block the address, blocking the source address at the firewall, or blocking all inbound traffic until the flood ends are not the best ways to reduce the impact of an externally sourced flood attack, as they may not be feasible, effective, or efficient, respectively. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6: Communication and Network Security, page 745; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 4: Communication and Network Security, page 525.
Unused space in a disk cluster is important in media analysis because it may contain which of the following?
Residual data that has not been overwritten
Hidden viruses and Trojan horses
Information about the File Allocation table (FAT)
Information about patches and upgrades to the system
Unused space in a disk cluster is important in media analysis because it may contain residual data that has not been overwritten. A disk cluster is a fixed-length block of disk space that is used to store files. A file may occupy one or more clusters, depending on its size. If a file is smaller than a cluster, the remaining space in the cluster is called slack space. If a file is deleted, the clusters that were allocated to the file are marked as free or unallocated, but the data in the clusters is not erased. Residual data is the data that remains in the slack space or the unallocated space after a file is created, modified, or deleted. Residual data is important in media analysis because it may contain valuable or sensitive information that can be recovered by using forensic tools or techniques. Residual data may include fragments of previous files, temporary files, cache files, swap files, metadata, passwords, encryption keys, or personal data. Residual data can pose a security risk if the media is reused, recycled, or disposed of without proper sanitization. Hidden viruses and Trojan horses, information about the File Allocation table (FAT), and information about patches and upgrades to the system are not the reasons why unused space in a disk cluster is important in media analysis, although they may be related or relevant concepts. Hidden viruses and Trojan horses are malicious programs that can infect or compromise a system or a network. Hidden viruses and Trojan horses may reside in the unused space in a disk cluster, but they are not the result of file creation, modification, or deletion, and they are not the target of media analysis. Information about the File Allocation table (FAT) is the information that describes how the disk clusters are allocated to the files. Information about the File Allocation table (FAT) is stored in a special area of the disk, not in the unused space in a disk cluster, and it is not the result of file creation, modification, or deletion, and it is not the target of media analysis. Information about patches and upgrades to the system is the information that describes the changes or improvements made to the system software or hardware. Information about patches and upgrades to the system may be stored in the unused space in a disk cluster, but it is not the result of file creation, modification, or deletion, and it is not the target of media analysis.
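As a rough illustration of why unallocated or slack space matters, the following Python sketch scans a raw disk-image file for printable strings, a greatly simplified version of what forensic string-carving tools do (the image filename is a hypothetical example):

import re
import sys

# Very simplified string carving: read a raw image and print ASCII runs of 8+ characters.
# Real forensic tools parse file systems and cluster maps; this sketch only shows that
# readable residual data can survive in "free" or slack space.

PRINTABLE_RUN = re.compile(rb"[ -~]{8,}")  # runs of printable ASCII bytes

def carve_strings(image_path):
    with open(image_path, "rb") as image:
        data = image.read()
    for match in PRINTABLE_RUN.finditer(data):
        print(match.group().decode("ascii"))

if __name__ == "__main__":
    # Usage: python carve.py disk.img   (disk.img is a placeholder name)
    carve_strings(sys.argv[1])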
What does electronic vaulting accomplish?
It protects critical files.
It ensures the fault tolerance of Redundant Array of Independent Disks (RAID) systems
It stripes all database records
It automates the Disaster Recovery Process (DRP)
Electronic vaulting protects critical files. Electronic vaulting is the process of electronically transmitting backup copies of data, usually in batches, to an offsite storage facility or vault over a network, so that critical files can be restored if the primary site or its data is lost. It does not by itself ensure the fault tolerance of RAID systems, stripe database records, or automate the Disaster Recovery Process (DRP).
What is the correct order of steps in an information security assessment?
Place the information security assessment steps on the left next to the numbered boxes on the right in the
correct order.
The correct order of steps in an information security assessment is:
An information security assessment is a process of evaluating the security posture of a system, network, or organization. It involves four main steps:
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7: Security Assessment and Testing, page 853; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 6: Security Assessment and Testing, page 791.
Which of the following is the MOST common method of memory protection?
Compartmentalization
Segmentation
Error correction
Virtual Local Area Network (VLAN) tagging
The most common method of memory protection is segmentation. Segmentation is a technique that divides the memory space into logical segments, such as code, data, stack, and heap. Each segment has its own attributes, such as size, location, access rights, and protection level. Segmentation can help to isolate and protect the memory segments from unauthorized or unintended access, modification, or execution, as well as to prevent memory corruption, overflow, or leakage. Compartmentalization, error correction, and VLAN tagging are not methods of memory protection, but of information protection, data protection, and network protection, respectively. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5: Security Engineering, page 589; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 3: Security Architecture and Engineering, page 370.
Who is responsible for the protection of information when it is shared with or provided to other organizations?
Systems owner
Authorizing Official (AO)
Information owner
Security officer
The information owner is the person who has the authority and responsibility for the information within an Information System (IS). The information owner is responsible for the protection of information when it is shared with or provided to other organizations, such as by defining the classification, sensitivity, retention, and disposal of the information, as well as by approving or denying the access requests and periodically reviewing the access rights. The system owner, the authorizing official, and the security officer are not responsible for the protection of information when it is shared with or provided to other organizations, although they may have roles and responsibilities related to the security and operation of the IS. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, page 48; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 1: Security and Risk Management, page 40.
Which of the following is the BEST reason for writing an information security policy?
To support information security governance
To reduce the number of audit findings
To deter attackers
To implement effective information security controls
The best reason for writing an information security policy is to support information security governance. Information security governance is the process or framework of establishing and enforcing the policies and standards for the protection and management of the information and systems within an organization, as well as for overseeing and evaluating the performance and effectiveness of the information security program and controls. Information security governance can provide several benefits for security, such as enhancing the visibility and accountability of the information security program and controls, preventing or detecting unauthorized or improper activities or changes, and supporting audit and compliance activities. Information security governance involves elements and roles such as a governing board or security steering committee, a chief information security officer (CISO), the information security policies and standards themselves, and the metrics and reporting used to measure the program.
Supporting information security governance is the best reason for writing an information security policy because the policy is the foundation and core of the governance framework: it provides the guidance and direction for the information security program and the information security controls, as well as for the information security stakeholders. Writing an information security policy involves tasks such as defining the scope and objectives of the policy, stating management's intent and requirements, assigning roles and responsibilities, and establishing the process for reviewing, approving, communicating, and enforcing the policy.
To reduce the number of audit findings, to deter attackers, and to implement effective information security controls are not the best reasons for writing an information security policy, although they may be related or possible outcomes or benefits of writing an information security policy. To reduce the number of audit findings is an outcome or a benefit of writing an information security policy, as it implies that the information security policy has helped to improve the performance and the effectiveness of the information security program and the information security controls, as well as to comply with the industry regulations or the best practices, and that the information security policy has supported the audit and the compliance activities, by providing the evidence or the data that can validate or verify the information security program and the information security controls. However, to reduce the number of audit findings is not the best reason for writing an information security policy, as it is not the primary or the most important purpose or objective of writing an information security policy, and it may not be true or applicable for all information security policies.
Which of the following is a direct monetary cost of a security incident?
Morale
Reputation
Equipment
Information
Equipment is a direct monetary cost of a security incident. A direct monetary cost is a cost that can be easily measured and attributed to a specific security incident, such as the cost of repairing or replacing damaged or stolen equipment, the cost of hiring external experts or consultants, the cost of paying fines or penalties, or the cost of compensating the victims or customers. Equipment is a direct monetary cost of a security incident, as the security incident may cause physical or logical damage to the equipment, such as servers, computers, routers, or firewalls, or may result in the loss or theft of the equipment. The cost of equipment can be calculated by estimating the market value, the depreciation value, or the replacement value of the equipment, as well as the cost of installation, configuration, or integration of the equipment. Morale, reputation, and information are not direct monetary costs of a security incident, although they are important and significant costs. Morale is an indirect or intangible cost of a security incident, as it affects the psychological or emotional state of the employees, customers, or stakeholders, and may lead to lower productivity, satisfaction, or loyalty. Reputation is an indirect or intangible cost of a security incident, as it affects the public perception or image of the organization, and may result in loss of trust, confidence, or credibility. Information is an indirect or intangible cost of a security incident, as it affects the value or quality of the data or knowledge of the organization, and may result in loss of confidentiality, integrity, or availability. Indirect or intangible costs are costs that are difficult to measure or quantify, and may have long-term or hidden impacts on the organization.
Which of the following is used by the Point-to-Point Protocol (PPP) to determine packet formats?
Layer 2 Tunneling Protocol (L2TP)
Link Control Protocol (LCP)
Challenge Handshake Authentication Protocol (CHAP)
Packet Transfer Protocol (PTP)
Link Control Protocol (LCP) is used by the Point-to-Point Protocol (PPP) to determine packet formats. PPP is a data link layer protocol that provides a standard method for transporting network layer packets over point-to-point links, such as serial lines, modems, or dial-up connections. PPP supports various network layer protocols, such as IP, IPX, or AppleTalk, and it can encapsulate them in a common frame format. PPP also provides features such as authentication, compression, error detection, and multilink aggregation. LCP is a subprotocol of PPP that is responsible for establishing, configuring, maintaining, and terminating the point-to-point connection. LCP negotiates and agrees on various options and parameters for the PPP link, such as the maximum transmission unit (MTU), the authentication method, the compression method, the error detection method, and the packet format. LCP uses a series of messages, such as configure-request, configure-ack, configure-nak, configure-reject, terminate-request, terminate-ack, code-reject, protocol-reject, echo-request, echo-reply, and discard-request, to communicate and exchange information between the PPP peers.
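As a rough illustration of how LCP messages are structured, the sketch below decodes the code, identifier, and length fields of an LCP packet header as defined in RFC 1661; the sample bytes are a made-up configure-request, not captured traffic:

import struct

# LCP packet header (RFC 1661): Code (1 byte), Identifier (1 byte), Length (2 bytes),
# followed by the negotiated options (e.g., MRU, authentication protocol).
LCP_CODES = {
    1: "Configure-Request", 2: "Configure-Ack", 3: "Configure-Nak",
    4: "Configure-Reject", 5: "Terminate-Request", 6: "Terminate-Ack",
    7: "Code-Reject", 8: "Protocol-Reject", 9: "Echo-Request",
    10: "Echo-Reply", 11: "Discard-Request",
}

def parse_lcp_header(packet: bytes):
    code, identifier, length = struct.unpack("!BBH", packet[:4])
    return LCP_CODES.get(code, "Unknown"), identifier, length

# Hypothetical Configure-Request: code=1, id=0x2A, length=8, followed by an MRU option (1500).
sample = bytes([0x01, 0x2A, 0x00, 0x08, 0x01, 0x04, 0x05, 0xDC])
print(parse_lcp_header(sample))   # ('Configure-Request', 42, 8)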
The other options are not used by PPP to determine packet formats, but rather for other purposes. Layer 2 Tunneling Protocol (L2TP) is a tunneling protocol that allows the creation of virtual private networks (VPNs) over public networks, such as the Internet. L2TP encapsulates PPP frames in IP datagrams and sends them across the tunnel between two L2TP endpoints. L2TP does not determine the packet format of PPP, but rather uses it as a payload. Challenge Handshake Authentication Protocol (CHAP) is an authentication protocol that is used by PPP to verify the identity of the remote peer before allowing access to the network. CHAP uses a challenge-response mechanism that involves a random number (nonce) and a hash function to prevent replay attacks. CHAP does not determine the packet format of PPP, but rather uses it as a transport. Packet Transfer Protocol (PTP) is not a valid option, as there is no such protocol with this name. There is a Point-to-Point Protocol over Ethernet (PPPoE), which is a protocol that encapsulates PPP frames in Ethernet frames and allows the use of PPP over Ethernet networks. PPPoE does not determine the packet format of PPP, but rather uses it as a payload.
What is the purpose of an Internet Protocol (IP) spoofing attack?
To send excessive amounts of data to a process, making it unpredictable
To intercept network traffic without authorization
To disguise the destination address from a target’s IP filtering devices
To convince a system that it is communicating with a known entity
The purpose of an Internet Protocol (IP) spoofing attack is to convince a system that it is communicating with a known entity. IP spoofing is a technique that involves creating and sending IP packets with a forged source IP address, which is usually the IP address of a trusted or authorized host. IP spoofing can be used for various malicious purposes, such as launching denial-of-service (DoS) or distributed denial-of-service (DDoS) attacks with hard-to-trace traffic, hijacking or injecting packets into established TCP sessions, and bypassing IP address-based authentication or filtering by impersonating a trusted host.
The purpose of IP spoofing is to convince a system that it is communicating with a known entity, because it allows the attacker to evade detection, avoid responsibility, and exploit trust relationships.
The other options are not the main purposes of IP spoofing, but rather the possible consequences or methods of IP spoofing. To send excessive amounts of data to a process, making it unpredictable is a possible consequence of IP spoofing, as it can cause a DoS or DDoS attack. To intercept network traffic without authorization is a possible method of IP spoofing, as it can be used to hijack or intercept a TCP session. To disguise the destination address from a target’s IP filtering devices is not a valid option, as IP spoofing involves forging the source address, not the destination address.
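For illustration, a spoofed packet can be described with the Scapy library, which allows an arbitrary source address to be set in the IP header; the addresses below are documentation-range placeholders, and actually transmitting such traffic is appropriate only in an authorized lab:

# Sketch only: craft (but do not send) an IP packet with a forged source address using Scapy.
# Requires the scapy package; addresses are placeholder values from documentation ranges.
from scapy.all import IP, TCP

spoofed = IP(src="192.0.2.10", dst="198.51.100.20") / TCP(dport=80, flags="S")
spoofed.show()   # displays the forged source address carried in the IP header
# send(spoofed) would transmit it (requires privileges); omitted here on purpose.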
At what level of the Open System Interconnection (OSI) model is data at rest on a Storage Area Network (SAN) located?
Link layer
Physical layer
Session layer
Application layer
Data at rest on a Storage Area Network (SAN) is located at the physical layer of the Open System Interconnection (OSI) model. The OSI model is a conceptual framework that describes how data is transmitted and processed across different layers of a network. The OSI model consists of seven layers: application, presentation, session, transport, network, data link, and physical. The physical layer is the lowest layer of the OSI model, and it is responsible for the transmission and reception of raw bits over a physical medium, such as cables, wires, or optical fibers. The physical layer defines the physical characteristics of the medium, such as voltage, frequency, modulation, connectors, etc. The physical layer also deals with the physical topology of the network, such as bus, ring, star, mesh, etc.
A Storage Area Network (SAN) is a dedicated network that provides access to consolidated and block-level data storage. A SAN consists of storage devices, such as disks, tapes, or arrays, that are connected to servers or clients via a network infrastructure, such as switches, routers, or hubs. A SAN allows multiple servers or clients to share the same storage devices, and it provides high performance, availability, scalability, and security for data storage. Data at rest on a SAN is located at the physical layer of the OSI model, because it is stored as raw bits on the physical medium of the storage devices, and it is accessed by the servers or clients through the physical medium of the network infrastructure.
Which of the following factors contributes to the weakness of Wired Equivalent Privacy (WEP) protocol?
WEP uses a small range Initialization Vector (IV)
WEP uses Message Digest 5 (MD5)
WEP uses Diffie-Hellman
WEP does not use any Initialization Vector (IV)
The use of a small-range Initialization Vector (IV) is the factor that contributes to the weakness of the Wired Equivalent Privacy (WEP) protocol. WEP is a security protocol that provides encryption and authentication for wireless networks, such as Wi-Fi. WEP uses the RC4 stream cipher to encrypt the data packets, and the CRC-32 checksum to verify the data integrity. WEP also uses a shared secret key, which is concatenated with a 24-bit Initialization Vector (IV), to generate the keystream for the RC4 encryption. WEP has several weaknesses and vulnerabilities: the 24-bit IV space is so small that IVs are reused after only a few thousand frames, which leads to RC4 keystream reuse and allows attackers to recover the keystream and eventually the secret key; the way the IV is prepended to the secret key exposes weak RC4 keys that can be exploited by related-key attacks such as the Fluhrer-Mantin-Shamir (FMS) attack; and the CRC-32 checksum is a linear, non-cryptographic integrity check that permits bit-flipping and message modification attacks.
WEP has been deprecated and replaced by more secure protocols, such as Wi-Fi Protected Access (WPA) or Wi-Fi Protected Access II (WPA2), which use stronger encryption and authentication methods, such as the Temporal Key Integrity Protocol (TKIP), the Advanced Encryption Standard (AES), or the Extensible Authentication Protocol (EAP).
The other options are not factors that contribute to the weakness of WEP, but rather factors that are irrelevant or incorrect. WEP does not use Message Digest 5 (MD5), which is a hash function that produces a 128-bit output from a variable-length input. WEP does not use Diffie-Hellman, which is a method for generating a shared secret key between two parties. WEP does use an Initialization Vector (IV), which is a 24-bit value that is concatenated with the secret key.
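The effect of the small IV space can be shown with simple birthday-bound arithmetic: with only 2^24 possible IVs, a collision (and therefore RC4 keystream reuse under the same secret key) becomes likely after only a few thousand frames. The Python sketch below computes this, assuming IVs are chosen uniformly at random, which is an idealized model of real WEP implementations:

import math

IV_SPACE = 2 ** 24   # WEP uses a 24-bit Initialization Vector

def collision_probability(frames: int, space: int = IV_SPACE) -> float:
    """Birthday-bound probability that at least two frames share an IV."""
    # P(collision) = 1 - exp(-n(n-1) / (2 * space)), a standard approximation.
    return 1.0 - math.exp(-frames * (frames - 1) / (2.0 * space))

for frames in (1_000, 5_000, 10_000, 40_000):
    print(f"{frames:>6} frames -> IV collision probability ~ {collision_probability(frames):.2%}")
# Around 5,000 frames the probability already exceeds 50%, which is why WEP keystreams repeat quickly.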
Which of the following operates at the Network Layer of the Open System Interconnection (OSI) model?
Packet filtering
Port services filtering
Content filtering
Application access control
Packet filtering operates at the network layer of the Open System Interconnection (OSI) model. The OSI model is a conceptual framework that describes how data is transmitted and processed across different layers of a network. The OSI model consists of seven layers: application, presentation, session, transport, network, data link, and physical. The network layer is the third layer from the bottom of the OSI model, and it is responsible for routing and forwarding data packets between different networks or subnets. The network layer uses logical addresses, such as IP addresses, to identify the source and destination of the data packets, and it uses protocols, such as IP, ICMP, or ARP, to perform the routing and forwarding functions.
Packet filtering is a technique that controls the access to a network or a host by inspecting the incoming and outgoing data packets and applying a set of rules or policies to allow or deny them. Packet filtering can be performed by devices, such as routers, firewalls, or proxies, that operate at the network layer of the OSI model. Packet filtering typically examines the network layer header of the data packets, such as the source and destination IP addresses, the protocol type, or the fragmentation flags, and compares them with the predefined rules or policies. Packet filtering can also examine the transport layer header of the data packets, such as the source and destination port numbers, the TCP flags, or the sequence numbers, and compare them with the rules or policies. Packet filtering can provide a basic level of security and performance for a network or a host, but it also has some limitations, such as the inability to inspect the payload or the content of the data packets, the vulnerability to spoofing or fragmentation attacks, or the complexity and maintenance of the rules or policies.
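A minimal sketch of the rule-matching logic described above, written in Python with made-up rules and packet fields, can make the idea of network-layer packet filtering more tangible (real filtering is performed by routers and firewalls, not application code):

# Toy packet filter: match packets against ordered rules on network/transport header fields.
# Rules, addresses, and packets are hypothetical examples.
from ipaddress import ip_address, ip_network

RULES = [
    {"action": "deny",  "src": ip_network("203.0.113.0/24"), "proto": "tcp", "dport": None},
    {"action": "allow", "src": ip_network("0.0.0.0/0"),      "proto": "tcp", "dport": 443},
    {"action": "deny",  "src": ip_network("0.0.0.0/0"),      "proto": None,  "dport": None},  # default deny
]

def filter_packet(src_ip: str, proto: str, dport: int) -> str:
    for rule in RULES:                                # first matching rule wins
        if ip_address(src_ip) not in rule["src"]:
            continue
        if rule["proto"] is not None and rule["proto"] != proto:
            continue
        if rule["dport"] is not None and rule["dport"] != dport:
            continue
        return rule["action"]
    return "deny"

print(filter_packet("198.51.100.7", "tcp", 443))   # allow (HTTPS from an unlisted source)
print(filter_packet("203.0.113.9", "tcp", 443))    # deny  (blocked source network)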
The other options are not techniques that operate at the network layer of the OSI model, but rather at other layers. Port services filtering is a technique that controls the access to a network or a host by inspecting the transport layer header of the data packets and applying a set of rules or policies to allow or deny them based on the port numbers or the services. Port services filtering operates at the transport layer of the OSI model, which is the fourth layer from the bottom. Content filtering is a technique that controls the access to a network or a host by inspecting the application layer payload or the content of the data packets and applying a set of rules or policies to allow or deny them based on the keywords, URLs, file types, or other criteria. Content filtering operates at the application layer of the OSI model, which is the seventh and the topmost layer. Application access control is a technique that controls the access to a network or a host by inspecting the application layer identity or the credentials of the users or the processes and applying a set of rules or policies to allow or deny them based on the roles, permissions, or other attributes. Application access control operates at the application layer of the OSI model, which is the seventh and the topmost layer.
An input validation and exception handling vulnerability has been discovered on a critical web-based system. Which of the following is MOST suited to quickly implement a control?
Add a new rule to the application layer firewall
Block access to the service
Install an Intrusion Detection System (IDS)
Patch the application source code
Adding a new rule to the application layer firewall is the most suited to quickly implement a control for an input validation and exception handling vulnerability on a critical web-based system. An input validation and exception handling vulnerability is a type of vulnerability that occurs when a web-based system does not properly check, filter, or sanitize the input data that is received from the users or other sources, or does not properly handle the errors or exceptions that are generated by the system. An input validation and exception handling vulnerability can lead to various attacks, such as SQL injection, cross-site scripting (XSS), command injection, buffer overflows, and information disclosure through verbose error messages.
An application layer firewall is a device or software that operates at the application layer of the OSI model and inspects the application layer payload or the content of the data packets. An application layer firewall can provide various functions, such as filtering or blocking requests that contain malicious or malformed input, enforcing protocol and input format compliance, masking or suppressing detailed error messages returned to clients, and logging and alerting on suspicious application traffic.
Adding a new rule to the application layer firewall is the most suited to quickly implement a control for an input validation and exception handling vulnerability on a critical web-based system, because it can prevent or reduce the impact of the attacks by filtering or blocking the malicious or invalid input data that exploit the vulnerability. For example, a new rule can be added to the application layer firewall to block requests containing known SQL injection or cross-site scripting patterns, to restrict the length, type, or format of specific input fields, or to drop requests that trigger unhandled exceptions on the server.
Adding a new rule to the application layer firewall can be done quickly and easily, without requiring any changes or patches to the web-based system, which can be time-consuming and risky, especially for a critical system. Adding a new rule to the application layer firewall can also be done remotely and centrally, without requiring any physical access or installation on the web-based system, which can be inconvenient and costly, especially for a distributed system.
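A rough Python sketch of such a compensating rule, using simple regular expressions against request parameters, is shown below; the patterns are illustrative and far less complete than a real web application firewall signature set:

import re

# Illustrative virtual-patching rule: reject requests whose parameters contain
# common SQL injection or cross-site scripting patterns. Patterns are simplified examples.
BLOCK_PATTERNS = [
    re.compile(r"('|\")\s*or\s+1\s*=\s*1", re.IGNORECASE),   # classic SQL injection probe
    re.compile(r"union\s+select", re.IGNORECASE),            # SQL injection via UNION
    re.compile(r"<\s*script", re.IGNORECASE),                # reflected XSS attempt
]

def is_request_blocked(params: dict) -> bool:
    """Return True if any parameter value matches a blocked pattern."""
    for value in params.values():
        if any(pattern.search(value) for pattern in BLOCK_PATTERNS):
            return True
    return False

print(is_request_blocked({"user": "alice", "comment": "hello"}))            # False
print(is_request_blocked({"user": "x' OR 1=1 --", "comment": "hello"}))     # True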
The other options are not the most suited to quickly implement a control for an input validation and exception handling vulnerability on a critical web-based system, but rather options that have other limitations or drawbacks. Blocking access to the service is not the most suited option, because it can cause disruption and unavailability of the service, which can affect the business operations and customer satisfaction, especially for a critical system. Blocking access to the service can also be a temporary and incomplete solution, as it does not address the root cause of the vulnerability or prevent the attacks from occurring again. Installing an Intrusion Detection System (IDS) is not the most suited option, because IDS only monitors and detects the attacks, and does not prevent or respond to them. IDS can also generate false positives or false negatives, which can affect the accuracy and reliability of the detection. IDS can also be overwhelmed or evaded by the attacks, which can affect the effectiveness and efficiency of the detection. Patching the application source code is not the most suited option, because it can take a long time and require a lot of resources and testing to identify, fix, and deploy the patch, especially for a complex and critical system. Patching the application source code can also introduce new errors or vulnerabilities, which can affect the functionality and security of the system. Patching the application source code can also be difficult or impossible, if the system is proprietary or legacy, which can affect the feasibility and compatibility of the patch.
Which of the following is the BEST network defense against unknown types of attacks or stealth attacks in progress?
Intrusion Prevention Systems (IPS)
Intrusion Detection Systems (IDS)
Stateful firewalls
Network Behavior Analysis (NBA) tools
Network Behavior Analysis (NBA) tools are the best network defense against unknown types of attacks or stealth attacks in progress. NBA tools are devices or software that monitor and analyze the network traffic and activities, and detect any anomalies or deviations from the normal or expected behavior. NBA tools use various techniques, such as statistical analysis, machine learning, artificial intelligence, or heuristics, to establish a baseline of the network behavior, and to identify any outliers or indicators of compromise. NBA tools can provide several benefits, such as detecting unknown, zero-day, or stealth attacks that do not match any known signature, identifying insider threats and compromised hosts from unusual traffic patterns, and providing visibility into the normal and abnormal behavior of the network.
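A very small sketch of the baseline-and-deviation idea behind NBA tools is shown below: it learns the mean and standard deviation of outbound traffic volume from historical samples and flags values that deviate strongly, assuming roughly normally distributed traffic, which real products refine considerably:

import statistics

# Hypothetical hourly outbound traffic volumes (MB) observed during normal operation.
baseline_samples = [120, 135, 128, 140, 131, 126, 138, 129, 133, 127]

mean = statistics.mean(baseline_samples)
stdev = statistics.stdev(baseline_samples)

def is_anomalous(observed_mb: float, threshold: float = 3.0) -> bool:
    """Flag observations more than `threshold` standard deviations from the baseline mean."""
    z_score = (observed_mb - mean) / stdev
    return abs(z_score) > threshold

print(is_anomalous(134))    # False: within normal variation
print(is_anomalous(950))    # True: possible data exfiltration or stealth attack in progress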
The other options are not the best network defense against unknown types of attacks or stealth attacks in progress, but rather network defenses that have other limitations or drawbacks. Intrusion Prevention Systems (IPS) are devices or software that monitor and block the network traffic and activities that match the predefined signatures or rules of known attacks. IPS can provide a proactive and preventive layer of security, but they cannot detect or stop unknown types of attacks or stealth attacks that do not match any signatures or rules, or that can evade or disable the IPS. Intrusion Detection Systems (IDS) are devices or software that monitor and alert the network traffic and activities that match the predefined signatures or rules of known attacks. IDS can provide a reactive and detective layer of security, but they cannot detect or alert unknown types of attacks or stealth attacks that do not match any signatures or rules, or that can evade or disable the IDS. Stateful firewalls are devices or software that filter and control the network traffic and activities based on the state and context of the network sessions, such as the source and destination IP addresses, port numbers, protocol types, and sequence numbers. Stateful firewalls can provide a granular and dynamic layer of security, but they cannot filter or control unknown types of attacks or stealth attacks that use valid or spoofed network sessions, or that can exploit or bypass the firewall rules.
An external attacker has compromised an organization’s network security perimeter and installed a sniffer onto an inside computer. Which of the following is the MOST effective layer of security the organization could have implemented to mitigate the attacker’s ability to gain further information?
Implement packet filtering on the network firewalls
Install Host Based Intrusion Detection Systems (HIDS)
Require strong authentication for administrators
Implement logical network segmentation at the switches
Implementing logical network segmentation at the switches is the most effective layer of security the organization could have implemented to mitigate the attacker’s ability to gain further information. Logical network segmentation is the process of dividing a network into smaller subnetworks or segments based on criteria such as function, location, or security level. Logical network segmentation can be implemented at the switches, which are devices that operate at the data link layer of the OSI model and forward data packets based on the MAC addresses. Logical network segmentation can provide several benefits, such as reducing the size of the broadcast domains, isolating sensitive or critical systems from less trusted segments, limiting the lateral movement of an attacker, and improving the performance and manageability of the network.
Logical network segmentation can mitigate the attacker’s ability to gain further information by limiting the visibility and access of the sniffer to the segment where it is installed. A sniffer is a tool that captures and analyzes the data packets that are transmitted over a network. A sniffer can be used for legitimate purposes, such as troubleshooting, testing, or monitoring the network, or for malicious purposes, such as eavesdropping, stealing, or modifying the data. A sniffer can only capture the data packets that are within its broadcast domain, which is the set of devices that can communicate with each other without a router. By implementing logical network segmentation at the switches, the organization can create multiple broadcast domains and isolate the sensitive or critical data from the compromised segment. This way, the attacker can only see the data packets that belong to the same segment as the sniffer, and not the data packets that belong to other segments. This can prevent the attacker from gaining further information or accessing other resources on the network.
The other options are not the most effective layers of security the organization could have implemented to mitigate the attacker’s ability to gain further information, but rather layers that have other limitations or drawbacks. Implementing packet filtering on the network firewalls is not the most effective layer of security, because packet filtering only examines the network layer header of the data packets, such as the source and destination IP addresses, and does not inspect the payload or the content of the data. Packet filtering can also be bypassed by using techniques such as IP spoofing or fragmentation. Installing Host Based Intrusion Detection Systems (HIDS) is not the most effective layer of security, because HIDS only monitors and detects the activities and events on a single host, and does not prevent or respond to the attacks. HIDS can also be disabled or evaded by the attacker if the host is compromised. Requiring strong authentication for administrators is not the most effective layer of security, because authentication only verifies the identity of the users or processes, and does not protect the data in transit or at rest. Authentication can also be defeated by using techniques such as phishing, keylogging, or credential theft.
In a Transmission Control Protocol/Internet Protocol (TCP/IP) stack, which layer is responsible for negotiating and establishing a connection with another node?
Transport layer
Application layer
Network layer
Session layer
The transport layer of the Transmission Control Protocol/Internet Protocol (TCP/IP) stack is responsible for negotiating and establishing a connection with another node. The TCP/IP stack is a simplified version of the OSI model, and it consists of four layers: application, transport, internet, and link. The transport layer is the third layer of the TCP/IP stack, and it is responsible for providing reliable and efficient end-to-end data transfer between two nodes on a network. The transport layer uses protocols, such as Transmission Control Protocol (TCP) or User Datagram Protocol (UDP), to segment, sequence, acknowledge, and reassemble the data packets, and to handle error detection and correction, flow control, and congestion control. The transport layer also provides connection-oriented or connectionless services, depending on the protocol used.
TCP is a connection-oriented protocol, which means that it establishes a logical connection between two nodes before exchanging data, and it maintains the connection until the data transfer is complete. TCP uses a three-way handshake to negotiate and establish a connection with another node. The three-way handshake works as follows: the initiating node sends a SYN segment containing its initial sequence number; the responding node replies with a SYN-ACK segment that acknowledges that sequence number and supplies its own; and the initiating node completes the handshake with an ACK segment, after which data transfer can begin.
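In everyday code the handshake is performed implicitly by the operating system when a TCP connection is opened; the short Python sketch below triggers it with the standard socket API (example.com on port 80 is used purely as a placeholder endpoint):

import socket

# Opening a TCP connection makes the OS perform the SYN, SYN-ACK, ACK exchange
# described above before connect() returns.
with socket.create_connection(("example.com", 80), timeout=5) as conn:
    print("Three-way handshake completed with", conn.getpeername())
# Closing the socket triggers the separate FIN/ACK teardown sequence.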
UDP is a connectionless protocol, which means that it does not establish or maintain a connection between two nodes, but rather sends data packets independently and without any guarantee of delivery, order, or integrity. UDP does not use a handshake or any other mechanism to negotiate and establish a connection with another node, but rather relies on the application layer to handle any connection-related issues.
Which of the following sets of controls should allow an investigation if an attack is not blocked by preventive controls or detected by monitoring?
Logging and audit trail controls to enable forensic analysis
Security incident response lessons learned procedures
Security event alert triage done by analysts using a Security Information and Event Management (SIEM) system
Transactional controls focused on fraud prevention
Logging and audit trail controls are designed to record and monitor the activities and events that occur on a system or network. They can provide valuable information for forensic analysis, such as the source, destination, time, and type of an event, the user or process involved, the data or resources accessed or modified, and the outcome or status of the event. Logging and audit trail controls can help identify the cause, scope, impact, and timeline of an attack, as well as the evidence and artifacts left by the attacker. They can also help determine the effectiveness and gaps of the preventive and detective controls, and support the incident response and recovery processes. Logging and audit trail controls should be configured, protected, and reviewed according to the organizational policies and standards, and comply with the legal and regulatory requirements.
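As a small illustration of the kind of record such controls produce, the Python sketch below writes structured audit events with the standard logging module; the event fields and file name are hypothetical, and production audit trails would also need integrity protection and central collection:

import json
import logging
from datetime import datetime, timezone

# Minimal structured audit log: who did what, to which resource, from where, and with what outcome.
logging.basicConfig(filename="audit.log", level=logging.INFO, format="%(message)s")
audit_logger = logging.getLogger("audit")

def audit_event(user: str, action: str, resource: str, source_ip: str, outcome: str) -> None:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "action": action,
        "resource": resource,
        "source_ip": source_ip,
        "outcome": outcome,
    }
    audit_logger.info(json.dumps(record))

audit_event("alice", "read", "/finance/payroll.xlsx", "10.0.0.5", "success")
audit_event("mallory", "login", "vpn-gateway", "203.0.113.7", "failure")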
What is the GREATEST challenge of an agent-based patch management solution?
Time to gather vulnerability information about the computers in the program
Requires that software be installed, running, and managed on all participating computers
The significant amount of network bandwidth while scanning computers
The consistency of distributing patches to each participating computer
The greatest challenge of an agent-based patch management solution is that it requires that software be installed, running, and managed on all participating computers. Patch management is the process of identifying, acquiring, installing, and verifying patches or updates for software or systems, such as operating systems, applications, or firmware. Patch management can help to fix bugs, improve performance, or enhance security. An agent-based patch management solution is a type of patch management solution that uses software agents or programs that run on each computer that needs to be patched. The agents communicate with a central server that provides the patches or updates, and perform the patching tasks automatically or on demand. The challenge of an agent-based patch management solution is that it requires that software be installed, running, and managed on all participating computers, which can increase the complexity, cost, and overhead of the patch management process. The other options are not the greatest challenges, but rather minor or irrelevant issues. Time to gather vulnerability information about the computers in the program is not a challenge, but rather a benefit, of an agent-based patch management solution, as the agents can scan and report the vulnerability status of the computers faster and more accurately than manual methods. The significant amount of network bandwidth while scanning computers is not a challenge, but rather a drawback, of an agent-less patch management solution, which is a type of patch management solution that does not use software agents, but rather scans the computers remotely from a central server, which can consume more network resources. The consistency of distributing patches to each participating computer is not a challenge, but rather an advantage, of an agent-based patch management solution, as the agents can ensure that the patches are applied uniformly and timely to all computers, without missing or skipping any computers. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, p. 434; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 7, p. 429.
Place in order, from BEST (1) to WORST (4), the following methods to reduce the risk of data remanence on magnetic media.
Degaussing is the process of decreasing or eliminating a remnant magnetic field to reduce the risk of data remanence on magnetic media, making it the best method among the options provided. Overwriting involves replacing old data with new data, which can also be effective but not as thorough as degaussing. Destruction refers to physically destroying the media, which is effective but not always practical or environmentally friendly. Deleting is simply removing data pointers and does not actually erase the data from the media, making it the worst option.
Which of the following standards/guidelines requires an Information Security Management System (ISMS) to be defined?
International Organization for Standardization (ISO) 27000 family
Information Technology Infrastructure Library (ITIL)
Payment Card Industry Data Security Standard (PCIDSS)
ISO/IEC 20000
The International Organization for Standardization (ISO) 27000 family of standards/guidelines requires an Information Security Management System (ISMS) to be defined. An ISMS is a systematic approach to managing the security of information assets, such as data, systems, processes, and people. An ISMS includes policies, procedures, controls, and activities that aim to protect the confidentiality, integrity, and availability of information, as well as to comply with the legal and regulatory requirements. The ISO 27000 family provides best practices and guidance for establishing, implementing, maintaining, and improving an ISMS. The ISO 27001 standard specifies the requirements for an ISMS, while the other standards in the family provide more detailed or specific guidance on different aspects of information security. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, p. 23; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 1: Security and Risk Management, p. 25.
How does Encapsulating Security Payload (ESP) in transport mode affect the Internet Protocol (IP)?
Encrypts and optionally authenticates the IP header, but not the IP payload
Encrypts and optionally authenticates the IP payload, but not the IP header
Authenticates the IP payload and selected portions of the IP header
Encrypts and optionally authenticates the complete IP packet
Encapsulating Security Payload (ESP) in transport mode affects the Internet Protocol (IP) by encrypting and optionally authenticating the IP payload, but not the IP header. ESP is a protocol that provides confidentiality, integrity, and authentication for data transmitted over a network. ESP can operate in two modes: transport mode and tunnel mode. In transport mode, ESP only protects the data or payload of the IP packet, while leaving the IP header intact and visible. This mode is suitable for end-to-end communication between two hosts. In tunnel mode, ESP protects the entire IP packet, including the header and the payload, by encapsulating it within another IP packet. This mode is suitable for gateway-to-gateway or host-to-gateway communication. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6: Communication and Network Security, p. 345; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 4: Communication and Network Security, p. 464.
In order to assure authenticity, which of the following are required?
Confidentiality and authentication
Confidentiality and integrity
Authentication and non-repudiation
Integrity and non-repudiation
According to the CISSP All-in-One Exam Guide2, the things that are required to assure authenticity are authentication and non-repudiation. Authenticity is the property that ensures that the data, information, system, network, or entity is genuine, original, and trustworthy, and is not counterfeit, altered, or impersonated. Authentication is the process of verifying and confirming the identity or the validity of a subject or an entity, such as a user, a device, or a message. Authentication helps to assure authenticity, as it ensures that the subject or the entity is who or what it claims to be, and is not an impostor, a fraud, or a forgery. Non-repudiation is the process of preventing or denying the denial or the dispute of an action or an event, such as the sending, receiving, or signing of a message. Non-repudiation helps to assure authenticity, as it ensures that the subject or the entity cannot deny or reject the authenticity or the validity of the action or the event, and is held accountable and responsible for it. Confidentiality is not the thing that is required to assure authenticity, although it may be a thing that is supported or enhanced by authenticity. Confidentiality is the property that ensures that the data or information is only accessible or disclosed to the authorized parties, and is protected from unauthorized or unintended access or disclosure. Confidentiality may be supported or enhanced by authenticity, as it ensures that the data or information is not accessed or disclosed by the impostors, frauds, or forgeries, and that the data or information is not counterfeit, altered, or impersonated. However, confidentiality is not the thing that is required to assure authenticity, as it does not verify or confirm the identity or the validity of the subject or the entity, or prevent or deny the denial or the dispute of the action or the event. Integrity is not the thing that is required to assure authenticity, although it may be a thing that is supported or enhanced by authenticity. Integrity is the property that ensures that the data or information is accurate, complete, and consistent, and is protected from unauthorized or unintended modification or corruption. Integrity may be supported or enhanced by authenticity, as it ensures that the data or information is not modified or corrupted by the impostors, frauds, or forgeries, and that the data or information is genuine, original, and trustworthy. However, integrity is not the thing that is required to assure authenticity, as it does not verify or confirm the identity or the validity of the subject or the entity, or prevent or deny the denial or the dispute of the action or the event.
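Digital signatures are the classic mechanism that provides both properties at once: the verifier authenticates the origin of the message, and the signer cannot later repudiate it because only the private-key holder could have produced the signature. The sketch below uses the Ed25519 primitives from the third-party cryptography package (this specific library choice is an assumption for illustration):

# Sketch: sign and verify a message with Ed25519, using the `cryptography` package
# (pip install cryptography). The message content is a made-up example.
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

message = b"Transfer 100 units to account 42"
signature = private_key.sign(message)          # only the private-key holder can do this

try:
    public_key.verify(signature, message)      # anyone with the public key can check it
    print("Signature valid: origin authenticated, signer cannot repudiate the message")
except InvalidSignature:
    print("Signature invalid: message forged or altered")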
Which of the following activities BEST identifies operational problems, security misconfigurations, and malicious attacks?
Policy documentation review
Authentication validation
Periodic log reviews
Interface testing
The activity that best identifies operational problems, security misconfigurations, and malicious attacks is periodic log reviews. Log reviews are the process of examining and analyzing the records of events or activities that occur on a system or network, such as user actions, system errors, security alerts, or network traffic. Periodic log reviews can help to identify operational problems, such as system failures, performance issues, or configuration errors, by detecting anomalies, trends, or patterns in the log data. Periodic log reviews can also help to identify security misconfigurations, such as weak passwords, open ports, or missing patches, by comparing the log data with the security policies, standards, or baselines. Periodic log reviews can also help to identify malicious attacks, such as unauthorized access, data breaches, or denial of service, by recognizing signs of intrusion, compromise, or exploitation in the log data. The other options are not the best activities to identify operational problems, security misconfigurations, and malicious attacks, but rather different types of activities. Policy documentation review is the process of examining and evaluating the documents that define the rules and guidelines for the system or network security, such as policies, procedures, or standards. Policy documentation review can help to ensure the completeness, consistency, and compliance of the security documents, but not to identify the actual problems or attacks. Authentication validation is the process of verifying and confirming the identity and credentials of a user or device that requests access to a system or network, such as passwords, tokens, or certificates. Authentication validation can help to prevent unauthorized access, but not to identify the existing problems or attacks. Interface testing is the process of checking and evaluating the functionality, usability, and reliability of the interfaces between different components or systems, such as modules, applications, or networks. Interface testing can help to ensure the compatibility, interoperability, and integration of the interfaces, but not to identify the problems or attacks. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7, p. 377; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 7, p. 405.
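A tiny example of what an automated pass over authentication logs might look for is sketched below in Python; the log lines are fabricated samples, and real reviews would draw on a SIEM or log management platform rather than ad hoc scripts:

from collections import Counter

# Fabricated authentication log lines: "<timestamp> <result> user=<name> src=<ip>"
log_lines = [
    "2024-05-01T10:00:01Z FAILED user=admin src=203.0.113.7",
    "2024-05-01T10:00:03Z FAILED user=admin src=203.0.113.7",
    "2024-05-01T10:00:05Z FAILED user=admin src=203.0.113.7",
    "2024-05-01T10:00:09Z SUCCESS user=alice src=10.0.0.5",
    "2024-05-01T10:00:12Z FAILED user=root src=203.0.113.7",
]

failed_by_source = Counter(
    line.split("src=")[1] for line in log_lines if " FAILED " in line
)

for source, failures in failed_by_source.items():
    if failures >= 3:   # arbitrary review threshold for this sketch
        print(f"Review: {failures} failed logins from {source} - possible brute-force attempt")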
The PRIMARY outcome of a certification process is that it provides documented
system weaknesses for remediation.
standards for security assessment, testing, and process evaluation.
interconnected systems and their implemented security controls.
security analyses needed to make a risk-based decision.
The primary outcome of a certification process is that it provides documented security analyses needed to make a risk-based decision. Certification is a process of evaluating and testing the security of a system or product against a set of criteria or standards. Certification provides evidence of the security posture and capabilities of the system or product, as well as the identified vulnerabilities, threats, and risks. Certification helps the decision makers, such as the system owners or accreditors, to determine whether the system or product meets the security requirements and can be authorized to operate in a specific environment. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7: Security Operations, p. 455; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 7: Security Operations, p. 867.
What is a characteristic of Secure Socket Layer (SSL) and Transport Layer Security (TLS)?
SSL and TLS provide a generic channel security mechanism on top of Transmission Control Protocol (TCP).
SSL and TLS provide nonrepudiation by default.
SSL and TLS do not provide security for most routed protocols.
SSL and TLS provide header encapsulation over HyperText Transfer Protocol (HTTP).
SSL and TLS provide a generic channel security mechanism on top of TCP. This means that SSL and TLS are protocols that enable secure communication between two parties over a network, such as the internet, by using encryption, authentication, and integrity mechanisms. SSL and TLS operate at the transport layer of the OSI model, above the TCP protocol, which provides reliable and ordered delivery of data. SSL and TLS can be used to secure various application layer protocols, such as HTTP, SMTP, FTP, and so on. SSL and TLS do not provide nonrepudiation by default, as this is a service that requires digital signatures and certificates to prove the origin and content of a message. SSL and TLS do provide security for most routed protocols, as they can encrypt and authenticate any data that is transmitted over TCP. SSL and TLS do not provide header encapsulation over HTTP, as this is a function of the HTTPS protocol, which is a combination of HTTP and SSL/TLS.
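The layering described above is visible in the standard library: a plain TCP socket is created first, and the ssl module then wraps it to add the TLS handshake, encryption, and certificate-based authentication. A minimal Python sketch (using example.com purely as a placeholder host) follows:

import socket
import ssl

# TLS as a generic channel security mechanism layered on top of TCP.
context = ssl.create_default_context()              # validates certificates against system CAs

with socket.create_connection(("example.com", 443), timeout=5) as tcp_sock:
    # wrap_socket performs the TLS handshake over the already-established TCP connection
    with context.wrap_socket(tcp_sock, server_hostname="example.com") as tls_sock:
        print("Negotiated protocol version:", tls_sock.version())
        print("Cipher suite in use:", tls_sock.cipher()[0])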
Which of the following is the BEST method to reduce the effectiveness of phishing attacks?
User awareness
Two-factor authentication
Anti-phishing software
Periodic vulnerability scan
According to the CISSP For Dummies4, the best method to reduce the effectiveness of phishing attacks is user awareness. This means that the users should be educated and trained on how to recognize and avoid phishing emails and websites, which are fraudulent attempts to obtain sensitive information or credentials from the users by impersonating legitimate entities or persons. User awareness can help users to identify the common signs and indicators of phishing, such as spoofed sender addresses, misleading links, spelling and grammar errors, urgent or threatening messages, and requests for personal or financial information. User awareness can also help users to follow the best practices and preventive measures to protect themselves from phishing, such as verifying the source and content of the messages, using strong and unique passwords, enabling two-factor authentication, reporting and deleting suspicious messages, and using anti-phishing software and tools. Two-factor authentication is not the best method to reduce the effectiveness of phishing attacks, as it may not prevent the users from falling for phishing in the first place. Two-factor authentication is a security mechanism that requires the users to provide two pieces of evidence to prove their identity, such as a password and a one-time code. However, some phishing attacks may be able to bypass or compromise two-factor authentication, such as by using man-in-the-middle techniques, intercepting the codes, or tricking the users into entering the codes on fake websites. Anti-phishing software is not the best method to reduce the effectiveness of phishing attacks, as it may not detect or block all phishing attempts. Anti-phishing software is a software application that helps the users to identify and avoid phishing emails and websites, by using various methods such as blacklists, whitelists, heuristics, and machine learning. However, anti-phishing software may not be able to keep up with the evolving and sophisticated techniques of phishing, such as using encryption, obfuscation, or personalization. Anti-phishing software may also generate false positives or negatives, which may confuse or mislead the users. Periodic vulnerability scan is not the best method to reduce the effectiveness of phishing attacks, as it may not address the human factor of phishing. Periodic vulnerability scan is a process that scans and tests the network, systems, and applications for potential weaknesses and exposures that may be exploited by attackers. However, phishing attacks mainly target the users, not the technical vulnerabilities, by exploiting their emotions, curiosity, or trust. Periodic vulnerability scan may not be able to prevent or detect phishing attacks, unless they are combined with user awareness and education. References: 4
Data remanence refers to which of the following?
The remaining photons left in a fiber optic cable after a secure transmission.
The retention period required by law or regulation.
The magnetic flux created when removing the network connection from a server or personal computer.
The residual information left on magnetic storage media after a deletion or erasure.
Data remanence refers to the residual information left on magnetic storage media after a deletion or erasure. Data remanence is a security risk, as it may allow unauthorized or malicious parties to recover the deleted or erased data, which may contain sensitive or confidential information. Data remanence can be caused by the physical properties of the magnetic storage media, such as hard disks, floppy disks, or tapes, which may retain some traces of the data even after it is overwritten or formatted. Data remanence can also be caused by the logical properties of the file systems or operating systems, which may not delete or erase the data completely, but only mark the space as available or remove the pointers to the data. Data remanence can be prevented or reduced by using secure deletion or erasure methods, such as cryptographic wiping, degaussing, or physical destruction. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7: Security Operations, p. 443; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 7: Security Operations, p. 855.
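As a simple illustration of an overwrite-before-delete step, the Python sketch below overwrites a file's contents with random bytes before unlinking it; note that on journaling file systems, SSDs with wear leveling, or copy-on-write storage this is not sufficient by itself, which is why degaussing or physical destruction is used for sensitive media:

import os

def overwrite_and_delete(path: str, passes: int = 3) -> None:
    """Overwrite a file with random data several times, then remove it (best-effort sketch)."""
    size = os.path.getsize(path)
    with open(path, "r+b") as handle:
        for _ in range(passes):
            handle.seek(0)
            handle.write(os.urandom(size))
            handle.flush()
            os.fsync(handle.fileno())   # push the overwrite to the storage device
    os.remove(path)

# Example usage with a throwaway file name:
with open("secret.tmp", "wb") as f:
    f.write(b"confidential data")
overwrite_and_delete("secret.tmp")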
Which of the following restricts the ability of an individual to carry out all the steps of a particular process?
Job rotation
Separation of duties
Least privilege
Mandatory vacations
According to the CISSP For Dummies3, the concept that restricts the ability of an individual to carry out all the steps of a particular process is separation of duties. Separation of duties is a security principle that divides the tasks and responsibilities of a process among different individuals or roles, so that no one person or role has complete control or authority over the process. Separation of duties helps to prevent or detect fraud, errors, abuse, or collusion, by requiring multiple approvals, checks, or verifications for each step of the process. Separation of duties also helps to enforce the principle of least privilege, which states that users and processes should only have the minimum access required to perform their tasks, and no more. Job rotation is not the concept that restricts the ability of an individual to carry out all the steps of a particular process, although it may be a technique that supports separation of duties. Job rotation is a security practice that requires the individuals or roles to periodically switch or rotate their tasks and responsibilities, so that no one person or role performs the same task or responsibility for a long period of time. Job rotation helps to prevent or detect fraud, errors, abuse, or collusion, by exposing the activities and performance of each individual or role to different perspectives and evaluations. Job rotation also helps to reduce the risk of insider threats, by limiting the opportunity and familiarity of each individual or role with the tasks and responsibilities. Least privilege is not the concept that restricts the ability of an individual to carry out all the steps of a particular process, although it may be a principle that supports separation of duties. Least privilege is a security principle that states that users and processes should only have the minimum access required to perform their tasks, and no more. Least privilege helps to prevent or limit unauthorized or malicious actions, as well as the impact of potential incidents, by reducing the access rights and permissions of each user and process. Mandatory vacations is not the concept that restricts the ability of an individual to carry out all the steps of a particular process, although it may be a technique that supports separation of duties. Mandatory vacations is a security practice that requires the individuals or roles to take a mandatory leave of absence from their tasks and responsibilities for a certain period of time, so that no one person or role performs the same task or responsibility continuously. Mandatory vacations helps to prevent or detect fraud, errors, abuse, or collusion, by allowing the activities and performance of each individual or role to be reviewed and audited by others during their absence. Mandatory vacations also helps to reduce the risk of insider threats, by disrupting the routine and plans of each individual or role with the tasks and responsibilities. References: 3
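A minimal sketch of how separation of duties can be enforced in code is shown below: a payment cannot be released unless the approver is a different person from the requester, with both roles drawn from a hypothetical role table:

# Toy separation-of-duties check: the person who requests a payment may not approve it.
# Users, roles, and the process itself are hypothetical examples.
ROLES = {
    "alice": {"requester"},
    "bob": {"approver"},
    "carol": {"requester", "approver"},
}

def release_payment(requested_by: str, approved_by: str) -> bool:
    if "requester" not in ROLES.get(requested_by, set()):
        raise PermissionError(f"{requested_by} cannot request payments")
    if "approver" not in ROLES.get(approved_by, set()):
        raise PermissionError(f"{approved_by} cannot approve payments")
    if requested_by == approved_by:
        # Separation of duties: no single person controls the whole process.
        raise PermissionError("Requester and approver must be different individuals")
    return True

print(release_payment("alice", "bob"))     # True: two different people complete the process
# release_payment("carol", "carol") would raise PermissionError despite carol holding both roles.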
Which of the following is the PRIMARY reason to perform regular vulnerability scanning of an organization network?
Provide vulnerability reports to management.
Validate vulnerability remediation activities.
Prevent attackers from discovering vulnerabilities.
Remediate known vulnerabilities.
According to the CISSP Official (ISC)2 Practice Tests, the primary reason to perform regular vulnerability scanning of an organization network is to remediate known vulnerabilities. Vulnerability scanning is the process of identifying and measuring the weaknesses and exposures in a system, network, or application that may be exploited by threats and cause harm to the organization or its assets; it can be performed with automated scanners, manual tests, or penetration tests. The primary reason to scan regularly is to remediate known vulnerabilities, that is, to fix, mitigate, or eliminate the weaknesses the scans discover or report, which improves the security posture and effectiveness of the system, network, or application and reduces overall risk to an acceptable level. Providing vulnerability reports to management is a benefit or outcome of scanning rather than its primary reason: the reports document the scope, objectives, methods, results, and recommendations of the scan and support decision making and planning for remediation. Validating vulnerability remediation activities is a part or step of the process rather than its primary reason: it verifies and tests that remediation actions such as patching, updating, configuring, or replacing components were effective and complete, and that no new or residual vulnerabilities were introduced or left behind. Preventing attackers from discovering vulnerabilities is also not the primary reason, although it may be a benefit or outcome; techniques such as encryption, obfuscation, or deception may reduce an attacker’s likelihood and opportunity to find and exploit weaknesses, but they do not address the root cause or the impact of the vulnerabilities themselves.
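For a sense of the mechanics, the sketch below shows one very small slice of what a network vulnerability scanner does first: checking whether a service is reachable at all, before fingerprinting it and matching it against known vulnerabilities. The host, port, and timeout are placeholders, and this kind of probe should only ever be run against systems you are authorized to scan.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortProbe {
    // Attempt a TCP connection with a short timeout; an open port is a lead
    // for further service fingerprinting and vulnerability matching.
    public static boolean isOpen(String host, int port) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), 500); // 500 ms timeout (placeholder)
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("10.0.0.5:443 open? " + isOpen("10.0.0.5", 443)); // placeholder target
    }
}
```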
A vulnerability in which of the following components would be MOST difficult to detect?
Kernel
Shared libraries
Hardware
System application
According to the CISSP CBK Official Study Guide, a vulnerability in hardware would be the most difficult to detect. A vulnerability is a weakness or exposure in a system, network, or application that may be exploited by threats and cause harm to the organization or its assets, and it can exist in various components, such as the kernel, the shared libraries, the hardware, or the system application. A hardware vulnerability is the most difficult to detect because identifying and measuring it may require physical access, specialized tools, or advanced skills. Hardware is the physical or tangible component that provides the basic functionality, performance, and support for the system, network, or application, such as the processor, memory, disk, or network card; it may contain vulnerabilities caused by design flaws, manufacturing defects, configuration errors, or physical damage, and such a vulnerability may affect security, reliability, or availability by causing data leakage, performance degradation, or system failure. A vulnerability in the kernel would not be the most difficult to detect, although it may still be difficult to detect. The kernel is the core or central component that provides basic functionality, performance, and control, such as the operating system, the hypervisor, or the firmware; it may contain vulnerabilities from design flaws, coding errors, configuration errors, or malicious modifications that can lead to privilege escalation, system compromise, or system crashes, but such vulnerabilities can be detected with tools and techniques such as code analysis, vulnerability scanning, or penetration testing. A vulnerability in the shared libraries would not be the most difficult to detect, although it may still be difficult to detect. The shared libraries are the reusable or common components that provide functionality, performance, and compatibility for the system, network, or application, such as dynamic link libraries, application programming interfaces, or frameworks.
Which of the following is the BEST example of weak management commitment to the protection of security assets and resources?
poor governance over security processes and procedures
immature security controls and procedures
variances against regulatory requirements
unanticipated increases in security incidents and threats
The best example of weak management commitment to the protection of security assets and resources is poor governance over security processes and procedures. Governance is the set of policies, roles, responsibilities, and processes that guide, direct, and control how an organization’s business divisions and IT teams cooperate to achieve business goals. Management commitment is essential for effective governance, as it demonstrates the leadership and support for security initiatives and activities. Poor governance indicates that management does not prioritize security, allocate sufficient resources, enforce accountability, or monitor performance. The other options are not examples of weak management commitment, but rather possible consequences or indicators of poor security practices. Immature security controls and procedures, variances against regulatory requirements, and unanticipated increases in security incidents and threats are all signs that security is not well-managed or implemented, but they do not necessarily reflect the level of management commitment. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, p. 19; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, p. 9.
The MAIN reason an organization conducts a security authorization process is to
force the organization to make conscious risk decisions.
assure the effectiveness of security controls.
assure the correct security organization exists.
force the organization to enlist management support.
The main reason an organization conducts a security authorization process is to force the organization to make conscious risk decisions. A security authorization process is a process that evaluates and approves the security of an information system or a product before it is deployed or used. A security authorization process involves three steps: security categorization, security assessment, and security authorization. Security categorization is the step of determining the impact level of the information system or product on the confidentiality, integrity, and availability of the information and assets. Security assessment is the step of testing and verifying the security controls and measures implemented on the information system or product. Security authorization is the step of granting or denying the permission to operate or use the information system or product based on the security assessment results and the risk acceptance criteria. The security authorization process forces the organization to make conscious risk decisions, as it requires the organization to identify, analyze, and evaluate the risks associated with the information system or product, and to decide whether to accept, reject, mitigate, or transfer the risks. The other options are not the main reasons, but rather the benefits or outcomes of a security authorization process. Assuring the effectiveness of security controls is a benefit of a security authorization process, as it provides an objective and independent evaluation of the security controls and measures. Assuring the correct security organization exists is an outcome of a security authorization process, as it establishes the roles and responsibilities of the security personnel and stakeholders. Forcing the organization to enlist management support is an outcome of a security authorization process, as it involves the management in the risk decision making and approval process. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, p. 419; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 3, p. 150.
Which of the following information MUST be provided for user account provisioning?
Full name
Unique identifier
Security question
Date of birth
According to the CISSP CBK Official Study Guide, the information that must be provided for user account provisioning is the unique identifier. User account provisioning is the process of creating, managing, and deleting user accounts or identities in a system or network, using the appropriate policies, procedures, and tools, and it supports the principles of identification, authentication, authorization, and accountability for the accounts and the resources, data, and information they access. The unique identifier is the essential element of an account because it is what identifies and distinguishes that account from every other account in the system or network, for example a username, an email address, or an employee number. Requiring a unique identifier prevents duplication, confusion, or collision between accounts, which could otherwise enable attacks such as impersonation, spoofing, or masquerading.
Full name is not information that must be provided for provisioning. A full name (for example, John Smith or Jane Doe) personalizes the account and aids communication, and a unique identifier may be derived from or linked to it, but the name itself is not the element that uniquely distinguishes the account. A security question is likewise not required. A security question and answer (for example, "What is your mother’s maiden name?" or "What is the name of your first pet?") adds an additional layer of verification and supports recovery or reset of a lost, forgotten, or compromised credential, and it may be associated with the unique identifier, but it is not the fundamental element that identifies the account. Date of birth is also not required. A date of birth is a personal attribute that may be associated with the account, but like the full name and the security question it does not serve to identify or distinguish the account from others in the system or network. References: CISSP CBK Official Study Guide.
A health care provider is considering Internet access for their employees and patients. Which of the following is the organization's MOST secure solution for protection of data?
Public Key Infrastructure (PKI) and digital signatures
Trusted server certificates and passphrases
User ID and password
Asymmetric encryption and User ID
Public Key Infrastructure (PKI) is a system that provides the services and mechanisms for creating, managing, distributing, using, storing, and revoking digital certificates and public keys. Digital signatures are a type of electronic signature that use public key cryptography to verify the authenticity and integrity of a message or document. A health care provider that is considering Internet access for their employees and patients should use PKI and digital signatures as the most secure solution for protection of data, because they provide confidentiality, integrity, authentication, non-repudiation, and accountability for the data exchanged over the Internet. The other options are not as secure as PKI and digital signatures, because they do not provide all the security services or they rely on weaker forms of encryption or authentication. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, page 211; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, page 178
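As a rough illustration of the signing and verification that a PKI makes possible, the sketch below uses the standard Java security APIs; certificate issuance, key management, and revocation are omitted, and the key size, algorithm, and message are illustrative choices rather than values from the cited sources.

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class SignatureDemo {
    public static void main(String[] args) throws Exception {
        // Generate an RSA key pair; in a real PKI the public key would be bound
        // to its owner's identity in an X.509 certificate issued by a CA.
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair keys = kpg.generateKeyPair();

        byte[] record = "patient lab results".getBytes(StandardCharsets.UTF_8); // sample data

        // Sign with the private key: provides integrity and non-repudiation.
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(keys.getPrivate());
        signer.update(record);
        byte[] signature = signer.sign();

        // Verify with the public key: any change to the record invalidates it.
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(keys.getPublic());
        verifier.update(record);
        System.out.println("signature valid: " + verifier.verify(signature));
    }
}
```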
As a best practice, the Security Assessment Report (SAR) should include which of the following sections?
Data classification policy
Software and hardware inventory
Remediation recommendations
Names of participants
A Security Assessment Report (SAR) is a document that summarizes the findings and recommendations of a security assessment. A SAR should include the following sections: an executive summary, a scope and methodology, a threat and risk analysis, a vulnerability analysis, a security control assessment, and remediation recommendations. Remediation recommendations are the best practices that the SAR should include, as they provide the actions that need to be taken to address the identified security gaps and risks. Data classification policy, software and hardware inventory, and names of participants are not essential sections of a SAR, although they may be included as supporting information or appendices.
An application developer is deciding on the amount of idle session time that the application allows before a timeout. The BEST reason for determining the session timeout requirement is
organization policy.
industry best practices.
industry laws and regulations.
management feedback.
The session timeout requirement is the maximum amount of time that a user can be inactive on an application before the session is terminated and the user is required to re-authenticate. The best reason for determining the session timeout requirement is the organization policy, as it reflects the organization’s risk appetite, security objectives, and compliance obligations. The organization policy should specify the appropriate session timeout value for different types of applications and data, based on their sensitivity and criticality.
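A minimal sketch of how an application might enforce such a policy-driven idle timeout is shown below; the 15-minute value is an assumed example standing in for whatever the organization policy specifies.

```java
import java.time.Duration;
import java.time.Instant;

public class SessionTimeoutCheck {
    // The timeout should come from organization policy, not be hard-coded;
    // 15 minutes here is only an assumed example value.
    private static final Duration TIMEOUT = Duration.ofMinutes(15);

    private Instant lastActivity = Instant.now();

    public void touch() {            // call on every authenticated request
        lastActivity = Instant.now();
    }

    public boolean isExpired() {     // if true, terminate the session and force re-authentication
        return Duration.between(lastActivity, Instant.now()).compareTo(TIMEOUT) > 0;
    }
}
```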
What operations role is responsible for protecting the enterprise from corrupt or contaminated media?
Information security practitioner
Information librarian
Computer operator
Network administrator
According to the CISSP CBK Official Study Guide, an information librarian is responsible for managing, maintaining, and protecting the organization’s knowledge resources, including ensuring that media (such as hard drives, USB drives, and CDs) are free from corruption or contamination, so that the enterprise’s data integrity is preserved. The information librarian also catalogs, indexes, and classifies the media, provides access and retrieval services to authorized users, and may perform backup, recovery, and disposal of media as well as monitor and audit its usage and security. An information security practitioner is not the operations role responsible for protecting the enterprise from corrupt or contaminated media, although practitioners may define and enforce the policies and standards for media security; the term covers people who plan, design, implement, test, operate, or audit the organization’s information security systems and controls and who provide guidance, advice, and training to other roles and stakeholders. A computer operator is not the responsible role either, although operators use and handle media; they operate and control computer systems and devices such as servers, workstations, printers, and scanners, load and unload media, run and monitor programs and applications, troubleshoot and resolve errors, and report and document activities and incidents. A network administrator is also not the responsible role, although administrators may configure and connect media; they administer and manage network systems and devices such as routers, switches, firewalls, and wireless access points, install and update network software and hardware, set and maintain network parameters and security, optimize and troubleshoot network performance and availability, and support network users and clients. References: CISSP CBK Official Study Guide.
Which of the following is the MOST effective method of mitigating data theft from an active user workstation?
Implement full-disk encryption
Enable multifactor authentication
Deploy file integrity checkers
Disable use of portable devices
The most effective method of mitigating data theft from an active user workstation is to disable use of portable devices. Portable devices are the devices that can be easily connected to or disconnected from a workstation, such as USB drives, external hard drives, flash drives, or smartphones. Portable devices can pose a risk of data theft from an active user workstation, as they can be used to copy, transfer, or exfiltrate data from the workstation, either by malicious insiders or by unauthorized outsiders. By disabling use of portable devices, the data theft from an active user workstation can be prevented or reduced.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6, page 330; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 6, page 291
Which of the following is the MOST important output from a mobile application threat modeling exercise according to Open Web Application Security Project (OWASP)?
Application interface entry and endpoints
The likelihood and impact of a vulnerability
Countermeasures and mitigations for vulnerabilities
A data flow diagram for the application and attack surface analysis
The most important output from a mobile application threat modeling exercise according to OWASP is a data flow diagram for the application and attack surface analysis. A data flow diagram is a graphical representation of the data flows and processes within the application, as well as the external entities and boundaries that interact with the application. An attack surface analysis is a systematic evaluation of the potential vulnerabilities and threats that can affect the application, based on the data flow diagram and other sources of information. These two outputs can help identify and prioritize the security risks and requirements for the mobile application, as well as the countermeasures and mitigations for the vulnerabilities.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, page 487.
Retaining system logs for six months or longer can be valuable for what activities?
Disaster recovery and business continuity
Forensics and incident response
Identity and authorization management
Physical and logical access control
Retaining system logs for six months or longer can be valuable for forensics and incident response activities. System logs are records of events that occur on a system, such as user actions, system errors, security alerts, network traffic, etc. System logs can provide useful evidence and information for investigating and analyzing security incidents, such as the source, scope, impact, and timeline of the incident, as well as the potential vulnerabilities, threats, and attackers involved. System logs can also help with incident recovery and remediation, as well as with improving security controls and policies. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7: Security Operations, p. 437; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 7: Security Operations, p. 849.
If compromised, which of the following would lead to the exploitation of multiple virtual machines?
Virtual device drivers
Virtual machine monitor
Virtual machine instance
Virtual machine file system
If compromised, the virtual machine monitor would lead to the exploitation of multiple virtual machines. The virtual machine monitor, also known as the hypervisor, is the software layer that creates and manages the virtual machines on a physical host. The virtual machine monitor controls the allocation and distribution of the hardware resources, such as CPU, memory, disk, and network, among the virtual machines. The virtual machine monitor also provides the isolation and separation of the virtual machines from each other and from the physical host. If the virtual machine monitor is compromised, the attacker can gain access to all the virtual machines and their data, as well as the physical host and its resources.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, page 269; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, page 234
In which identity management process is the subject’s identity established?
Trust
Provisioning
Authorization
Enrollment
According to the CISSP CBK Official Study Guide, the identity management process in which the subject’s identity is established is enrollment. Enrollment is the process of registering a subject into an identity management system, such as a user into an authentication system or a device into a network. It is the step in which the subject’s identity is established because it verifies and validates that identity, collects and stores the subject’s identity attributes (such as name, email address, or biometrics), and issues and assigns the subject’s credentials (such as a username, password, or certificate), creating and maintaining the identity record or profile that later enables the subject’s access to and use of the system or network. Trust is not the process in which the identity is established, although it may influence enrollment: trust is the degree of confidence or assurance that one subject or entity has in another, and it may determine how rigorous the identity verification and validation must be and which attributes and credentials are required or provided. Provisioning is not the process in which the identity is established, although it typically follows or depends on enrollment: provisioning creates, assigns, and configures the subject’s account or resources with the access rights and permissions required by the subject’s role and responsibilities and by the security policies and standards of the system or network, but it does not itself verify the identity or collect the identity attributes or credentials. Authorization is also not the process in which the identity is established, although it too follows or depends on enrollment: authorization grants or denies a subject’s access to or use of an object or resource based on the subject’s identity, role, or credentials and the security policies and rules of the system or network, but it does not verify the identity or collect the identity attributes or credentials. References: CISSP CBK Official Study Guide.
When in the Software Development Life Cycle (SDLC) MUST software security functional requirements be defined?
After the system preliminary design has been developed and the data security categorization has been performed
After the vulnerability analysis has been performed and before the system detailed design begins
After the system preliminary design has been developed and before the data security categorization begins
After the business functional analysis and the data security categorization have been performed
Software security functional requirements must be defined after the business functional analysis and the data security categorization have been performed in the Software Development Life Cycle (SDLC). The SDLC is a process that involves planning, designing, developing, testing, deploying, operating, and maintaining a system, using various models and methodologies, such as waterfall, spiral, agile, or DevSecOps. The SDLC can be divided into several phases, each with its own objectives and activities, such as system initiation, system acquisition and development, system implementation, and system operations and maintenance.
Software security functional requirements are the specific and measurable security features and capabilities that the system must provide to meet the security objectives and requirements. Software security functional requirements are derived from the business functional analysis and the data security categorization, which are two tasks that are performed in the system initiation phase of the SDLC. The business functional analysis is the process of identifying and documenting the business functions and processes that the system must support and enable, such as the inputs, outputs, workflows, and tasks. The data security categorization is the process of determining the security level and impact of the system and its data, based on the confidentiality, integrity, and availability criteria, and applying the appropriate security controls and measures. Software security functional requirements must be defined after the business functional analysis and the data security categorization have been performed, because they can ensure that the system design and development are consistent and compliant with the security objectives and requirements, and that the system security is aligned and integrated with the business functions and processes.
The other options are not the phases of the SDLC when the software security functional requirements must be defined, but rather phases that involve other tasks or activities related to the system design and development. After the system preliminary design has been developed and the data security categorization has been performed is not the phase when the software security functional requirements must be defined, but rather the phase when the system architecture and components are designed, based on the system scope and objectives, and the data security categorization is verified and validated. After the vulnerability analysis has been performed and before the system detailed design begins is not the phase when the software security functional requirements must be defined, but rather the phase when the system design and components are evaluated and tested for the security effectiveness and compliance, and the system detailed design is developed, based on the system architecture and components. After the system preliminary design has been developed and before the data security categorization begins is not the phase when the software security functional requirements must be defined, but rather the phase when the system architecture and components are designed, based on the system scope and objectives, and the data security categorization is initiated and planned.
A Java program is being developed to read a file from computer A and write it to computer B, using a third computer C. The program is not working as expected. What is the MOST probable security feature of Java preventing the program from operating as intended?
Least privilege
Privilege escalation
Defense in depth
Privilege bracketing
The most probable security feature of Java preventing the program from operating as intended is least privilege. Least privilege is a principle that states that a subject (such as a user, a process, or a program) should only have the minimum amount of access or permissions that are necessary to perform its function or task. Least privilege can help to reduce the attack surface and the potential damage of a system or network, by limiting the exposure and impact of a subject in case of a compromise or misuse.
Java implements the principle of least privilege through its security model, which consists of several components, such as the class loader, the bytecode verifier, the security manager, and the permission and policy framework that underpins the applet-style sandbox.
In this question, the Java program is being developed to read a file from computer A and write it to computer B, using a third computer C. This means that the Java program needs to have the permissions to perform the file I/O and the network communication operations, which are considered as sensitive or risky actions by the Java security model. However, if the Java program is running on computer C with the default or the minimal security permissions, such as in the Java Security Sandbox, then it will not be able to perform these operations, and the program will not work as expected. Therefore, the most probable security feature of Java preventing the program from operating as intended is least privilege, which limits the access or permissions of the Java program based on its source, signer, or policy.
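As a hedged sketch of what this looks like in practice, the hypothetical snippet below attempts the two sensitive operations on a JVM where a restrictive policy (or the legacy SecurityManager sandbox) is in effect, and the comments show the kind of policy grants that would have to be added for it to succeed. The paths, host names, and port are invented for illustration, and the SecurityManager mechanism is deprecated in recent Java releases.

```java
import java.io.FileReader;
import java.net.Socket;

public class CopyDemo {
    public static void main(String[] args) {
        // Under a restrictive policy, both calls below are checked against
        // java.io.FilePermission and java.net.SocketPermission and fail with a
        // SecurityException unless the policy explicitly grants them, e.g.:
        //   permission java.io.FilePermission   "/staging/-", "read,write";
        //   permission java.net.SocketPermission "computerB:1024-", "connect,resolve";
        // (paths, host names, and ports here are hypothetical)
        try (FileReader in = new FileReader("/staging/input.dat"); // file I/O check
             Socket out = new Socket("computerB", 9000)) {          // network check
            // ... copy bytes from 'in' to the socket's output stream ...
        } catch (SecurityException denied) {
            System.err.println("Blocked by least-privilege policy: " + denied.getMessage());
        } catch (Exception io) {
            System.err.println("I/O error: " + io.getMessage());
        }
    }
}
```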
The other options are not the security features of Java preventing the program from operating as intended, but rather concepts or techniques that are related to security in general or in other contexts. Privilege escalation is a technique that allows a subject to gain higher or unauthorized access or permissions than what it is supposed to have, by exploiting a vulnerability or a flaw in a system or network. Privilege escalation can help an attacker to perform malicious actions or to access sensitive resources or data, by bypassing the security controls or restrictions. Defense in depth is a concept that states that a system or network should have multiple layers or levels of security, to provide redundancy and resilience in case of a breach or an attack. Defense in depth can help to protect a system or network from various threats and risks, by using different types of security measures and controls, such as the physical, the technical, or the administrative ones. Privilege bracketing is a technique that allows a subject to temporarily elevate or lower its access or permissions, to perform a specific function or task, and then return to its original or normal level. Privilege bracketing can help to reduce the exposure and impact of a subject, by minimizing the time and scope of its higher or lower access or permissions.
Which of the following is the BEST method to prevent malware from being introduced into a production environment?
Purchase software from a limited list of retailers
Verify the hash key or certificate key of all updates
Do not permit programs, patches, or updates from the Internet
Test all new software in a segregated environment
Testing all new software in a segregated environment is the best method to prevent malware from being introduced into a production environment. Malware is any malicious software that can harm or compromise the security, availability, integrity, or confidentiality of a system or data. Malware can be introduced into a production environment through various sources, such as software downloads, updates, patches, or installations. Testing all new software in a segregated environment involves verifying and validating the functionality and security of the software before deploying it to the production environment, using a separate system or network that is isolated and protected from the production environment. Testing all new software in a segregated environment can provide several benefits, such as detecting malware or other defects before they can reach production systems and confining any damage from infected software to the isolated test environment.
The other options are not the best methods to prevent malware from being introduced into a production environment, but rather methods that can reduce or mitigate the risk of malware, but not eliminate it. Purchasing software from a limited list of retailers is a method that can reduce the risk of malware from being introduced into a production environment, but not prevent it. This method involves obtaining software only from trusted and reputable sources, such as official vendors or distributors, that can provide some assurance of the quality and security of the software. However, this method does not guarantee that the software is free of malware, as it may still contain hidden or embedded malware, or it may be tampered with or compromised during the delivery or installation process. Verifying the hash key or certificate key of all updates is a method that can reduce the risk of malware from being introduced into a production environment, but not prevent it. This method involves checking the authenticity and integrity of the software updates, patches, or installations, by comparing the hash key or certificate key of the software with the expected or published value, using cryptographic techniques and tools. However, this method does not guarantee that the software is free of malware, as it may still contain malware that is not detected or altered by the hash key or certificate key, or it may be subject to a man-in-the-middle attack or a replay attack that can intercept or modify the software or the key. Not permitting programs, patches, or updates from the Internet is a method that can reduce the risk of malware from being introduced into a production environment, but not prevent it. This method involves restricting or blocking the access or download of software from the Internet, which is a common and convenient source of malware, by applying and enforcing the appropriate security policies and controls, such as firewall rules, antivirus software, or web filters. However, this method does not guarantee that the software is free of malware, as it may still be obtained or infected from other sources, such as removable media, email attachments, or network shares.
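For the hash-verification option discussed above, a minimal sketch of comparing a downloaded update’s SHA-256 digest against the vendor-published value might look like the following; the file path and expected digest are placeholders, and obtaining the published value over an authentic channel is the hard part this sketch deliberately ignores.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;

public class UpdateHashCheck {
    // Compare the SHA-256 digest of a downloaded update against the digest the
    // vendor published. Reads the whole file into memory, which is fine for a sketch.
    public static boolean matchesPublishedHash(Path update, String publishedHex) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256")
                                     .digest(Files.readAllBytes(update));
        StringBuilder actual = new StringBuilder();
        for (byte b : digest) {
            actual.append(String.format("%02x", b));
        }
        return actual.toString().equalsIgnoreCase(publishedHex);
    }

    public static void main(String[] args) throws Exception {
        Path patch = Path.of("downloads/app-update.bin"); // placeholder path
        String published = "0123abcd...";                 // placeholder digest from the vendor
        System.out.println("update authentic: " + matchesPublishedHash(patch, published));
    }
}
```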
What is the BEST approach to addressing security issues in legacy web applications?
Debug the security issues
Migrate to newer, supported applications where possible
Conduct a security assessment
Protect the legacy application with a web application firewall
Migrating to newer, supported applications where possible is the best approach to addressing security issues in legacy web applications. Legacy web applications are web applications that are outdated, unsupported, or incompatible with the current technologies and standards. Legacy web applications may have various security issues, such as unpatched or unpatchable vulnerabilities, weak or outdated encryption and authentication mechanisms, and reliance on components that no longer receive vendor support.
Migrating to newer, supported applications where possible is the best approach to addressing security issues in legacy web applications, because it can provide several benefits, such as restoring vendor support and ongoing security patching, removing the outdated components at the root of the issues, and aligning the application with current technologies and standards.
The other options are not the best approaches to addressing security issues in legacy web applications, but rather approaches that can mitigate or remediate the security issues, but not eliminate or prevent them. Debugging the security issues is an approach that can mitigate the security issues in legacy web applications, but not the best approach, because it involves identifying and fixing the errors or defects in the code or logic of the web applications, which may be difficult or impossible to do for the legacy web applications that are outdated or unsupported. Conducting a security assessment is an approach that can remediate the security issues in legacy web applications, but not the best approach, because it involves evaluating and testing the security effectiveness and compliance of the web applications, using various techniques and tools, such as audits, reviews, scans, or penetration tests, and identifying and reporting any security weaknesses or gaps, which may not be sufficient or feasible to do for the legacy web applications that are incompatible or obsolete. Protecting the legacy application with a web application firewall is an approach that can mitigate the security issues in legacy web applications, but not the best approach, because it involves deploying and configuring a web application firewall, which is a security device or software that monitors and filters the web traffic between the web applications and the users or clients, and blocks or allows the web requests or responses based on the predefined rules or policies, which may not be effective or efficient to do for the legacy web applications that have weak or outdated encryption or authentication mechanisms.
Which of the following is the PRIMARY risk with using open source software in a commercial software construction?
Lack of software documentation
License agreements requiring release of modified code
Expiration of the license agreement
Costs associated with support of the software
The primary risk with using open source software in a commercial software construction is license agreements requiring release of modified code. Open source software is software that uses publicly available source code, which can be seen, modified, and distributed by anyone. Open source software has some advantages, such as being affordable and flexible, but it also has some disadvantages, such as being potentially insecure or unsupported.
One of the main disadvantages of using open source software in a commercial software construction is the license agreements that govern the use and distribution of the open source software. License agreements are legal contracts that specify the rights and obligations of the parties involved in the software, such as the original authors, the developers, and the users. License agreements can vary in terms of their terms and conditions, such as the scope, the duration, or the fees of the software.
Some of the common types of license agreements for open source software are permissive licenses, which impose few conditions on reuse and redistribution, and copyleft licenses, which require derivative works to be released under the same or compatible terms.
The primary risk with using open source software in a commercial software construction is license agreements requiring release of modified code, which are usually associated with copyleft licenses. This means that if a commercial software construction uses or incorporates open source software that is licensed under a copyleft license, then it must also release its own source code and any modifications or derivatives of it, under the same or compatible copyleft license. This can pose a significant risk for the commercial software construction, as it may lose its competitive advantage, intellectual property, or revenue, by disclosing its source code and allowing others to use, modify, or distribute it.
The other options are not the primary risks with using open source software in a commercial software construction, but rather secondary or minor risks that may or may not apply to the open source software. Lack of software documentation is a secondary risk with using open source software in a commercial software construction, as it may affect the quality, usability, or maintainability of the open source software, but it does not necessarily affect the rights or obligations of the commercial software construction. Expiration of the license agreement is a minor risk with using open source software in a commercial software construction, as it may affect the availability or continuity of the open source software, but it is unlikely to happen, as most open source software licenses are perpetual or indefinite. Costs associated with support of the software is a secondary risk with using open source software in a commercial software construction, as it may affect the reliability, security, or performance of the open source software, but it can be mitigated or avoided by choosing the open source software that has adequate or alternative support options.
Which of the following is a web application control that should be put into place to prevent exploitation of Operating System (OS) bugs?
Check arguments in function calls
Test for the security patch level of the environment
Include logging functions
Digitally sign each application module
Testing for the security patch level of the environment is the web application control that should be put into place to prevent exploitation of Operating System (OS) bugs. OS bugs are errors or defects in the code or logic of the OS that can cause the OS to malfunction or behave unexpectedly. OS bugs can be exploited by attackers to gain unauthorized access, disrupt business operations, or steal or leak sensitive data. Testing for the security patch level of the environment is the web application control that should be put into place to prevent exploitation of OS bugs, because it can provide several benefits, such as confirming that known OS bugs have already been fixed by the applicable patches, identifying environments that are missing patches before the application is exposed to them, and reducing the window in which attackers can exploit publicly known OS vulnerabilities.
The other options are not the web application controls that should be put into place to prevent exploitation of OS bugs, but rather web application controls that can prevent or mitigate other types of web application attacks or issues. Checking arguments in function calls is a web application control that can prevent or mitigate buffer overflow attacks, which are attacks that exploit the vulnerability of the web application code that does not properly check the size or length of the input data that is passed to a function or a variable, and overwrite the adjacent memory locations with malicious code or data. Including logging functions is a web application control that can prevent or mitigate unauthorized access or modification attacks, which are attacks that exploit the lack of or weak authentication or authorization mechanisms of the web applications, and access or modify the web application data or functionality without proper permission or verification. Digitally signing each application module is a web application control that can prevent or mitigate code injection or tampering attacks, which are attacks that exploit the vulnerability of the web application code that does not properly validate or sanitize the input data that is executed or interpreted by the web application, and inject or modify the web application code with malicious code or data.
The configuration management and control task of the certification and accreditation process is incorporated in which phase of the System Development Life Cycle (SDLC)?
System acquisition and development
System operations and maintenance
System initiation
System implementation
The configuration management and control task of the certification and accreditation process is incorporated in the system acquisition and development phase of the System Development Life Cycle (SDLC). The SDLC is a process that involves planning, designing, developing, testing, deploying, operating, and maintaining a system, using various models and methodologies, such as waterfall, spiral, agile, or DevSecOps. The SDLC can be divided into several phases, each with its own objectives and activities, such as system initiation, system acquisition and development, system implementation, and system operations and maintenance.
The certification and accreditation process is a process that involves assessing and verifying the security and compliance of a system, and authorizing and approving the system operation and maintenance, using various standards and frameworks, such as NIST SP 800-37 or ISO/IEC 27001. The certification and accreditation process can be divided into several tasks, each with its own objectives and activities, such as security categorization, security planning, configuration management and control, security assessment, security authorization, and security monitoring.
The configuration management and control task of the certification and accreditation process is incorporated in the system acquisition and development phase of the SDLC, because it can ensure that the system design and development are consistent and compliant with the security objectives and requirements, and that the system changes are controlled and documented. Configuration management and control is a process that involves establishing and maintaining the baseline and the inventory of the system components and resources, such as hardware, software, data, or documentation, and tracking and recording any modifications or updates to the system components and resources, using various techniques and tools, such as version control, change control, or configuration audits. Configuration management and control can provide several benefits, such as keeping the system baseline and inventory accurate, ensuring that every change is controlled, documented, and traceable, and preserving the system’s compliance with its security requirements as it evolves.
The other options are not the phases of the SDLC that incorporate the configuration management and control task of the certification and accreditation process, but rather phases that involve other tasks of the certification and accreditation process. System operations and maintenance is a phase of the SDLC that incorporates the security monitoring task of the certification and accreditation process, because it can ensure that the system operation and maintenance are consistent and compliant with the security objectives and requirements, and that the system security is updated and improved. System initiation is a phase of the SDLC that incorporates the security categorization and security planning tasks of the certification and accreditation process, because it can ensure that the system scope and objectives are defined and aligned with the security objectives and requirements, and that the security plan and policy are developed and documented. System implementation is a phase of the SDLC that incorporates the security assessment and security authorization tasks of the certification and accreditation process, because it can ensure that the system deployment and installation are evaluated and verified for the security effectiveness and compliance, and that the system operation and maintenance are authorized and approved based on the risk and impact analysis and the security objectives and requirements.
An organization recently suffered from a web-application attack that resulted in stolen user session cookie information. The attacker was able to obtain the information when a user’s browser executed a script upon visiting a compromised website. What type of attack MOST likely occurred?
Cross-Site Scripting (XSS)
Extensible Markup Language (XML) external entities
SQL injection (SQLI)
Cross-Site Request Forgery (CSRF)
Cross-Site Scripting (XSS) is a type of web-application attack that results in stolen user session cookie information, when a user’s browser executes a script upon visiting a compromised website. XSS occurs when an attacker injects malicious code, usually in the form of a script, into a web page or application that is viewed by other users. The script can then access or manipulate the user’s browser, session, or data, such as cookies, credentials, or personal information. XSS can be classified into three types: reflected, stored, and DOM-based. Reflected XSS occurs when the script is embedded in a URL or a form input that is reflected back to the user by the web server. Stored XSS occurs when the script is stored in a database or a file on the web server, and is displayed to the user when they visit a specific page or application. DOM-based XSS occurs when the script is executed by the user’s browser due to a modification of the Document Object Model (DOM) of the web page or application. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 10, page 899; 100 CISSP Questions, Answers and Explanations, Question 19.
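One common mitigation for exactly this cookie-theft scenario is sketched below using the JDK’s HttpCookie class; the cookie name and attribute choices are illustrative, and output encoding of untrusted data remains the primary XSS defense, with this control only limiting the damage if a script does execute.

```java
import java.net.HttpCookie;

public class SessionCookieHardening {
    // Marking the session cookie HttpOnly keeps injected scripts from reading it
    // via document.cookie, which blunts the cookie theft described above;
    // Secure restricts the cookie to TLS connections.
    public static HttpCookie hardenedSessionCookie(String sessionId) {
        HttpCookie cookie = new HttpCookie("JSESSIONID", sessionId); // name is illustrative
        cookie.setHttpOnly(true);
        cookie.setSecure(true);
        return cookie;
    }
}
```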
Which of the following types of firewall only examines the “handshaking” between packets before forwarding traffic?
Proxy firewalls
Host-based firewalls
Circuit-level firewalls
Network Address Translation (NAT) firewalls
Circuit-level firewalls are a type of firewall that only examines the “handshaking” between packets before forwarding traffic. Circuit-level firewalls operate at the transport layer of the OSI model, and they establish a virtual circuit or session between the source and the destination hosts. Circuit-level firewalls do not inspect the content or the header of the packets, but they only verify that the packets belong to a valid and established session. Circuit-level firewalls are faster and less resource-intensive than other types of firewalls, but they provide less security and visibility. The other options are not correct. Proxy firewalls are a type of firewall that act as an intermediary between the source and the destination hosts, and they inspect and filter the packets at the application layer of the OSI model. Host-based firewalls are a type of firewall that are installed and configured on individual hosts, and they protect the hosts from incoming and outgoing network traffic. Network Address Translation (NAT) firewalls are a type of firewall that modify the source or the destination IP addresses of the packets, and they provide a layer of obfuscation and security for the internal network hosts. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5: Communication and Network Security, page 589. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5: Communication and Network Security, page 590.
In a multi-tenant cloud environment, what approach will secure logical access to assets?
Hybrid cloud
Transparency/Auditability of administrative access
Controlled configuration management (CM)
Virtual private cloud (VPC)
A virtual private cloud (VPC) is an approach that will secure logical access to assets in a multi-tenant cloud environment. A VPC is a segment of a public cloud that is isolated and dedicated to a specific customer or tenant. A VPC enables the customer to have more control and security over their cloud resources, such as compute, storage, or network. A VPC can also be connected to the customer’s on-premises network or other VPCs through a secure VPN tunnel or a dedicated connection. A VPC can prevent unauthorized or malicious access to the customer’s assets from other tenants or external parties. A hybrid cloud is a combination of public and private clouds that are integrated and interoperable. A hybrid cloud does not necessarily secure logical access to assets in a multi-tenant cloud environment, as it depends on the security measures and controls implemented by the cloud providers and the customer. Transparency/auditability of administrative access is a principle or a practice that requires the cloud provider to disclose and document the access and actions of their administrators on the customer’s cloud resources. Transparency/auditability of administrative access does not secure logical access to assets in a multi-tenant cloud environment, as it does not prevent or restrict the access, but rather monitors and reports it. Controlled configuration management (CM) is a process or a function that ensures the consistency and integrity of the cloud resources and their configurations. Controlled CM does not secure logical access to assets in a multi-tenant cloud environment, as it does not address the access control or the isolation of the cloud resources. References: Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 3: Security Architecture and Engineering, page 202.
Change management policies and procedures belong to which of the following types of controls?
Directive
Detective
Corrective
Preventative
Change management policies and procedures belong to the type of controls that are directive. Controls are the measures and the mechanisms that are used to protect and safeguard the organization’s information systems and assets, and to ensure that they comply with the organization’s security and business objectives. Controls can be classified into different types, based on their purpose, function, or nature, such as preventive, detective, corrective, deterrent, compensating, or recovery controls. Directive controls are the type of controls that guide and regulate the actions and the behaviors of the organization’s staff, processes, and systems, and that ensure that they follow the organization’s policies, standards, and regulations. Directive controls can include policies, procedures, guidelines, standards, rules, regulations, laws, or contracts. Change management policies and procedures belong to the type of controls that are directive, as they provide the instructions and the requirements for managing and controlling any changes to the organization’s information systems and assets, and for ensuring that the changes align with the organization’s security and business requirements. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, page 18. Free daily CISSP practice questions, Question 4.
What is the BEST method if an investigator wishes to analyze a hard drive which may be used as evidence?
Leave the hard drive in place and use only verified and authenticated Operating Systems (OS) utilities ...
Log into the system and immediately make a copy of all relevant files to a Write Once, Read Many ...
Remove the hard drive from the system and make a copy of the hard drive's contents using imaging hardware.
Use a separate bootable device to make a copy of the hard drive before booting the system and analyzing the hard drive.
The best method if an investigator wishes to analyze a hard drive which may be used as evidence is to remove the hard drive from the system and make a copy of the hard drive’s contents using imaging hardware. Imaging hardware is a device that can create a bit-by-bit copy of the hard drive’s contents, including the deleted or hidden files, without altering or damaging the original hard drive. Imaging hardware can also verify the integrity of the copy by generating and comparing the hash values of the original and the copy. The copy can then be used for analysis, while the original can be preserved and stored in a secure location. This method can help to ensure the authenticity, reliability, and admissibility of the hard drive as evidence, as well as to prevent any potential tampering or contamination of the hard drive. Leaving the hard drive in place and using only verified and authenticated Operating Systems (OS) utilities, logging into the system and immediately making a copy of all relevant files to a Write Once, Read Many, or using a separate bootable device to make a copy of the hard drive before booting the system and analyzing the hard drive are not the best methods if an investigator wishes to analyze a hard drive which may be used as evidence, as they are either risky, incomplete, or inefficient methods that may compromise the integrity, validity, or quality of the hard drive as evidence. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 19: Digital Forensics, page 1049; CISSP Official (ISC)2 Practice Tests, Third Edition, Domain 8: Software Development Security, Question 8.15, page 306.
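The verification step the imaging hardware performs can be illustrated with a short sketch: hash both the original drive and the forensic copy and confirm the digests match. The device and image paths below are illustrative and assume a Linux analysis workstation with a write blocker in line.

```python
# Minimal sketch of forensic image verification: compare SHA-256 digests of the
# original drive and the bit-for-bit copy. Paths are illustrative.
import hashlib

def sha256_of(path: str, chunk_size: int = 1024 * 1024) -> str:
    """Stream the device or image file and return its SHA-256 digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

original = sha256_of("/dev/sdb")            # source drive, behind a write blocker
image = sha256_of("/evidence/case42.img")   # copy produced by the imaging hardware

# Matching digests support the claim that the copy is a faithful duplicate,
# which is what allows analysis of the copy in place of the original evidence.
assert original == image, "Image does not match the original drive"
print("Verified:", original)
```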
Which of the following are the BEST characteristics of security metrics?
They are generalized and provide a broad overview
They use acronyms and abbreviations to be concise
They use bar charts and Venn diagrams
They are consistently measured and quantitatively expressed
Security metrics are measurements that are used to evaluate and improve the effectiveness and efficiency of security processes, controls, and outcomes. The best characteristics of security metrics are that they are consistently measured and quantitatively expressed, as this ensures that the metrics are objective, reliable, comparable, and verifiable. Security metrics should not be generalized or provide a broad overview, as this may reduce their accuracy, relevance, and usefulness. Security metrics should not use acronyms and abbreviations to be concise, as this may cause confusion, ambiguity, or misunderstanding. Security metrics may use bar charts and Venn diagrams, or other graphical or visual representations, to illustrate or communicate the results, but this is not a characteristic of the metrics themselves, but rather a presentation technique.
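As an illustration of "consistently measured and quantitatively expressed," here is a minimal sketch of one hypothetical metric, patch compliance, computed with the same formula every reporting period so the results stay comparable. The host counts are invented sample data.

```python
# Minimal sketch: a security metric that is consistently measured (same formula
# each period) and quantitatively expressed (a single percentage). Sample data only.
from dataclasses import dataclass

@dataclass
class PatchSnapshot:
    period: str
    hosts_total: int
    hosts_fully_patched: int

    @property
    def compliance_pct(self) -> float:
        return 100.0 * self.hosts_fully_patched / self.hosts_total

history = [
    PatchSnapshot("2024-Q1", 420, 361),
    PatchSnapshot("2024-Q2", 435, 396),
    PatchSnapshot("2024-Q3", 440, 415),
]

# Because the measurement is identical each quarter, the trend is objective and comparable.
for snap in history:
    print(f"{snap.period}: {snap.compliance_pct:.1f}% hosts fully patched")
```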
In an IDEAL encryption system, who has sole access to the decryption key?
System owner
Data owner
Data custodian
System administrator
In an ideal encryption system, the data owner should have sole access to the decryption key, as the data owner is the person or entity that has the ultimate authority and responsibility over the data. The data owner should be able to control who can access, modify, or delete the data, and should be able to revoke or grant access rights as needed. The data owner should also be accountable for the security and compliance of the data. The system owner, the data custodian, and the system administrator are not the ideal candidates to have sole access to the decryption key, as they may not have the same level of authority, responsibility, or accountability over the data as the data owner. The system owner is the person or entity that owns the system that processes or stores the data, but may not have the same interest or knowledge of the data as the data owner. The data custodian is the person or entity that implements the security controls and procedures for the data, as defined by the data owner, but may not have the same rights or privileges to access the data as the data owner. The system administrator is the person or entity that manages the system that processes or stores the data, but may not have the same obligations or liabilities for the data as the data owner. References:
Individual access to a network is BEST determined based on
risk matrix.
value of the data.
business need.
data classification.
Access to a network is the ability or permission of a user or device to connect to or communicate with the network and its resources, such as servers, applications, or data. Access to a network should be controlled and restricted based on the principle of least privilege, which states that a user or device should only have the minimum level of access that is necessary to perform their legitimate tasks or functions. Therefore, the best way to determine the individual access to a network is based on the business need. The business need is the justification or rationale for the user or device to access the network and its resources, based on their role, responsibility, or function within the organization. The business need can help to define the access criteria, rules, or policies for the user or device, as well as to monitor, review, or revoke the access when needed. The business need can also help to balance the security and usability of the network and its resources, and to align the access control with the organization’s objectives and needs. Risk matrix, value of the data, or data classification are not the best ways to determine the individual access to a network, as they are more related to risk assessment, data security, or data management. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 14: Access Control, page 831; CISSP Official (ISC)2 Practice Tests, Third Edition, Domain 5: Identity and Access Management, Question 5.8, page 220.
Which of the following is the BEST statement for a professional to include as part of a business continuity (BC) procedure?
A full data backup must be done upon management request.
An incremental data backup must be done upon management request.
A full data backup must be done based on the needs of the business.
An incremental data backup must be done after each system change.
The best statement for a professional to include as part of a business continuity (BC) procedure is that a full data backup must be done based on the needs of the business. A business continuity procedure is a set of steps or actions that should be followed to ensure the continuity of critical business functions and processes in the event of a disruption or disaster. A full data backup is a type of backup that copies all the data from a system or resource to another storage medium, such as a tape, a disk, or a cloud. A full data backup provides the most complete and reliable recovery option, as it restores the system or resource to its original state. A full data backup must be done based on the needs of the business, meaning that it should consider the factors such as the recovery time objective (RTO), the recovery point objective (RPO), the frequency of data changes, the importance of data, the cost of backup, and the available resources. A full data backup must not be done upon management request, as this may not reflect the actual needs of the business, and may result in unnecessary or insufficient backup. An incremental data backup is a type of backup that copies only the data that has changed since the last backup, whether it was a full or an incremental backup. An incremental data backup saves time and space, but it requires more steps and dependencies to restore the system or resource. An incremental data backup must not be done upon management request or after each system change, as this may not meet the needs of the business, and may cause inconsistency or redundancy in the backup. References:
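The distinction between the two backup types can be sketched in a few lines: a full pass copies everything, while an incremental pass copies only files modified since the previous backup's timestamp. The paths below are illustrative, and the scheduling would be driven by the business's RTO/RPO rather than ad hoc requests.

```python
# Minimal sketch contrasting full and incremental backups. Paths are illustrative.
import shutil
import time
from pathlib import Path

def full_backup(source: Path, target: Path) -> float:
    """Copy every file; return the timestamp to use as the next baseline."""
    shutil.copytree(source, target, dirs_exist_ok=True)
    return time.time()

def incremental_backup(source: Path, target: Path, since: float) -> float:
    """Copy only files changed after `since` (the previous backup's timestamp)."""
    for path in source.rglob("*"):
        if path.is_file() and path.stat().st_mtime > since:
            dest = target / path.relative_to(source)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(path, dest)
    return time.time()

baseline = full_backup(Path("/srv/payroll"), Path("/backup/full"))
# Later runs, scheduled according to the business's RPO, copy only the deltas:
baseline = incremental_backup(Path("/srv/payroll"), Path("/backup/incr-001"), baseline)
```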
Which of the following needs to be included in order for High Availability (HA) to continue operations during planned system outages?
Redundant hardware, disk spanning, and patching
Load balancing, power reserves, and disk spanning
Backups, clustering, and power reserves
Clustering, load balancing, and fault-tolerant options
High Availability (HA) is a system design goal that ensures the system or network can continue to operate and provide the expected level of service and performance during planned or unplanned outages or disruptions. To achieve HA, the system or network needs to have various components and features that enhance its reliability, availability, and resilience. Some of these components and features are clustering, load balancing, and fault-tolerant options. Clustering is the process of grouping two or more servers or devices together to act as a single system and provide redundancy and scalability. Load balancing is the process of distributing the workload or traffic among multiple servers or devices to optimize the performance and efficiency of the system or network. Fault-tolerant options are the mechanisms or techniques that enable the system or network to detect, isolate, and recover from faults or failures without affecting the service or performance. Clustering, load balancing, and fault-tolerant options can help to achieve HA by ensuring that the system or network can continue to operate and provide the expected level of service and performance during planned system outages, such as maintenance, upgrade, or backup. Redundant hardware, disk spanning, and patching, load balancing, power reserves, and disk spanning, or backups, clustering, and power reserves are not the best components or features to include in order to achieve HA during planned system outages, as they are more related to data security, data protection, or data recovery aspects of HA. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 11: Security Operations, page 678; CISSP Official (ISC)2 Practice Tests, Third Edition, Domain 7: Security Operations, Question 7.10, page 274.
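Two of the ingredients named above, clustering and load balancing, can be sketched together: a cluster of redundant backends and a balancer that drains a node before planned maintenance so traffic continues on the remaining members. Backend addresses are illustrative.

```python
# Minimal sketch: round-robin load balancing over a cluster, with a member
# drained for a planned outage (e.g., patching) while service continues.
import itertools

class Cluster:
    def __init__(self, members):
        self.members = list(members)
        self.healthy = set(self.members)
        self._rr = itertools.cycle(self.members)

    def drain(self, member):
        """Remove a node from rotation before planned maintenance."""
        self.healthy.discard(member)

    def restore(self, member):
        self.healthy.add(member)

    def next_backend(self):
        """Round-robin over members, skipping drained or unhealthy ones."""
        for _ in range(len(self.members)):
            candidate = next(self._rr)
            if candidate in self.healthy:
                return candidate
        raise RuntimeError("no healthy backends available")

cluster = Cluster(["app-01:443", "app-02:443", "app-03:443"])
cluster.drain("app-02:443")                         # planned outage for patching
print([cluster.next_backend() for _ in range(4)])   # traffic continues on 01 and 03
```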
Which of the following is TRUE regarding equivalence class testing?
It is characterized by the stateless behavior of a process implemented in a function.
An entire partition can be covered by considering only one representative value from that partition.
Test inputs are obtained from the derived boundaries of the given functional specifications.
It is useful for testing communications protocols and graphical user interfaces.
Equivalence class testing is a software testing technique that divides the input domain of a program into a finite number of equivalence classes, or partitions, based on the expected behavior or output of the program. An equivalence class is a set of inputs that are equivalent in terms of satisfying the same condition or producing the same result. The main idea of equivalence class testing is that an entire partition can be covered by considering only one representative value from that partition, as all the values in the same partition are expected to behave the same way. This can reduce the number of test cases and increase the test coverage and efficiency. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8: Software Development Security, page 389; [Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 8: Software Development Security, page 529]
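A minimal sketch of the idea, using a hypothetical age-validation routine and pytest: each partition (negative, in range, above range, non-numeric) is covered by a single representative value.

```python
# Minimal sketch of equivalence class testing: one representative value per partition.
import pytest

def is_valid_age(value) -> bool:
    """Example function under test: accepts integer ages 0-120."""
    return isinstance(value, int) and 0 <= value <= 120

@pytest.mark.parametrize(
    "representative, expected",
    [
        (-5, False),    # partition: negative ages
        (35, True),     # partition: ages inside the valid range
        (200, False),   # partition: ages above the valid range
        ("35", False),  # partition: non-integer input
    ],
)
def test_age_partitions(representative, expected):
    # One value stands in for its entire equivalence class.
    assert is_valid_age(representative) is expected
```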
What is the PRIMARY purpose of creating and reporting metrics for a security awareness, training, and education program?
Make all stakeholders aware of the program's progress.
Measure the effect of the program on the organization's workforce.
Facilitate supervision of periodic training events.
Comply with legal regulations and document due diligence in security practices.
Metrics are used to evaluate the effectiveness and efficiency of a security awareness, training, and education program. They can help to identify the strengths and weaknesses of the program, the level of knowledge and skills of the workforce, the impact of the program on the organization’s security posture and culture, and the return on investment of the program. Metrics can also help to communicate the value and benefits of the program to the stakeholders, such as management, employees, customers, and regulators12. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, page 47; CISSP Practice Exam – FREE 20 Questions and Answers, Question 1.
The security organization is looking for a solution that could help them determine with a strong level of confidence that attackers have breached their network. Which solution is MOST effective at discovering a successful network breach?
Deploying a honeypot
Developing a sandbox
Installing an intrusion prevention system (IPS)
Installing an intrusion detection system (IDS)
A honeypot is a decoy system that is designed to attract and trap attackers who attempt to breach a network. A honeypot can provide a high level of confidence that attackers have breached the network, as it can record their activities, techniques, tools, and motives. A honeypot can also divert the attackers from the real network assets and alert the security organization of the intrusion. The other options are not as effective as deploying a honeypot at discovering a successful network breach. Developing a sandbox is a technique that isolates an application or a process from the rest of the system, such as a web browser or an email attachment. A sandbox can prevent malicious code from affecting the system, but it does not necessarily detect or identify the attackers. Installing an intrusion prevention system (IPS) is a technique that monitors and blocks malicious network traffic, such as denial-of-service, reconnaissance, or exploitation attempts. An IPS can prevent potential network breaches, but it may not discover successful ones that bypass or evade the IPS. Installing an intrusion detection system (IDS) is a technique that monitors and alerts on malicious network traffic, such as denial-of-service, reconnaissance, or exploitation attempts. An IDS can detect possible network breaches, but it may not discover successful ones that bypass or evade the IDS, or it may generate false positives that reduce the confidence level. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5: Communication and Network Security, pp. 311-312, 315-316; CISSP practice exam questions and answers | TechTarget, Question 4
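The "high confidence" property comes from the fact that nothing legitimate should ever touch the decoy, so any connection is itself the indicator. A minimal sketch of a low-interaction honeypot listener follows; the port and log path are illustrative.

```python
# Minimal sketch of a low-interaction honeypot: a listener on an unused port that
# should receive no legitimate traffic, so every connection is logged as an indicator.
import datetime
import socket

def run_honeypot(port: int = 2222, log_path: str = "honeypot.log") -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", port))
        srv.listen()
        while True:
            conn, (addr, rport) = srv.accept()
            with conn, open(log_path, "a") as log:
                stamp = datetime.datetime.utcnow().isoformat()
                # Record who touched the decoy; this is the alert the security team acts on.
                log.write(f"{stamp} connection from {addr}:{rport}\n")

if __name__ == "__main__":
    run_honeypot()
```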
Which of the following security testing strategies is BEST suited for companies with low to moderate security maturity?
Load Testing
White-box testing
Black-box testing
Performance testing
Black-box testing is a security testing strategy that simulates an external attack on a system or application, without any prior knowledge of its internal structure, design, or implementation. Black-box testing is best suited for companies with low to moderate security maturity, as it can reveal the most obvious and common vulnerabilities, such as misconfigurations, default credentials, or unpatched software. Black-box testing can also provide a realistic assessment of the system’s security posture from an attacker’s perspective. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6: Security Assessment and Testing, page 287; [Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 6: Security Assessment and Testing, page 413]
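As a flavor of what a black-box pass looks like, here is a minimal sketch that probes a target for commonly exposed paths with no knowledge of its internals. The target URL and path list are purely illustrative, and such probing should only be run against systems one is authorized to test.

```python
# Minimal sketch of a black-box check: probe externally visible paths only,
# with no knowledge of the application's design or source code.
import urllib.error
import urllib.request

TARGET = "https://staging.example.com"   # illustrative, authorized test target
COMMON_PATHS = ["/admin", "/phpinfo.php", "/.git/config", "/backup.zip"]

for path in COMMON_PATHS:
    try:
        resp = urllib.request.urlopen(TARGET + path, timeout=5)
        print(f"[!] {path} responded with HTTP {resp.status}")
    except urllib.error.HTTPError as err:
        print(f"[ ] {path}: HTTP {err.code}")
    except urllib.error.URLError as err:
        print(f"[ ] {path}: unreachable ({err.reason})")
```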
Which of the following statements BEST distinguishes a stateful packet inspection firewall from a stateless packet filter firewall?
The SPI inspects the flags on Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) packets.
The SPI inspects the traffic in the context of a session.
The SPI is capable of dropping packets based on a pre-defined rule set.
The SPI inspects traffic on a packet-by-packet basis.
A stateful packet inspection firewall is a type of firewall that keeps track of the state of network connections, such as TCP sessions or UDP datagrams, and inspects the traffic in the context of a session. This means that the SPI firewall can analyze the packets not only based on the header information, such as source and destination IP addresses, ports, and protocols, but also based on the content and sequence of the packets, such as flags, sequence numbers, and payloads. This allows the SPI firewall to detect and prevent more sophisticated attacks, such as fragmentation attacks, spoofing attacks, and application layer attacks, that a stateless packet filter firewall cannot. A stateless packet filter firewall is a type of firewall that inspects the traffic on a packet-by-packet basis, and only based on the header information. It does not keep track of the state of network connections, and does not examine the content or sequence of the packets. It is faster and simpler than a stateful packet inspection firewall, but also less secure and more vulnerable to attacks34. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6, page 457; 100 CISSP Questions, Answers and Explanations, Question 12.
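The core difference can be reduced to a session table: a stateless filter judges each packet by its header alone, while a stateful engine admits inbound packets only when they belong to a connection it has already seen established. A minimal sketch of that table, with illustrative addresses, follows.

```python
# Minimal sketch of stateful inspection: track outbound sessions and admit
# inbound packets only in the context of a known, established connection.
from dataclasses import dataclass

@dataclass(frozen=True)
class FlowKey:
    src: str
    sport: int
    dst: str
    dport: int

class StatefulInspector:
    def __init__(self):
        self.established: set[FlowKey] = set()

    def observe_outbound_syn(self, key: FlowKey) -> None:
        # An internal host opened the connection; remember the session.
        self.established.add(key)

    def allow_inbound(self, key: FlowKey) -> bool:
        # Inbound traffic is admitted only if it matches an existing session.
        reverse = FlowKey(key.dst, key.dport, key.src, key.sport)
        return reverse in self.established

fw = StatefulInspector()
fw.observe_outbound_syn(FlowKey("10.0.0.5", 52311, "203.0.113.7", 443))
print(fw.allow_inbound(FlowKey("203.0.113.7", 443, "10.0.0.5", 52311)))   # True
print(fw.allow_inbound(FlowKey("198.51.100.9", 443, "10.0.0.5", 52311)))  # False
```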
When conducting a security assessment of access controls, which activity is part of the data analysis phase?
Collect logs and reports.
Present solutions to address audit exceptions.
Categorize and identify evidence gathered during the audit.
Conduct statistical sampling of data transactions.
When conducting a security assessment of access controls, categorizing and identifying evidence gathered during the audit is an activity that is part of the data analysis phase. The data analysis phase is the stage of the security assessment process where the auditor examines and evaluates the data collected during the data gathering phase, and compares it with the predefined criteria, standards, and objectives. The data analysis phase involves categorizing and identifying the evidence gathered during the audit, such as logs, reports, records, interviews, observations, and tests, and determining whether they support or contradict the audit findings and conclusions. Collecting logs and reports, presenting solutions to address audit exceptions, and conducting statistical sampling of data transactions are not activities that are part of the data analysis phase, although they may be involved in other phases of the security assessment process. Collecting logs and reports is an activity that is part of the data gathering phase, which is the stage where the auditor obtains and verifies the relevant information and evidence for the audit. Presenting solutions to address audit exceptions is an activity that is part of the reporting phase, which is the stage where the auditor communicates the audit results and recommendations to the stakeholders. Conducting statistical sampling of data transactions is an activity that is part of the planning phase, which is the stage where the auditor defines the scope, objectives, criteria, and methodology of the audit. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, page 42. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1: Security and Risk Management, page 55.
To monitor the security of buried data lines inside the perimeter of a facility, which of the following is the MOST effective control?
Fencing around the facility with closed-circuit television (CCTV) cameras at all entry points
Ground sensors installed and reporting to a security event management (SEM) system
Steel casing around the facility ingress points
Regular sweeps of the perimeter, including manual inspection of the cable ingress points
The most effective control to monitor the security of buried data lines inside the perimeter of a facility is to use ground sensors installed and reporting to a security event management (SEM) system. Ground sensors are devices that detect and measure the physical changes or disturbances in the ground, such as vibration, pressure, or sound, caused by any movement or activity near the buried data lines. Ground sensors can report the detected signals to a security event management system, which is a system that collects, analyzes, and correlates the security events and alerts from various sources, such as sensors, cameras, or logs. A security event management system can help to identify and respond to any unauthorized or malicious attempts to access, tamper, or damage the buried data lines, and to alert the security personnel or authorities34. References: CISSP CBK, Fifth Edition, Chapter 5, page 435; 2024 Pass4itsure CISSP Dumps, Question 14.
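The reporting path described above can be sketched very simply: sensor readings flow to a SEM collector, which raises an alert when activity near the buried cable run exceeds a baseline. The sensor names, readings, and threshold below are illustrative only.

```python
# Minimal sketch: ground-sensor events forwarded to a SEM-style collector that
# alerts when vibration near the buried data lines exceeds a baseline threshold.
BASELINE_THRESHOLD = 0.8  # normalized vibration level considered suspicious (illustrative)

def sem_ingest(event: dict, alerts: list) -> None:
    """Correlate one sensor event; append an alert if it crosses the threshold."""
    if event["vibration"] > BASELINE_THRESHOLD:
        alerts.append(f"ALERT sensor={event['sensor']} level={event['vibration']:.2f}")

alerts: list = []
readings = [
    {"sensor": "duct-north-03", "vibration": 0.12},
    {"sensor": "duct-north-04", "vibration": 0.95},  # possible digging or tampering
]
for reading in readings:
    sem_ingest(reading, alerts)
print("\n".join(alerts))
```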
An organization is outsourcing its payroll system and is requesting to conduct a full audit on the third-party information technology (IT) systems. During the due diligence process, the third party provides a previous audit report on its IT systems.
Which of the following MUST be considered by the organization in order for the audit reports to be acceptable?
The audit assessment has been conducted by an independent assessor.
The audit reports have been signed by the third-party senior management.
The audit reports have been issued in the last six months.
The audit assessment has been conducted by an international audit firm.
The most important factor that the organization must consider in order for the audit reports to be acceptable is that the audit assessment has been conducted by an independent assessor. An independent assessor is a person or an entity that has no affiliation or interest with the third party or the organization, and that can perform the audit assessment objectively and impartially. An independent assessor can provide a credible and reliable evaluation of the third party’s information technology (IT) systems, and identify any risks, issues, or gaps that may affect the security, performance, or compliance of the outsourced payroll system. An independent assessor can also verify that the third party’s IT systems meet the organization’s requirements and expectations, and that the third party follows the best practices and standards for IT security and management. The audit reports being signed by the third-party senior management, being issued in the last six months, or being conducted by an international audit firm are not as critical as the audit assessment being conducted by an independent assessor, as they do not guarantee the quality, validity, or relevance of the audit reports, or they may not be applicable or feasible in all cases. References:
What is the benefit of an operating system (OS) feature that is designed to prevent an application from executing code from a non-executable memory region?
Identifies which security patches still need to be installed on the system
Stops memory resident viruses from propagating their payload
Reduces the risk of polymorphic viruses from encrypting their payload
Helps prevent certain exploits that store code in buffers
The benefit of an operating system (OS) feature that is designed to prevent an application from executing code from a non-executable memory region is that it helps prevent certain exploits that store code in buffers. This protection is commonly implemented as Data Execution Prevention (DEP), backed by the processor's no-execute (NX) bit, and it enforces a simple rule: memory regions that hold data, such as the stack, the heap, and other buffers, are marked non-executable, while only regions that legitimately contain program code, such as the code segment loaded by the OS, are marked executable. Many classic attacks, such as stack-based buffer overflows, work by writing attacker-supplied machine code (shellcode) into a data buffer and then redirecting execution into that buffer. When the OS refuses to execute code from a non-executable region, the injected payload cannot run, and the attempt typically terminates the process instead of compromising it. This feature does not identify missing security patches, and it does not specifically stop memory-resident or polymorphic viruses; its value lies in blocking exploits that rely on executing code stored in data buffers.
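The distinction the feature relies on, writable memory that is not executable, can be shown with a small Unix-only sketch using anonymous memory mappings. This is an illustration of the permission split, not the DEP implementation itself; it deliberately does not attempt to jump into the non-executable region, which would raise a segmentation fault.

```python
# Minimal sketch (Linux/Unix only): memory can be writable without being executable.
# Shellcode written into the non-executable region cannot run; jumping into it
# would trigger a fault, which is how DEP/NX defeats classic buffer exploits.
import mmap

PAGE = mmap.PAGESIZE

# Writable data region: fine for buffers, but the CPU will refuse to execute it.
data_region = mmap.mmap(-1, PAGE, prot=mmap.PROT_READ | mmap.PROT_WRITE)

# Executable region: only code the OS/loader deliberately marks this way may run.
code_region = mmap.mmap(-1, PAGE, prot=mmap.PROT_READ | mmap.PROT_WRITE | mmap.PROT_EXEC)

data_region.write(b"\x90" * 16)  # attacker-supplied bytes land here harmlessly
print("data region is writable but not executable; code region is executable")

data_region.close()
code_region.close()
```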
What is the MOST effective way to protect privacy?
Eliminate or reduce collection of personal information.
Encrypt all collected personal information.
Classify all personal information at the highest information classification level.
Apply tokenization to all personal information records.
The most effective way to protect privacy is to eliminate or reduce collection of personal information. Privacy is the right or the ability of an individual or an entity to control or limit the access, use, or disclosure of their personal information, such as name, address, email, phone number, or biometric data. Privacy is an important and fundamental aspect of human dignity, autonomy, and security, and it is protected by various laws, regulations, or standards, such as the General Data Protection Regulation (GDPR), the Health Insurance Portability and Accountability Act (HIPAA), or the ISO/IEC 27001. Protecting privacy is the responsibility and the duty of the individuals or the entities that collect, process, store, or share personal information, such as organizations, businesses, or governments. The most effective way to protect privacy is to eliminate or reduce collection of personal information, meaning that the individuals or the entities should only collect the minimum amount or the necessary type of personal information that is required or relevant for the purpose or the function of the service or the product, and that they should not collect any personal information that is excessive, redundant, or irrelevant. By eliminating or reducing collection of personal information, the individuals or the entities can minimize the risk or the impact of privacy breaches, violations, or incidents, such as unauthorized access, disclosure, or misuse of personal information, and they can also comply with the legal or regulatory obligations, the ethical or moral principles, and the best practices or standards for privacy protection. Encrypting all collected personal information, classifying all personal information at the highest information classification level, or applying tokenization to all personal information records are not the most effective ways to protect privacy, as they are either not sufficient or not necessary for privacy protection, or they have other purposes or functions than privacy protection. References:
What is the overall goal of software security testing?
Identifying the key security features of the software
Ensuring all software functions perform as specified
Reducing vulnerabilities within a software system
Making software development more agile
The overall goal of software security testing is to reduce the vulnerabilities within a software system. A software system is a collection of software components, such as applications, programs, or modules, that interact with each other and with other systems, such as hardware, networks, or databases, to perform certain functions or tasks. A vulnerability is a weakness or a flaw in a software system that can be exploited by a threat, such as an attacker, a malware, or an error, to cause harm or damage, such as unauthorized access, data breach, denial of service, or corruption. Software security testing is a process of evaluating and verifying the security aspects and features of a software system, such as confidentiality, integrity, availability, authentication, authorization, or encryption, by using various tools, techniques, and methods, such as static analysis, dynamic analysis, code review, or fuzzing. Software security testing can help to identify and eliminate the vulnerabilities within a software system, or to mitigate and manage their impact, and thus to improve the security and quality of the software system. Identifying the key security features of the software is not the overall goal of software security testing, but rather a specific objective or a subtask of the process. Ensuring all software functions perform as specified is not the overall goal of software security testing, but rather a general goal of software testing, which is a broader process that covers not only the security aspects, but also the functional, non-functional, performance, usability, and compatibility aspects of a software system. Making software development more agile is not the overall goal of software security testing, but rather a benefit or an outcome of the process, as software security testing can help to integrate the security considerations and practices into the software development life cycle, and to enable faster and more frequent delivery of secure and reliable software products.
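One of the techniques listed above, fuzzing, can be sketched in a few lines: feed a routine many random or malformed inputs and treat any unhandled exception as a potential vulnerability to investigate. The parsing function below is a stand-in example, not a real target.

```python
# Minimal sketch of a fuzzing harness: random inputs, crashes recorded as findings.
import random
import string

def parse_record(raw: str) -> dict:
    """Example target: parses 'name=value;name=value' records."""
    return dict(field.split("=", 1) for field in raw.split(";") if field)

def fuzz(iterations: int = 10_000) -> list:
    findings = []
    alphabet = string.printable
    for _ in range(iterations):
        candidate = "".join(random.choices(alphabet, k=random.randint(0, 64)))
        try:
            parse_record(candidate)
        except Exception as exc:  # any crash is a finding to investigate, not a test failure
            findings.append((candidate, repr(exc)))
    return findings

for raw, error in fuzz()[:5]:
    print(f"input {raw!r} raised {error}")
```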
An organization that has achieved a Capability Maturity Model Integration (CMMI) level of 4 has done which of the following?
Addressed continuous innovative process improvement
Addressed the causes of common process variance
Achieved optimized process performance
Achieved predictable process performance
An organization that has achieved a Capability Maturity Model Integration (CMMI) level of 4 has done the following: achieved predictable process performance. CMMI is a framework that provides a set of best practices and guidelines for improving the capability and maturity of the processes of an organization, such as software development, service delivery, or project management. CMMI consists of five levels, each of which represents a different stage or degree of process improvement, from initial to optimized. The five levels of CMMI are: Level 1 (Initial), Level 2 (Managed), Level 3 (Defined), Level 4 (Quantitatively Managed), and Level 5 (Optimizing).
An organization that has achieved a CMMI level of 4 has done the following: achieved predictable process performance, meaning that the organization has established quantitative objectives and metrics for the processes, and has used statistical and analytical techniques to monitor and control the variation and performance of the processes, and to ensure that the processes meet the expected or desired outcomes. An organization that has achieved a CMMI level of 4 has not done the following: addressed continuous innovative process improvement, addressed the causes of common process variance, or achieved optimized process performance, as these are the characteristics or achievements of a CMMI level of 5, which is the highest and most mature level of CMMI. References:
Which of the following models uses unique groups contained in unique conflict classes?
Chinese Wall
Bell-LaPadula
Clark-Wilson
Biba
The model that uses unique groups contained in unique conflict classes is the Chinese Wall model (also known as the Brewer-Nash model). The Chinese Wall model is a security model designed to prevent conflicts of interest and the leakage of sensitive information in a multi-client environment, such as a consulting firm or a law firm. It uses unique groups contained in unique conflict classes to represent the categories of information or clients that may have a potential or actual conflict with each other: a unique group is the collection of information belonging to a single client or company dataset, and a unique conflict class is a collection of unique groups that conflict with each other, such as competitors in the same industry or sector. The model enforces a dynamic, history-based access control rule: a subject may access an object belonging to any unique group, as long as the subject has not already accessed an object belonging to another unique group in the same conflict class. Once the subject has accessed an object in a unique group, the subject is restricted within that conflict class to objects of the same group and is prohibited from accessing objects of any competing group in that class. This prevents the subject from accessing or disclosing information about clients that conflict with clients the subject has already accessed or represented. Bell-LaPadula, Clark-Wilson, and Biba are not models that use unique groups contained in unique conflict classes, as they focus on the confidentiality or the integrity of information rather than on conflicts of interest or information leakage between competing clients. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 3: Security Models and Frameworks, page 142; CISSP Official (ISC)2 Practice Tests, Third Edition, Domain 3: Security Engineering, Question 3.9, page 135.
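The history-based access rule described above is compact enough to sketch directly. The client names and conflict classes below are illustrative, and the code is a simplified reading of the Brewer-Nash decision, not a complete implementation of the model.

```python
# Minimal sketch of the Chinese Wall (Brewer-Nash) access decision: a subject may
# access any client until they touch one client in a conflict class, after which
# competing clients in that same class are off limits.
CONFLICT_CLASSES = {
    "banking": {"BankA", "BankB"},
    "energy": {"OilCo", "GasCo"},
}

class ChineseWall:
    def __init__(self):
        self.history: dict[str, set[str]] = {}  # subject -> clients already accessed

    def _conflict_class(self, client: str) -> set:
        for members in CONFLICT_CLASSES.values():
            if client in members:
                return members
        return {client}  # client with no declared conflicts

    def can_access(self, subject: str, client: str) -> bool:
        accessed = self.history.get(subject, set())
        competitors = self._conflict_class(client) - {client}
        # Deny if the subject has already seen a competing client in this class.
        return not (accessed & competitors)

    def access(self, subject: str, client: str) -> bool:
        if self.can_access(subject, client):
            self.history.setdefault(subject, set()).add(client)
            return True
        return False

wall = ChineseWall()
print(wall.access("alice", "BankA"))  # True  - first access in the banking class
print(wall.access("alice", "BankB"))  # False - conflicts with BankA
print(wall.access("alice", "OilCo"))  # True  - different conflict class
```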
Which of the following is a characteristic of the independent testing of a program?
Independent testing increases the likelihood that a test will expose the effect of a hidden feature.
Independent testing decreases the likelihood that a test will expose the effect of a hidden feature.
Independent testing teams help decrease the cost of creating test data and system design specification.
Independent testing teams help identify functional requirements and Service Level Agreements (SLA)
Independent testing is a type of testing that is performed by a third-party or external entity that is not involved in the development or operation of the program. Independent testing has several advantages, such as reducing bias, increasing objectivity, and improving quality. One of the characteristics of independent testing is that it increases the likelihood that a test will expose the effect of a hidden feature. A hidden feature is a functionality or behavior of the program that is not documented or specified, and may be intentional or unintentional. Independent testing can reveal the effect of a hidden feature by using different test cases, techniques, or perspectives than the ones used by the developers or operators of the program. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 21: Software Development Security, page 1169; CISSP Official (ISC)2 Practice Tests, Third Edition, Domain 8: Software Development Security, Question 8.17, page 308.
Spyware is BEST described as
data mining for advertising.
a form of cyber-terrorism,
an information gathering technique,
a web-based attack.
Spyware is a type of malicious software that covertly collects and transmits information about the user’s activities, preferences, or behavior, without the user’s knowledge or consent. Spyware is best described as data mining for advertising, as the main purpose of spyware is to gather data that can be used for targeted marketing or advertising campaigns. Spyware can also compromise the security and privacy of the user, as it can expose sensitive or personal data, consume network bandwidth, or degrade system performance. Spyware is not a form of cyber-terrorism, as it does not intend to cause physical harm, violence, or fear. Spyware is not an information gathering technique, as it is not a legitimate or ethical method of obtaining data. Spyware is not a web-based attack, as it does not exploit the vulnerabilities of the web applications or protocols, but rather the vulnerabilities of the user’s system or browser.
Which of the following is the BEST way to protect privileged accounts?
Quarterly user access rights audits
Role-based access control (RBAC)
Written supervisory approval
Multi-factor authentication (MFA)
Privileged accounts are those that have elevated permissions or access to sensitive data or systems. They are often targeted by attackers who want to compromise the network or steal information. The best way to protect privileged accounts is to use multi-factor authentication (MFA), which requires the user to provide two or more pieces of evidence to prove their identity, such as a password, a token, a biometric, or a phone. MFA makes it harder for attackers to gain access to privileged accounts, even if they manage to steal or guess the password. MFA also provides an audit trail of who accessed the account and when. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5: Identity and Access Management, page 281. Official (ISC)² CISSP CBK Reference, Fifth Edition, Domain 5: Identity and Access Management (IAM), page 581.
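The "something you have" factor commonly added to privileged logins is often a time-based one-time password (TOTP, RFC 6238) generated from a secret shared with the user's authenticator. Below is a minimal sketch of that check; the shared secret is an illustrative example, not a real credential, and a production system would also allow for clock drift and rate-limit attempts.

```python
# Minimal sketch of a TOTP (RFC 6238) second-factor check for a privileged account.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

SHARED_SECRET = "JBSWY3DPEHPK3PXP"  # illustrative secret enrolled in the user's authenticator

def verify_second_factor(submitted_code: str) -> bool:
    # The password check happens elsewhere; this gate passes only if the user also
    # holds the enrolled device generating the current TOTP value.
    return hmac.compare_digest(submitted_code, totp(SHARED_SECRET))

print(verify_second_factor(totp(SHARED_SECRET)))  # True when the codes match
```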
Which of the following attack types can be used to compromise the integrity of data during transmission?
Keylogging
Packet sniffing
Synchronization flooding
Session hijacking
Packet sniffing is a type of attack that involves intercepting and analyzing the network traffic transmitted between hosts. Although sniffing by itself is passive, an attacker positioned on the communication path (for example, in a man-in-the-middle position) can combine it with packet injection to modify, delete, or insert packets in the network stream, thereby compromising the integrity of data during transmission. Packet sniffing can also compromise the confidentiality and availability of data, as the attacker can read, copy, or block packets. Keylogging, synchronization flooding, and session hijacking are all types of attacks, but they do not directly affect the integrity of data during transmission in the same way. Keylogging is a type of attack that involves capturing and recording the keystrokes of a user on a device. Synchronization flooding is a type of attack that involves sending a large number of SYN packets to a target host, causing it to exhaust its resources and deny service to legitimate requests. Session hijacking is a type of attack that involves taking over an existing session between a user and a web service, and impersonating the user or the service.
Which of the following is the strongest physical access control?
Biometrics and badge reader
Biometrics, a password, and personal identification number (PIN)
Individual password for each user
Biometrics, a password, and badge reader
The strongest physical access control is the combination of biometrics, a password, and a badge reader. A physical access control is a security mechanism that restricts and regulates the access to a physical location, such as a building, a room, or a device. A physical access control can use different types of authentication factors, such as something the user knows (e.g., a password or a PIN), something the user has (e.g., a badge or a token), or something the user is (e.g., a fingerprint or a face). The combination of biometrics, a password, and a badge reader is the strongest physical access control, as it requires the user to present three different and independent authentication factors, which increases the security and reduces the likelihood of unauthorized access. This type of physical access control is also known as multi-factor authentication or MFA. References: [CISSP CBK, Fifth Edition, Chapter 5, page 431]; [CISSP Practice Exam – FREE 20 Questions and Answers, Question 10].