What role should be assigned to a security team member who will be taking ownership of notable events in the incident review dashboard?
The role that should be assigned to a security team member who will be taking ownership of notable events in the incident review dashboard is the ess_analyst role. The ess_analyst role is a predefined role in Splunk Enterprise Security that grants the user the ability to view, edit, comment on, and change the status and owner of notable events. The ess_analyst role also allows the user to access the dashboards, reports, and searches related to security analysis and investigation. References = 1: Overview of roles and capabilities in Splunk Enterprise Security - Splunk Documentation - ess_analyst role. 2: Incident Review - Splunk Documentation - Triage notable events on the Incident Review dashboard.
To which of the following should the ES application be uploaded?
The ES application should be uploaded to the search head, which is the component that runs the ES user interface and executes the searches, alerts, and reports. The search head should be dedicated to ES and not run any other applications. The indexer is the component that indexes the data and stores it in buckets. The KV Store is a feature that stores and manages data as key-value pairs. The dedicated forwarder is a component that collects data from various sources and forwards it to the indexer. None of these components can run the ES application.
Which of the following lookup types in Enterprise Security contains information about known hostile IP addresses?
Threat intel is the lookup type in Enterprise Security that contains information about known hostile IP addresses, as well as other indicators of compromise (IOCs) such as domains, URLs, hashes, and email addresses. Threat intel is collected from various sources, such as Splunk Enterprise Security, Splunk Add-on for Enterprise Security, Splunk Enterprise Security Content Update, and third-party threat intelligence providers. Threat intel is used to enrich events and generate notable events when a match is found between an IOC and an event field. You can view and manage the threat intel sources and lookups in Enterprise Security using the Threat Intelligence framework.
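A custom feed of hostile IP addresses is typically onboarded as a threatlist input. A minimal sketch of an inputs.conf stanza, assuming a hypothetical feed name and URL (check the SA-ThreatIntelligence documentation for the full set of supported settings):

```ini
# inputs.conf -- hypothetical custom IP threat feed; stanza name and URL are examples
[threatlist://acme_hostile_ips]
url = https://feeds.example.com/hostile_ips.txt
description = Example hostile IP list
# refresh the feed twice a day (seconds)
interval = 43200
# relative weight applied when this source contributes to threat matches
weight = 1
```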
Which of the following is part of tuning correlation searches for a new ES installation?
Correlation searches can perform adaptive response actions when they find a pattern in the data. Adaptive response actions are automated or manual responses that you can use to modify your environment based on notable events. For example, you can block an IP address, add a user to a watchlist, or send an email notification. Configuring correlation adaptive responses is part of tuning correlation searches for a new ES installation, as it allows you to customize the actions that are triggered by the correlation searches. You can enable, disable, or modify the adaptive response actions for each correlation search, or create your own custom actions.
How is it possible to specify an alternate location for accelerated storage?
The tstatsHomePath setting in indexes.conf allows you to specify an alternate location for accelerated storage. Accelerated storage is where Splunk Enterprise stores the summary data for data models that are accelerated. The summary data is used to speed up searches and reports that use the data models. By default, the accelerated storage is located in the same volume as the index that contains the events referenced by the data model. However, you can use the tstatsHomePath setting to change the location of the accelerated storage to a different volume or path. This can help you optimize the performance and disk space usage of your Splunk Enterprise deployment.
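For example, the summaries can be redirected to a faster disk. A minimal indexes.conf sketch, assuming a hypothetical volume name and paths:

```ini
# indexes.conf -- volume name and paths are placeholders
[volume:accel_summaries]
path = /mnt/fast_disk/splunk_summaries

[main]
homePath = $SPLUNK_DB/defaultdb/db
coldPath = $SPLUNK_DB/defaultdb/colddb
thawedPath = $SPLUNK_DB/defaultdb/thaweddb
# store data model acceleration summaries on the faster volume
tstatsHomePath = volume:accel_summaries/defaultdb/datamodel_summary
```

Note that tstatsHomePath must be specified in terms of a volume definition.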
An administrator is asked to configure an “Nslookup” adaptive response action, so that it appears as a selectable option in the notable event’s action menu when an analyst is working in the Incident Review dashboard. What steps would the administrator take to configure this option?
To configure an “Nslookup” adaptive response action, so that it appears as a selectable option in the notable event’s action menu when an analyst is working in the Incident Review dashboard, the administrator would take the following steps:
Which column in the Asset or Identity list is combined with event security to make a notable event’s urgency?
The priority column in the asset or identity list is combined with the event severity to make a notable event’s urgency in Splunk Enterprise Security. The urgency is a measure of how important it is to address a notable event, and it is calculated based on a matrix that maps the priority of the asset or identity involved in the event and the severity of the event. The urgency can be one of the following values: low, medium, high, or critical. For example, by default, medium, high, and critical priority, combined with critical severity, will generate a critical urgency ranking. References = 1: Incident Review - Splunk Documentation - Urgency. 2: Configure notable event urgency - Splunk Documentation. 3: Solved: Splunk Enterprise Security: Is there a way to forc… - Splunk Community.
A set of correlation searches are enabled at a new ES installation, and results are being monitored. One of the correlation searches is generating many notable events which, when evaluated, are determined to be false positives.
What is a solution for this issue?
A correlation search is a scheduled search that runs periodically to detect patterns of interest in the data and generate notable events or other actions when the search conditions are met. A correlation search can generate false positives, which are notable events that do not represent a real security incident or threat. False positives can create noise and reduce the efficiency and accuracy of the security analysis. To reduce false positives from a correlation search, you can modify the correlation schedule and sensitivity for your site. The correlation schedule determines how often the correlation search runs and over what time range. The sensitivity determines the threshold or limit for the search conditions to trigger a notable event. By adjusting the correlation schedule and sensitivity, you can fine-tune the correlation search to match your environment and data sources, and avoid generating notable events for normal or benign activities. You can modify the correlation schedule and sensitivity for a correlation search using the Content Management page in Splunk Enterprise Security. References =
Dealing with Security False Positives in Splunk (Enterprise Security ...
Upping the Auditing Game for Correlation Searches Within ... - Splunk
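The schedule and throttling knobs live in savedsearches.conf (they can also be edited from the correlation search editor in Content Management). An illustrative sketch; the stanza name and values are examples, not shipped defaults:

```ini
# savedsearches.conf -- example tuning for a correlation search
[Excessive Failed Logins - Rule]
# run every 30 minutes over a bounded window
cron_schedule = */30 * * * *
dispatch.earliest_time = -65m@m
dispatch.latest_time = -5m@m
# throttle repeat notables for the same source for 24 hours
alert.suppress = 1
alert.suppress.fields = src
alert.suppress.period = 86400s
```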
A newly built custom dashboard needs to be available to a team of security analysts in ES. How is it possible to integrate the new dashboard?
According to the Splunk Enterprise Security documentation, the best way to integrate a newly built custom dashboard to a team of security analysts in ES is to set the dashboard permissions to allow access by es_analysts and use the navigation editor to add it to the menu. This will ensure that the dashboard is visible and accessible to the users with the es_analyst role, which is the default role for security analysts in ES. The navigation editor allows you to customize the menu bar of ES and add links to custom dashboards, reports, or other views. See Customize Splunk Enterprise Security dashboards to fit your use case and Customize the navigation bar for more details.
The other options are not recommended, because they either do not integrate the dashboard properly or they create unnecessary complexity. Adding links on the ES home page to the new dashboard is not a good option, because it does not integrate the dashboard into the menu bar and it may clutter the home page. Creating a new role inherited from es_analyst, making the dashboard permissions read-only, and making this dashboard the default view for the new role is not a good option, because it creates a redundant role and it may confuse the users who expect to see the Security Posture dashboard as the default view. Adding the dashboard to a custom add-in app and installing it to ES using the Content Manager is not a good option, because it requires creating and maintaining a separate app and it may cause conflicts or performance issues with ES. Therefore, the correct answer is C. Set the dashboard permissions to allow access by es_analysts and use the navigation editor to add it to the menu. References =
How to Create Custom Dashboards and Alerts to Achi ... - Splunk Community
Enterprise Security’s dashboards primarily pull data from what type of knowledge object?
Data models are the primary source of data for Enterprise Security dashboards. Data models provide a structured and consistent way of defining and retrieving data from indexes. Data models accelerate searches by using prebuilt summaries of the data. Data models also enable the use of the tstats command, which can perform statistical analysis on the data model summaries. Data models are mapped to the Common Information Model (CIM), which provides a common language for describing data across domains and technologies.
Which setting is used in indexes.conf to specify alternate locations for accelerated storage?
The setting that is used in indexes.conf to specify alternate locations for accelerated storage is tstatsHomePath. Accelerated storage is the location where Splunk Enterprise stores the summary data for accelerated data models and reports. By default, acceleration storage is allocated in the same location as the index containing the raw events being accelerated. However, if you need to specify alternate locations for your accelerated storage, you can use the tstatsHomePath setting in indexes.conf. This setting allows you to define a different path for the summary data, which can improve the performance and efficiency of the data model acceleration. For example, you can set the tstatsHomePath to a faster disk or a different volume than the index homePath. References = 1: Managing data models in Enterprise Security - Splunk Lantern - Indexes allow list. 2: indexes.conf - Splunk Documentation - tstatsHomePath.
Which columns in the Assets lookup are used to identify an asset in an event?
The columns in the Assets lookup that are used to identify an asset in an event are ip, mac, dns, and nt_host. These columns contain the network identifiers of the assets, such as IP address, MAC address, DNS name, and NetBIOS name. Splunk Enterprise Security uses these columns to match the asset fields with the event fields, such as src, dest, dvc, host, and hostname. When a match is found, Splunk Enterprise Security enriches the event with the asset information, such as category, priority, business unit, and location. This allows you to search and analyze events based on the asset attributes and context. References =
Asset & Identity for Splunk Enterprise Security - Part 1 ...
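A few illustrative rows of an assets lookup (the values here are invented, and the shipped lookup has additional columns, such as owner, bunit, and pci_domain):

```
ip,mac,nt_host,dns,priority,category
10.1.2.3,00:25:96:12:34:56,WS-FIN-042,ws-fin-042.example.com,high,finance
,,DC01,dc01.example.com,critical,domain_controller
```

An event with dest=10.1.2.3 or host=dc01.example.com would be enriched with the matching row's priority and category.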
An administrator wants to ensure that none of the ES indexed data could be compromised through tampering. What feature would satisfy this requirement?
Data integrity control is a feature of Splunk Enterprise that helps you verify the integrity of data that it indexes. When you enable data integrity control for an index, Splunk Enterprise computes hashes on every slice of data using the SHA-256 algorithm. It then stores those hashes so that you can verify the integrity of your data later. This feature prevents data tampering and ensures that the data is trustworthy and reliable. Therefore, the correct answer is B. Data integrity control. References = Manage data integrity - Splunk Documentation.
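Enabling the feature is a per-index setting in indexes.conf; a minimal sketch (the index name is an example):

```ini
# indexes.conf -- enable SHA-256 hashing of data slices for one index
[es_audit_data]
enableDataIntegrityControl = true
```

Integrity can later be verified from the CLI with splunk check-integrity -index es_audit_data.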
Which data model populated the panels on the Risk Analysis dashboard?
The Risk Analysis dashboard uses the Risk data model to populate the panels. The Risk data model is a data model that contains information about the risk scores and risk modifiers of various objects, such as systems, users, hashes, and network artifacts. The Risk data model accelerates these fields for the Risk Analysis and Incident Review dashboards. The Risk data model also handles case insensitive asset and identity correlation, allowing risk modifiers that are applied to system or user name variants to be correctly attributed to the same risk_object. The other options, B, C, and D, are not correct. The Audit data model contains information about audit events, such as user logins, password changes, and system access. The Domain Analysis data model contains information about the domains that are visited by the systems in the network. The Threat Intelligence data model contains information about the threat intelligence sources, indicators, and matches.
What can be exported from ES using the Content Management page?
The Content Management page in Splunk Enterprise Security allows you to export any content type that is listed on the page as an app. The content types include correlation searches, glass tables, dashboards, reports, saved searches, key indicators, workbench panels, and managed lookups. You can use the export option to share custom content with other ES instances, such as migrating customized searches from a development or testing environment into production. You can also import content from other ES instances or from Splunkbase using the Content Management page.
When installing Enterprise Security, what should be done after installing the add-ons necessary for normalizing data?
After installing the add-ons necessary for normalizing data, you should configure the add-ons according to their README or documentation. The add-ons that are included in the Splunk Enterprise Security package are preconfigured and do not require additional steps. However, the add-ons that are downloaded separately from Splunkbase may require additional configuration steps, such as enabling inputs, setting up credentials, or modifying props and transforms. You should review the README or documentation for each add-on to determine the specific configuration requirements and follow the instructions accordingly.
The Brute Force Access Behavior Detected correlation search is enabled, and is generating many false positives. Assuming the input data has already been validated, how can the correlation search be made less sensitive?
If the number of failed logins is greater than or equal to the threshold value, the search triggers a notable event. To make the search less sensitive, the threshold value can be increased, so that only more frequent failed logins will trigger a notable event. For example, the default threshold value is 4, which means that 4 or more failed logins within a 1-minute window will trigger a notable event. If the threshold value is changed to 10, then only 10 or more failed logins within a 1-minute window will trigger a notable event.
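As a simplified sketch of this kind of threshold (not the shipped Brute Force Access Behavior Detected search, which also weighs failures against successes), a correlation search body might look like:

```
| tstats summariesonly=true count from datamodel=Authentication
    where Authentication.action="failure"
    by Authentication.src
| where count >= 10
```

Raising the value in the final where clause is what makes the search less sensitive.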
How is it possible to navigate to the list of currently-enabled ES correlation searches?
The way to navigate to the list of currently-enabled ES correlation searches is to use the Content Management page in Splunk Enterprise Security. The Content Management page allows you to view, enable, disable, and edit the content items that are included in Splunk Enterprise Security, such as correlation searches, dashboards, reports, and lookups. To access the Content Management page, you need to select Configure > Content > Content Management from the Splunk ES menu bar. Then, you can filter the content items by Type and Status to view only the correlation searches that are enabled. You can also use other filters, such as App, Domain, or Owner, to further refine your view. References = 1: Content Management - Splunk Documentation - View content items. 2: Content Management - Splunk Documentation - Enable or disable content items.
How should an administrator add a new lookup through the ES app?
The correct way to add a new lookup through the ES app is to upload the lookup file using Configure > Content Management > Create New Content > Managed Lookup. This allows the user to create or select an existing lookup file and definition, specify the lookup type, label, and description, and enable editing of the lookup file. This also stores the lookup file at the application level, which makes it easier to edit and share. The other options are either incorrect or not recommended for ES. Uploading the lookup file in Settings > Lookups > Lookup table files does not create a lookup definition or a label and description for the lookup. Uploading the lookup file in Settings > Lookups > Lookup Definitions does not upload the lookup file itself, but only creates a definition for an existing file. Adding the lookup file to /etc/apps/SplunkEnterpriseSecuritySuite/lookups requires manual editing of the file system and is not recommended for ES.
Which of the following is a recommended pre-installation step?
According to the Splunk Enterprise Security documentation, one of the recommended pre-installation steps is to configure search head forwarding. Search head forwarding is a feature that allows the search head to forward its internal logs and metrics to an indexer or a heavy forwarder for indexing and analysis. This feature helps you monitor the health and performance of the search head and troubleshoot any issues that may arise. You can configure search head forwarding by editing the outputs.conf file on the search head and specifying the destination indexer or forwarder. See Configure search head forwarding for more details.
The other options are not recommended, because they are either unnecessary or harmful for the installation of ES. Disabling the default search app is not a good option, because it may cause some features of ES to not work properly, such as the Content Management page and the navigation editor. Downloading the latest version of KV Store from MongoDB.com is not a good option, because ES uses the built-in KV Store service that comes with Splunk Enterprise and does not require any external installation or configuration. Installing the latest Python distribution on the search head is not a good option, because it may cause compatibility issues with ES, which uses the Python version that comes with Splunk Enterprise. Therefore, the correct answer is B. Configure search head forwarding. References = Configure search head forwarding.
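Search head forwarding is configured in outputs.conf on the ES search head; a minimal sketch with placeholder indexer hostnames:

```ini
# outputs.conf on the search head -- server values are placeholders
[indexAndForward]
index = false

[tcpout]
defaultGroup = es_indexers
forwardedindex.filter.disable = true
indexAndForward = false

[tcpout:es_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
```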
Which two fields combine to create the Urgency of a notable event?
The urgency of a notable event is a value that indicates how important or urgent the event is for investigation and response. The urgency of a notable event is determined by two fields: the priority and the severity. The priority is a value that is assigned to an asset or an identity based on how critical or valuable it is for the organization. The priority can be unknown, low, medium, high, or critical. The severity is a value that is assigned to a notable event based on how serious or harmful the event is for the security posture. The severity can be unknown, informational, low, medium, high, or critical. The urgency of a notable event is calculated by combining the priority and the severity values using a lookup table called urgency_lookup. The urgency can be informational, low, medium, high, or critical. You can use the urgency field to prioritize the investigation of notable events in Splunk Enterprise Security.
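Conceptually, the lookup is a table keyed on the priority and severity pair. An illustrative excerpt (the shipped urgency_lookup defines the complete matrix; these rows are examples of the shape of the mapping, not an authoritative dump of the defaults):

```
priority,severity,urgency
low,low,low
medium,critical,critical
high,critical,critical
critical,critical,critical
```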
After installing Enterprise Security, the distributed configuration management tool can be used to create which app to configure indexers?
The distributed configuration management tool can be used to create the Splunk_TA_ForIndexers app. This app packages the index-time settings that Enterprise Security requires on the indexers, such as the indexes.conf, props.conf, and transforms.conf configurations, so that they can be deployed to the indexers, typically through a deployment server or cluster manager.
After data is ingested, which data management step is essential to ensure raw data can be accelerated by a Data Model and used by ES?
After data is ingested, the data management step that is essential to ensure raw data can be accelerated by a data model and used by ES is normalization to the Splunk Common Information Model (CIM). The CIM is a standard and consistent way of naming and structuring the fields and tags for different types of data, such as network, web, email, authentication, and malware. The CIM allows you to use the same search queries and dashboards across different data sources, even if they have different formats or schemas. Normalizing data to the CIM involves mapping the raw data fields and tags to the CIM fields and tags using technology add-ons. Technology add-ons are Splunk apps that provide the necessary configurations and extractions for specific data sources. By normalizing data to the CIM, you can enable data model acceleration for the data models that use the CIM fields and tags. Data model acceleration is a feature that speeds up searches and reports that use data models by pre-computing and storing the results of the data model queries. Data model acceleration is required for most of the dashboards and correlation searches in Splunk Enterprise Security.
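In a technology add-on, that mapping is typically done with field aliases, eval expressions, and tags in props.conf. A hedged sketch for a hypothetical firewall sourcetype (the sourcetype and vendor field names on the left are invented; the targets are CIM Network Traffic fields):

```ini
# props.conf -- sourcetype and vendor field names are hypothetical
[acme:firewall]
FIELDALIAS-acme_src = source_address AS src
FIELDALIAS-acme_dest = destination_address AS dest
# normalize the vendor verdict to CIM action values
EVAL-action = case(verdict=="deny", "blocked", verdict=="permit", "allowed")
```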
What is the default schedule for accelerating ES data models?
According to the Splunk Enterprise Security documentation, the default schedule for accelerating ES data models is every 5 minutes. This means that the data model acceleration searches run every 5 minutes to summarize the newly indexed data and store the results in the tsidx files. The 5-minute schedule is recommended for most use cases, as it provides a balance between search performance and resource consumption. However, you can change the schedule of a data model acceleration search in the Content Management page of Splunk Enterprise Security, if needed. See Configure data models for Splunk Enterprise Security for more details. References = Configure data models for Splunk Enterprise Security.
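These acceleration settings correspond to datamodels.conf; an illustrative sketch (the values are examples):

```ini
# datamodels.conf -- acceleration settings for one data model
[Network_Traffic]
acceleration = true
# how far back to build summaries
acceleration.earliest_time = -1mon
# cadence of the acceleration search; every 5 minutes by default in ES
acceleration.cron_schedule = */5 * * * *
```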
Which of these is a benefit of data normalization?
According to the Splunk Enterprise Security documentation, one of the benefits of data normalization is that searches can be built no matter the specific source technology for a normalized data type. Data normalization is a way to ingest and store data in the Splunk platform using a common format for consistency and efficiency. When data is normalized, it follows the same field names and event tags for equivalent events from different sources or vendors. This allows you to perform cross-source analysis and correlation of security events without worrying about the differences in data formats. For example, if you have data from Windows, Linux, and Mac OS systems, you can normalize them using the Endpoint data model and use the same normalized fields to search for endpoint events across all systems. Therefore, the correct answer is C. Searches can be built no matter the specific source technology for a normalized data type. References =
Onboarding data to Splunk Enterprise Security
Glass tables can display static images and text, the results of ad-hoc searches, and which of the following objects?
Glass tables can display static images and text, the results of ad-hoc searches, and security metrics. Security metrics are visualizations that show the values of KPIs, service health scores, or notable events. You can add security metrics to a glass table by using the Security Metrics menu in the glass table editor. You can also configure the appearance, behavior, and drilldown options of the security metrics. Glass tables cannot display lookup searches, summarized data, or metrics store searches directly, although you can use these types of searches as data sources for ad-hoc searches and then display the results on a glass table.