03/29/2026
Automate tasks that help fortify your security posture with Fusion SOAR workflows.
Discover the power of automation with Falcon Fusion SOAR, designed to help your organization respond to threats quickly and effectively. With a range of features and benefits, Falcon Fusion SOAR provides a comprehensive workflow automation solution.
Falcon Fusion SOAR workflows help streamline analyst work by automating actions around specific and complex scenarios. You can create workflows to precisely define the actions you want Falcon to perform in response to incidents, detections, policies, cloud security findings, and more.
If you're creating custom roles, here is the permission required to enable, disable, and run these workflows:
csrn:workflow:webhook-trigger START
Workflows can help streamline and automate processes for several Falcon products. For example:
Send out Slack notifications based on cloud container image assessment findings.
Generate ServiceNow incident tickets when high-severity vulnerabilities are detected.
Run real time response (RTR) scripts to manage affected hosts when incidents are reported.
Notify a user and request validation before continuing with an action.
Schedule a workflow to perform an action regularly.
Workflows are built around a trigger-condition-action model. For example, a workflow that automatically generates ServiceNow incidents when critical detections are reported consists of these three components:
Trigger: Falcon reports an endpoint detection.
Condition: Detection has a severity of Critical.
Action: Create a ServiceNow incident.
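The trigger-condition-action model above can be sketched in code. This is a minimal illustration only: the event shape and the ServiceNow stub are hypothetical, since real workflows are configured in the Falcon console, not written as code.

```python
# Minimal sketch of the trigger-condition-action model.
# The event dict shape and create_servicenow_incident stub are
# illustrative assumptions, not the actual Falcon or ServiceNow API.

def create_servicenow_incident(event):
    """Stub action: a real workflow would call the ServiceNow integration."""
    return f"Incident created for {event['detection_id']}"

def handle_endpoint_detection(event):
    """Trigger handler: runs when Falcon reports an endpoint detection."""
    # Condition: detection has a severity of Critical.
    if event.get("severity") == "Critical":
        # Action: create a ServiceNow incident.
        return create_servicenow_incident(event)
    return None

print(handle_endpoint_detection({"detection_id": "det-001", "severity": "Critical"}))
```

Non-critical detections fall through the condition and produce no action, which mirrors a workflow with no ELSE branch.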
You can add familiar logic elements to build sophisticated workflows:
Else and Else if statements: Conditional statements allow for multiple workflow branches
Else loops: Conditional loops allow for multiple looping options
AND statements and OR statements: Group multiple conditions within a single workflow branch
Parallel actions, conditions, and loops: Create independent branches of actions, conditions, or loops within a single workflow
Sequential actions and loops: Add actions or loops to execute in a specific order within a workflow branch
Join branches: Combine multiple conditional or parallel workflow branches into one
If you have a Charlotte AI subscription, you can create workflows using natural language prompts. For more info, see Create Fusion SOAR Workflows with Charlotte AI.
Each workflow begins with a trigger.
These are the types of triggers:
Event: The workflow is triggered by an event in the Falcon environment.
Schedule: The workflow is triggered regularly, based on a defined schedule, such as hourly, daily, weekly, or monthly.
Also, you can run this type of workflow immediately through the Falcon console or an API call, as shown in Run a workflow on demand.
On demand: The workflow is triggered directly through the Falcon console, by another workflow, or by an API call, as shown in Run a workflow on demand.
Inbound webhook: The workflow generates a unique webhook URL for the given workflow and is then triggered by an external system that uses that URL. For more info, see Custom triggers based on inbound webhooks and Create and manage triggers based on inbound webhooks.
The event triggers available depend on your subscriptions.
You can look for triggers in the content library as explained in Library of actions, apps, Foundry app templates, playbooks, and triggers or read this overview of the trigger categories.
3PI Data Connection: For more info, see Data Connectors.
Detection: Falcon reports on these detection types:
EPP Detection: For more info, see Endpoint Detection Monitoring.
Identity Detection: For more info, see Identity-based Incidents, Detections, and Risks.
Mobile detection: For more info, see Configuring Falcon for Mobile.
Next-Gen SIEM Detection: For more info, see Detection Monitoring.
Next-Gen SIEM Case: For more info, see Cases.
Third Party Detection: For more info, see Third-party detections.
Audit event: Trigger a workflow on any of these changes:
Policy (Prevention, Firewall, Sensor Update, Device Control, Response, Mobile, Identity Protection, and Airlock policies):
    Deleted
    Created
    Enabled
    Disabled
    Updated
Host:
    Host unhidden
    Host containment lifted
    Host hidden
    Request to lift containment
    Containment requested
    Host contained
Detection:
    Tag
    Comment
    Assignment
    Status
Cloud security assessment: Falcon reports a new cloud security finding
Custom IOA monitor: Falcon reports a custom IOA detection. For more info, see Monitor custom IOA detections and preventions.
Removable storage, USB, or Bluetooth:
connected events
blocked events
policy violation events
Files written to removable storage
Note: Bluetooth events are only supported for macOS.
For more info, see Device Control.
FileVantage change: Falcon reports a file integrity change. For more info, see Falcon FileVantage.
Kubernetes and containers: Falcon reports a security finding related to your Kubernetes environment or containers in your cloud environment. This includes vulnerabilities and detections identified by the Image Assessment tool, container runtime detections, container drift detections, and detections triggered by Image Assessment prevention policies. For more info, see Container Security.
Vulnerabilities user action: A vulnerability management user selects Create ticket for a vulnerability, host, or remediation. For more info, see Vulnerability Management Ticketing Workflows.
Workflow execution: Trigger a workflow from another workflow. This option is useful if you want to set up notifications to know when workflows run or hit a failure point.
Message Center: Falcon Message Center has a new case, update, or comment. For more info, see Message Center.
Fusion SOAR provides many triggers to start your workflows. However, you can also create custom triggers that use inbound webhooks to integrate third-party tools, enabling real-time orchestration from those tools. For example, you can define triggers for events from external sources such as SIEMs, SOAR platforms, ticketing systems, and threat intelligence feeds.
For info about requirements for these triggers related to subscriptions, CrowdStrike clouds, and roles, see Requirements.
For info about how to manage these actions, see Create and manage triggers based on inbound webhooks.
Be aware of these limitations:
A workflow can have only one webhook trigger.
The webhook URLs are auto-generated and cannot be reused or edited.
The payload must be raw JSON. Form-data, file uploads, and so on are not supported.
There are no retry attempts for failed requests.
The maximum payload size is 1 MB.
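The raw-JSON and 1 MB limits above can be checked client-side before a request is sent. A minimal sketch, assuming a simple dict payload; the validation shown here is illustrative and not a documented server-side behavior:

```python
import json

MAX_PAYLOAD_BYTES = 1024 * 1024  # documented 1 MB limit for inbound webhook payloads

def validate_webhook_payload(payload: dict) -> bytes:
    """Serialize a payload to raw JSON and enforce the documented size limit.

    Raises ValueError if the serialized payload exceeds 1 MB. Only raw JSON
    is supported by the trigger; form-data and file uploads are not.
    """
    body = json.dumps(payload).encode("utf-8")
    if len(body) > MAX_PAYLOAD_BYTES:
        raise ValueError(f"payload is {len(body)} bytes; limit is {MAX_PAYLOAD_BYTES}")
    return body

print(len(validate_webhook_payload({"alert": "test", "severity": "high"})))
```

Because failed requests are not retried, validating size and format before sending avoids silently dropped events.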
A condition consists of a parameter, an operator, and a value. A workflow compares an observed parameter to the value based on the operator. For example, a workflow has a condition that evaluates endpoint detection severity:
Parameter: Severity
Operator: Is greater than or equal to
Value: High
If a detection has a severity of High or Critical, the condition is true and the workflow action delivers a Slack notification. For medium-severity detections, you can create another condition that sends emails instead.
To set up conditions based on day or time, use time-based parameters. You can always create a time-based condition using the Workflow execution time parameter. Additional time-based parameters are available depending on the trigger or action. For example, with triggers, you can use these parameters: Behavior timestamp, Last behavior, Start Time, End Time, and Last Activity. With actions, examples include these action fields: VirusTotal Last Analysis Date and VirusTotal Creation Date. These parameters are only some of the available time-based parameters.

The includes operator is the only operator available with time-based parameters. The workflow runs only on the days you specify. By default, the condition applies to the whole day. To specify a period of less than a day, choose times using the 12-hour clock format with AM or PM. To specify additional time periods, add an ELSE IF condition for each new time period.
You can use multiple workflow conditions joined with AND operators or OR operators to create complex logic in your workflows.
For example, this sample workflow sends a notification by email when an endpoint detection is high severity and associated with a sensor platform of Windows. Both conditions must evaluate to true for the workflow to send the email.
For a more complex example, this sample workflow sends a notification by email when any one of several detections or preventions takes place in Falcon Prevent: a detection occurs, a prevention kills a process, a file is quarantined, a combined action kills a process and quarantines a file, or a process would have been killed but a policy setting was not enabled. You could create individual workflows or ELSE IF operators for each of these items, but using an OR operator can be simpler for your analysts to understand at a glance.
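The AND/OR grouping described above can be illustrated in code. The severity ordering and field names here are examples for illustration, not the console's internal representation:

```python
# Ordinal severity scale used for comparisons (illustrative ordering).
SEVERITY_ORDER = ["Informational", "Low", "Medium", "High", "Critical"]

def severity_at_least(event, threshold):
    """True when the event's severity is at or above the threshold."""
    return SEVERITY_ORDER.index(event["severity"]) >= SEVERITY_ORDER.index(threshold)

def should_notify(event):
    # (Severity >= High AND Hostgroup == server) OR Severity == Critical
    return (
        (severity_at_least(event, "High") and event["hostgroup"] == "server")
        or event["severity"] == "Critical"
    )

print(should_notify({"severity": "High", "hostgroup": "server"}))
```

The parenthesized grouping matters: the AND pair is evaluated as a unit before the OR, just as grouped conditions form a single branch in the workflow editor.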
| Logical expression | Workflow representation | Description |
|---|---|---|
| (hosttag == "critical system") OR (Severity == critical) | (Parameter: Hosttag, Operator: equals, Value: "critical system") OR (Parameter: Severity, Operator: equals, Value: critical) | If the first condition evaluates to true OR the second condition evaluates to true, then the condition block evaluates to true, and the workflow proceeds along the THEN branch. If neither condition evaluates to true, then the condition block evaluates to false, and the workflow proceeds from the ELSE branch, if any. |
| (Severity >= high AND hostgroup == server) OR Severity == critical | (Parameter: Severity, Operator: greater than or equal to, Value: high, AND Parameter: Hostgroup, Operator: equals, Value: server) OR (Parameter: Severity, Operator: equals, Value: critical) | If both the severity and hostgroup conditions evaluate to true, OR if the second severity condition evaluates to true, then the workflow proceeds along the THEN branch. |
| | (Parameter: Hosttag, Operator: equals, Value:) OR (Parameter: Severity, Operator: equals, Value:) OR (Parameter: Severity, Operator: greater than or equal to, Value:, AND Parameter: Hostgroup, Operator: equals, Value:) | If any of the three conditions evaluates to true, then the workflow proceeds along the THEN branch. The third condition combines two conditions with an AND operator, so both must be true for the combined condition to evaluate to true. |
| | (Parameter: Username, Operator: equals, Value:, AND Parameter: FilePath, Operator: matches, Value:) OR (Parameter: Username, Operator: equals, Value:, AND Parameter: ParentProcessFilePath, Operator: matches, Value:) | Combine two expressions with OR to evaluate expressions like A AND (B OR C). If either expression evaluates to true, the workflow proceeds along the THEN branch. |
Conditions include several operators that vary depending on the selected parameter:
| Operator | Data types | Description | Example |
|---|---|---|---|
| is equal to | | True when the observed parameter is exactly the same as the value you provide. | To process only endpoint detections where an IOC Type is an MD5 hash, configure the condition with these settings: Parameter: IOC Type. Operator: is equal to. Value: |
| is not equal to | | True when the observed parameter is not the same as the value you provide. | To execute a workflow path only for endpoint detections not occurring on Microsoft Windows hosts, apply these settings: Parameter: Platform. Operator: is not equal to. Value: |
| is greater than | Data with a logical ordering | True when the observed parameter is larger than the value you provide, or follows it for ordinal categories. Equivalent to the mathematical relation ">" as in "5 > 3". | To process detections having high or critical severity, apply these settings: Parameter: Severity. Operator: is greater than. Value: |
| is less than | Data with a logical ordering | True when the observed parameter is smaller than the value you provide, or comes before it for ordinal categories. Equivalent to the mathematical relation "<" as in "2 < 3". | To process only automated leads having a score smaller than 50, apply these settings: Parameter: Confidence score. Operator: is less than. Value: |
| is greater than or equal to | Data with a logical ordering | True when the observed parameter is larger than or the same as the value you provide, or matches or follows it for ordinal categories. | To process all detections with a severity of medium or higher, apply these settings: Parameter: Severity. Operator: is greater than or equal to. Value: |
| is less than or equal to | Data with a logical ordering | True when the observed parameter is smaller than or the same as the value you provide, or matches or comes before it for ordinal categories. | To process only automated leads having a score of 50 or smaller, apply these settings: Parameter: Confidence score. Operator: is less than or equal to. Value: |
| exists | | True when the observed parameter exists in the workflow events. | To process only identity protection detections with a source endpoint object GUID, apply these settings: Parameter: Source endpoint object GUID. Operator: exists. |
| does not exist | | True when the observed parameter does not exist in the workflow events. | To process only endpoint detections where the grandparent process command line does not exist, apply these settings: Parameter: Grand parent process command line. Operator: does not exist. |
| includes | | True when the observed parameter equals one of the items in the list you provide. If the parameter is a string, its value must equal one of the items in the list. If the parameter is an array, one of the values in the array must equal one of the items in the list. In cases where the parameter value will not be exactly equal to one of the items in the list, use the matches operator instead. | To process only endpoint detections where the IOC type is either File name or Registry key, apply these settings: Parameter: IOC Type. Operator: includes. Value: Note: Each item in the Value field has an X symbol after it. Click the X to remove the item. |
| does not include | | True when the observed parameter does not equal any of the items in the list you provide. If the parameter is a string, its value must not equal any item in the list. If the parameter is an array, no value in the array can equal any item in the list. In cases where you want to compare the parameter value to only part of a string, use the does not match operator instead. | To process only endpoint detections where the behavior objective is neither Gain access nor Keep access, apply these settings: Parameter: Behavior objective. Operator: does not include. Value: Note: Each item in the Value field has an X symbol after it. Click the X to remove the item. |
| matches | Strings (case-sensitive) | True when the observed parameter matches your wildcard filter pattern. Note: This pattern is not a regular expression. The asterisk is the only supported wildcard and represents any text. To form the pattern, combine a string with an asterisk before it, after it, or both. You cannot insert asterisks in the middle of the string. To match the start and end of a value, create a condition to match the start; then click And to create another one to match the end. | To process only endpoint detections where the command line starts with a given string, apply these settings: Parameter: Command Line. Operator: matches. Value: To process the workflow branch only for detections when the domain of the host ends with a given string, apply these settings: Parameter: Domain. Operator: matches. Value: |
| does not match | Strings (case-sensitive) | True when the observed parameter does not match your wildcard filter pattern. The same wildcard rules apply as for matches: the asterisk is the only supported wildcard, and it can appear only before the string, after it, or both. | To process a workflow branch only for endpoint detections whose File Path parameter does not include a given directory, apply these settings: Parameter: File path. Operator: does not match. Value: |
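The asterisk-only wildcard used by matches and does not match behaves like simple prefix, suffix, and substring tests. This sketch mimics the semantics as described above (it is an interpretation of the documented rules, not the engine's implementation):

```python
def wildcard_match(value: str, pattern: str) -> bool:
    """Match using only leading/trailing asterisks, per the documented rules.

    Supported forms: 'text*', '*text', '*text*', or a literal string.
    Asterisks in the middle of the pattern are not supported and are
    treated here as literal characters.
    """
    if pattern.startswith("*") and pattern.endswith("*") and len(pattern) > 1:
        return pattern[1:-1] in value          # *text*  -> substring
    if pattern.endswith("*"):
        return value.startswith(pattern[:-1])  # text*   -> prefix
    if pattern.startswith("*"):
        return value.endswith(pattern[1:])     # *text   -> suffix
    return value == pattern                    # literal match

print(wildcard_match("powershell.exe -enc", "powershell*"))
```

To match both the start and the end of a value, the console requires two AND-joined conditions, which corresponds here to calling the function twice with a prefix pattern and a suffix pattern.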
In Fusion SOAR conditions, you can use operators such as is equal to and exists to examine a parameter and possibly a literal, or static, value. In addition, you can use functions to transform data and write more expressive conditions. For example, you can determine whether an IP address is v4 or v6 with these expressions:
| Operation | Expression | Result |
|---|---|---|
| Check whether the address is IPv4 | cs.ip.isV4('4.4.4.4') | True |
| Check whether the address is IPv6 | cs.ip.isV6('2001:0db8:85a3:0000:0000:8a2e:0370:7334') | True |
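For local reasoning about these checks, the cs.ip.isV4 and cs.ip.isV6 functions can be emulated with Python's standard ipaddress module. This is an equivalent-behavior sketch for experimentation, not the Fusion SOAR runtime:

```python
import ipaddress

def is_v4(addr: str) -> bool:
    """Return True when addr parses as an IPv4 address."""
    try:
        return isinstance(ipaddress.ip_address(addr), ipaddress.IPv4Address)
    except ValueError:
        return False

def is_v6(addr: str) -> bool:
    """Return True when addr parses as an IPv6 address."""
    try:
        return isinstance(ipaddress.ip_address(addr), ipaddress.IPv6Address)
    except ValueError:
        return False

print(is_v4("4.4.4.4"), is_v6("2001:0db8:85a3:0000:0000:8a2e:0370:7334"))
```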
For example, this expression checks whether the event data includes at least one source IP address:
size(data[source.ips]) > 0
To use these expressions, when creating or editing a condition, click Advanced mode.
With this feature, you can also compare variables. For example, you can compare an IP address from event search results with one retrieved for a particular host.
You can also use the Charlotte AI Data Transformation Agent to help you create complex CEL expressions without needing to learn the syntax. Use the expression builder to tell Charlotte AI what you want to accomplish with your data. Charlotte AI interprets your request and creates the appropriate CEL expression to meet your needs.
For more info about the functions you can use in expressions, see Fusion SOAR Data Transformation Functions.
The following example shows a condition to check for a Tactic of Post-Exploit:
data["Trigger.Detection.MitreAttack"] != null && data["Trigger.Detection.MitreAttack"].exists(x, x.Tactic == 'Post-Exploit')
The next example uses the timestamp function and the observed time for a trigger to check whether a date occurs Monday-Friday, between 8 AM and 5 PM in the Mountain Time zone:
data['Trigger.ObservedTime'] != null &&
timestamp(data['Trigger.ObservedTime']).getDayOfWeek("America/Denver") >= 1 &&
timestamp(data['Trigger.ObservedTime']).getDayOfWeek("America/Denver") <= 5 &&
timestamp(data['Trigger.ObservedTime']).getHours("America/Denver") >= 8 &&
timestamp(data['Trigger.ObservedTime']).getHours("America/Denver") <= 17
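The same Monday-Friday, 8 AM-5 PM Mountain Time window can be checked locally with Python's zoneinfo, which is useful for verifying the CEL logic before deploying it. Note that CEL's getDayOfWeek numbers Sunday as 0, while Python's weekday() numbers Monday as 0; the timestamp below is an arbitrary example value:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def in_business_hours(observed_iso: str) -> bool:
    """True when the observed time falls Monday-Friday, 8 AM-5 PM Mountain Time.

    Mirrors the CEL expression: getDayOfWeek in 1..5 (Sunday=0) and
    getHours in 8..17, both evaluated in America/Denver.
    """
    ts = datetime.fromisoformat(observed_iso).astimezone(ZoneInfo("America/Denver"))
    is_weekday = ts.weekday() <= 4          # Python: Monday=0 .. Friday=4
    in_hours = 8 <= ts.hour <= 17
    return is_weekday and in_hours

print(in_business_hours("2026-03-25T10:30:00-06:00"))
```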
In addition to supporting the same notification channels as Falcon Notifications, Fusion SOAR workflows support an expanded collection of potential actions.
If the ${data['Workflow.Definition.Description']} variable is greater than 100 characters, then the workflow returns an error on execution.
Find actions to add to your workflows by searching or browsing the action panel. Alternatively, use the content library discussed in Library of actions, apps, Foundry app templates, playbooks, and triggers.
For more info, see these topics:
You can find the actions when searching in several ways:
By the action name
By words in the action descriptions
When you search, the most relevant results are shown in the Top results section. After that, all of the results are shown in lists.
Before and after searching, you can refine the search by filtering. To see the options, click Filter. These are the options:
Hide unavailable actions
Show all the actions or just the ones that make sense in the current context by enabling or disabling Hide unavailable actions. Actions might be unavailable because of a missing prerequisite, plugin, or permissions. Showing all actions allows you to see the possibilities if you are able to resolve the requirements.
Group by
Group the actions by vendor to show only the actions relevant to the given vendor.
Group actions by use case, such as Endpoint security or Identity & Access, which shows all the actions relevant to the given use case.
Vendor
Select specific vendors to show only the actions relevant to those vendors.
After you find an action, complete these steps to add the action to your workflow:
Click the action.
Optional. Check what input the action requires by clicking View schema and then Input schema.
For more info about JSON schema, see Manage action input, action output, and on-demand triggers.
Check the output so you know what to expect from the command by clicking Output schema.
Configure the action.
Click Next.
If you create your own actions, such as actions based on event queries or in Foundry apps, these actions appear in the group named Other (Custom, Foundry, etc.).
You can use many of the actions you find in the console without additional info. However, some of them have usage details that aren’t obvious. This list provides more info about those actions:
Enrichment: Incorporate device or third-party vendor data for enriched context
Get customer details: This action is available only in parent CIDs in Flight Control environments.
Real time response: Execute Real Time Response commands
Execute commands aligned with your RTR role:
Get file (get)
Retrieves host files.
Kill process (kill)
Takes a process ID as input and stops the process from running.
A common subsequent RTR command workflow action is Remove file.
Process memory dump (memdump)
Available with Windows only.
Requires a condition that specifies the sensor platform is equal to Windows. For more info, see Workflow conditions.
Takes a process ID as input.
Output is the local file path of the memory dump file saved on the host.
Common subsequent RTR command workflow actions include:
Get file then Remove file
Kill process then Remove file
Put file (put)
Available with Windows, Mac, and Linux only
Requires a condition that specifies the sensor platform is equal to Windows, Mac, or Linux. For more info, see Workflow conditions.
Takes an executable file name from the list of “PUT” files uploaded to Host setup and management > Response and containment > Response scripts and files.
A destination path that specifies where to download the file must be provided.
Put and run file (put-and-run)
Available with Windows and Mac only
Requires a condition that specifies the sensor platform is equal to Windows or Mac. For more info, see Workflow conditions.
Takes an executable file name from the list of “PUT” files uploaded to Host setup and management > Response scripts and files.
Supports optional command line parameters that are passed when the file runs.
Remove file (rm)
Can't remove files from protected system directories, such as C:\windows\system32. Attempts to do so produce a failed workflow action and this error: Access to the path is denied.
To improve workflow efficacy:
Set up other workflow actions to run in parallel with the rm action.
Avoid setting up critical actions after an rm action. If the rm action fails, subsequent actions are skipped.
Retrieve active network connections (netstat)
Retrieve running processes (ps)
Run file (run)
Takes the file provided in the trigger event as input. For more info, see Workflow triggers.
Supports optional command line parameters that are passed when the file runs.
Execute Falcon scripts.
Select the script name, and then provide arguments as required.
Available with Windows only.
Requires a condition that specifies the sensor platform is equal to Windows. See Workflow conditions.
Only users with the RTR Administrator role can add Falcon script actions to workflows.
Execute RTR custom scripts. For more info, see Managing custom response scripts.
Identity Protection: For example use cases for the following Identity Protection workflow actions, see Identity Protection in Fusion SOAR.
Endpoints must be in an Active Directory domain monitored by Identity Protection. CrowdStrike recommends adding a condition that filters out non-Windows platforms before adding this action to a workflow. For more info, see Workflow conditions.
Event search: Run an event search to gather data for actions or triggers.
For example, you could find users logged in during the last day. Check each user against the Identity Protection watchlist. Then, for the users on the watchlist, send the users email to notify them that their systems will be contained, and then contain the systems. Alternatively, when a detection occurs, you could get the process associated with the detection and find all network connections associated with that process. Then add this info to a report and distribute it in an email.
After the event search action runs, the results are available to use in an action or condition. The results vary based on the search action. To get the results when you are defining an action that accepts a file, such as the Create Jira issue action, find the name of the event search action in the Data to include list. To get the results when you are defining a condition, find the name of the event search action in the Parameter list. For example, for the condition shown below, under the name of the event search action, Retrieve executed processes, the only option to access the results is Unformatted results. Another option you might see is Full results download URL.
Identity Protection Watchlist Update after Credential Access Detection: Automatically adds users and hosts to the Falcon Identity Protection watchlist in response to identity-based incidents that use MITRE ATT&CK® Credential Access tactics.
ServiceNow Ticket Creation for High Severity Detections: Creates a ServiceNow ticket for high-severity detections, with recently executed processes and command line history attached to the ticket.
Call webhook: Send an HTTP request to a webhook URL
The Call webhook action supports customizable input through its Custom JSON field. This support provides flexibility in exporting and managing data by allowing you to define a mapping to tailor data to the downstream tool processing the webhook.
When you add a Call webhook action, you have these Data format options for the data to export:
Default
Displays the Data to include list that allows you to select multiple items to use.
Custom JSON
Displays a field where you enter your JSON to map Falcon data to the desired custom JSON format. To help you map data to the desired fields, click Workflow data and then copy and paste variables.
For example, you can create a simple mapping of data to the desired property names.
As another example, you can map data by category. Here we have two categories: One for detection data and one for workflow data.
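A category-based mapping like the one described can be sketched as a plain JSON template. Here Python stands in for the console's Custom JSON field, and the workflow data paths and values are illustrative examples rather than real variable names:

```python
import json

# Illustrative workflow data; in the console, real values come from
# ${data[...]} variables pasted from the Workflow data panel.
workflow_data = {
    "Trigger.Detection.Severity": "High",
    "Trigger.Detection.Hostname": "WIN-SRV-01",
    "Workflow.Execution.ID": "abc-123",
}

# Map Falcon data into two categories: detection data and workflow data.
custom_payload = {
    "detection": {
        "severity": workflow_data["Trigger.Detection.Severity"],
        "hostname": workflow_data["Trigger.Detection.Hostname"],
    },
    "workflow": {
        "execution_id": workflow_data["Workflow.Execution.ID"],
    },
}

print(json.dumps(custom_payload, indent=2))
```

The downstream tool then receives only the fields you mapped, under the property names it expects, rather than the full default payload.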
Sleep: Wait a specified time before proceeding to the next action
Darkfeed
Falcon for Mobile built for Microsoft Intune
IPQS Threat & Risk Scoring
Iris Threat Intelligence
Rubrik Security Cloud
SecurityScorecard Cyber Risk Ratings
VirusTotal
To help you find Fusion SOAR workflow options that you can use to set up or modify a workflow, visit the library at Fusion SOAR > Fusion SOAR > Content library.
The content library provides a single location where you can find actions, apps, Foundry app templates, playbooks, and triggers. You can then explore what's possible, learn how to use content more effectively, or more quickly find what you need to define a workflow for your particular scenario. For more info, see Workflow actions and Fusion SOAR Playbooks.
To find items that most interest you, you have several options:
List, sort, and browse
Search
You can search for a whole phrase or just the beginning of a phrase.
Filter in several ways:
By vendor, so you can focus on items that work with the selected vendor
By use case, such as Application security or Exposure management
By multiple use cases at the same time
On the Apps tab, you see these items for the apps:
Names and descriptions of the apps
Their use cases
When they were last updated
Their status: Installed or –, meaning not installed
When you click an app, you see these items:
The app description
Actions in the app
To see descriptions plus input and output schemas, if any, for an action, expand its listing
For more info about schemas, see Manage action input, action output, and on-demand triggers.
Configuration instructions for the app
A Configure app button you can click to set up the app or modify its configuration
On the App Templates tab, you see these items for the Foundry app templates:
Names and descriptions of the app templates
Their use cases
When they were last updated
The number of actions they include
When you click an app template, you see these items:
A Deploy in Foundry button that takes you to Foundry to deploy the app template
On the Playbooks tab, you see these items for the playbooks:
Names and descriptions of the playbooks
Their use cases
When they were last updated
The number of actions they include
When you click a playbook, you see these items:
The playbook description
The playbook definition
The trigger and actions used in the playbook
An Open in Fusion SOAR button you can click to open the playbook and customize it in Fusion SOAR.
On the Actions tab, you see these items for the actions:
Names and descriptions of the actions
Their use cases
When they were last updated
When you click an action, you see these items:
The action description
The input schema and output schema, if any, for the action
For more info about schemas, see Manage action input, action output, and on-demand triggers.
On the Triggers tab, you see these items for the triggers:
Their use cases
When they were last updated
When you click a trigger, you see these items:
The trigger description
The output schema, if any, for the trigger
For more info about schemas, see Manage action input, action output, and on-demand triggers.
You can manage variables that you define using the Create variable and Update variable actions.
Also, Fusion SOAR provides many variables you can use to include info, such as Observed event time and Trigger source IP, in notifications and elsewhere.
Whenever you can insert a variable in a field, the Workflow data panel appears. This panel shows the available variables. To insert a variable, click a desired variable in the panel to copy it, and then paste it in the field where you're working.
You can transform variables in a field using the functions mentioned in Fusion SOAR Data Transformation Functions.
Workflows can loop through items found in a list of data in the output of certain triggers and actions. For each item, the workflow can make a decision or take an action based on that item. The lists depend on the trigger or action. For example, the Detection > EPP Detection trigger output includes a list of Host groups. For an example of an action, under the Threat Graph type, the Get devices associated with a sha256 hash action produces a list of hosts.
Here are some possible uses for loops:
Polling for results
The loop polls an API and then uses the Sleep action to wait for a response. When the API returns a specific response or status code, the Loop break action stops the loop. The workflow continues to its next component.
Paginating API responses
For APIs that support pagination, use a While loop to make requests until a response contains the desired data. Using this technique, you avoid pulling all the data at once. The While loop would contain a nested loop to check the data for a certain value. When that value is found, a Loop break action in the nested loop breaks out of both loops.
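The pagination pattern above can be sketched with a stubbed API. The page shape, the fetch_page stub, and the target value are assumptions for illustration; a real workflow would call an HTTP action inside the loop:

```python
# Stub of a paginated API: each call returns one page of items and
# whether more pages remain.
PAGES = [["a", "b"], ["c", "target"], ["e"]]

def fetch_page(page_number: int):
    items = PAGES[page_number]
    has_more = page_number < len(PAGES) - 1
    return items, has_more

def find_value(target: str):
    """While loop: request pages until the target is found or pages run out."""
    page_number = 0
    while True:
        items, has_more = fetch_page(page_number)
        for item in items:            # nested loop checks the page's data
            if item == target:
                return page_number    # Loop break: exits both loops
        if not has_more:
            return None
        page_number += 1

print(find_value("target"))
```

Returning from inside the nested loop plays the role of the Loop break action, ending both the inner check and the outer While loop at once.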
Making loop data available to the rest of a workflow
For a loop with sequential iterations, you can preserve data from the loop for use after the loop. The output is an array of objects where each object corresponds to a loop iteration. You can then use the entire array or its components in other parts of the workflow.
Using a variable within a loop
With the Create variable action, define a variable that you then update within the loop using the Update variable action.
For a loop with sequential iterations, this variable is available outside the loop and has the value set in the last loop iteration.
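A minimal Python sketch of the create/update pattern; the counter and items are illustrative only:

```python
# Create variable: define a counter before the loop.
processed_count = 0

for item in ["a", "b", "c"]:
    # Update variable: change its value inside each sequential iteration.
    processed_count += 1

# After a sequential loop, the variable holds the value set in the
# last iteration, so it can be used by later workflow components.
```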
Loop types
For each: Iterate over a defined dataset concurrently or sequentially until the dataset is exhausted or a Loop break action occurs.
While: Iterate sequentially over loop-scoped data until a specified condition is met or a Loop break action occurs.
With this loop type, you also have the option to end a loop based on the number of iterations and time in the loop.
Create a custom variable that you set to the value of the desired parameter before the loop. Use the Create variable action.
Use that custom variable as the parameter in the While condition.
Update the value of the custom variable inside the loop to match the current value of the desired parameter. Use the Update variable action.
Processing orders
At the same time / concurrently
Available only to For each loops.
Runs iterations in batches of 500, with the iterations in each batch running at the same time. Batches run one at a time. The batch size is subject to change.
Concurrent loops do not output data that you can directly use in subsequent actions or loops; however, you can use the Write to log repo action to retain the output.
Any custom variable updated in a concurrent loop doesn't retain the update after the loop iteration.
One after the other / sequentially
Available to For each loops and While loops.
Runs each iteration in order.
Loops end when all iterations are complete or a Loop break action occurs.
While loops can also end based on a condition, the number of iterations, and time in the loop.
Sequential loops output data arrays that you can use in subsequent actions or loops.
Any custom variable updated in a sequential loop retains the update made in the last loop iteration.
Nested loops
A loop within a loop is a nested loop. Nesting only goes one level: Nested loops can’t contain loops.
A loop can have one or more loops nested in it. With multiple nested loops, the loops must be on different branches and loop on different items.
Related actions
Loop break
Use this action after a condition to end the loop when that condition is met.
When you use this action within a nested loop, specify whether to end the parent loop or the grandparent loop.
Create variable
Use this action to create a variable.
Update variable
Use this action to update a variable inside a loop.
Output availability
You can make the output of actions in a sequential loop available to use later in the workflow.
The End Loop item identifies the loop's output.
The output is an array of objects, where each object corresponds to an iteration of the loop and each field corresponds to a chosen loop output key. You can then use the entire array or its separate fields in other parts of the workflow.
In a loop with multiple actions, the output from each action is only available to subsequent actions in that loop. For example, the second action can use the output of the first action. However, the first action can’t use the output of the second action.
The output of an action not in a loop is available to any actions that come after it on the same branch.
Default iteration limits
To prevent indefinite loops when you use a While loop or a For each loop with sequential processing, loops have a default limit of 50 iterations or 60 minutes, whichever comes first. To adjust these values, select the Limit iterations option and enter your desired values.
A loop can run, or iterate, up to 100,000 times. If a loop attempts to iterate more than 100,000 times, both the loop and its workflow go into an error state. Such a workflow is not available to retry. See Retry a failed execution. To continue a workflow even when a loop fails, use the Continue workflow on loop iteration failure option when you create loops. See Loop iteration failure.
With nested loops, both the outer loop and the nested loop can iterate up to 100,000 times each.
In concurrent loops, loop iterations are processed in batches, one batch at a time. As a result, larger loops can take hours to complete.
In both concurrent and sequential loops, the amount of time for a workflow to complete depends on the number of actions in each loop and, if those actions aren’t completed quickly enough, the timeouts for those actions.
To reduce a workflow’s execution time, decrease the amount of work in each loop.
Consider loops A, B, and C.
Loops A and C are parallel:
They run at the same time.
There is no dependency between the loops. If loop A fails, loop C still runs.
The output of any actions in loop A is not available in loop C.
The output of any actions in loop C is not available in loop A.
Loops A and B are sequential:
Loop A must finish before loop B can start.
If loop A fails, loop B might not start—depending on the Continue workflow on loop iteration failure option. For more info, see Loop iteration failure.
The output of any actions in loop A is not available in loop B.
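These scheduling relationships can be sketched in Python with threads; the loop bodies are trivial stand-ins, and the thread pool is only an analogy for how Fusion schedules parallel branches:

```python
from concurrent.futures import ThreadPoolExecutor

def loop_a():
    return [f"A-{i}" for i in range(3)]

def loop_b():
    # Loop A's output is not available here; B only waits for A to finish.
    return [f"B-{i}" for i in range(2)]

def loop_c():
    return [f"C-{i}" for i in range(3)]

with ThreadPoolExecutor() as pool:
    future_a = pool.submit(loop_a)   # A and C are parallel: they start together
    future_c = pool.submit(loop_c)
    a_result = future_a.result()     # B is sequential: it starts only after A finishes
    b_result = pool.submit(loop_b).result()
    c_result = future_c.result()     # C runs regardless of A's outcome
```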
The possibilities for looping are numerous. The following examples give you some ideas.
In this example, the workflow looks for hashes that are considered malicious. It then gets devices that have that hash. The loop iterates through the list of devices and contains those devices. Next, the workflow assigns the detection to a user, adds a comment to the detection, and sends an email that indicates how many devices were contained.
Here’s what that workflow looks like on the canvas.
To create this workflow, the high-level steps are as follows.
Create a Detection > EPP detection trigger.
Add the VirusTotal File Hash Lookup action for the Executable SHA256 found in the detection.
Create a condition to check whether the VirusTotal file Malicious Count exceeds the threshold, which is 10 in this example.
Add the Get devices associated with a sha256 hash action and the Executable SHA256 hash.
From that action, create a loop with an input type of SHA256 executions.
In that loop, add the Contain device action with Device ID set to Host ID.
After the loop, add the Assign detection to user action and specify the user.
Add the Assign comment to detection action and specify a comment to indicate that a workflow took action.
Add the Send email action.
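The logic of this example workflow can be sketched in Python. Every callable here is a hypothetical stand-in for the corresponding Fusion SOAR action, not a real API:

```python
MALICIOUS_THRESHOLD = 10  # matches the example's VirusTotal threshold

def handle_detection(detection, lookup_hash, get_devices, contain_device,
                     assign_detection, add_comment, send_email):
    """Sketch of the containment workflow; returns the contained-device count."""
    verdict = lookup_hash(detection["sha256"])             # VirusTotal File Hash Lookup
    if verdict["malicious_count"] <= MALICIOUS_THRESHOLD:  # condition on Malicious Count
        return 0
    devices = get_devices(detection["sha256"])             # list the loop iterates over
    for device in devices:                                 # loop: Contain device per host
        contain_device(device["host_id"])
    assign_detection(detection["id"], user="analyst")      # after the loop
    add_comment(detection["id"], "Workflow contained affected devices")
    send_email(f"Contained {len(devices)} devices")
    return len(devices)
```

Calling it with stubbed actions shows the flow: a verdict above the threshold contains each device and then sends one summary email.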
This example uses nested loops. The outer loop iterates through devices getting processes associated with an indicator of compromise, or IOC. For each device, the inner loop iterates through that list of processes to kill the process, remove the file, and add the host to the Falcon Identity Protection watchlist. Next, the workflow sends an email that indicates how many devices were added to the watchlist.
For this workflow, the high-level steps are as follows.
Set up a Detection > EPP Detection trigger.
Create a condition using Advanced mode with the following CEL expression, which checks for detections with a severity of High and a tactic of Post-Exploit:
data['Trigger.Detection.Severity'] == 4 &&
data['Trigger.Detection.MitreAttack'] != null &&
data['Trigger.Detection.MitreAttack'].exists(x, x.Tactic == 'Post-Exploit')
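For clarity, the same check can be expressed in Python over a plain dict of trigger data (severity 4 corresponds to High in this example; the field names come from the CEL expression):

```python
def matches_condition(data):
    """Python equivalent of the CEL condition on the EPP Detection trigger."""
    if data.get("Trigger.Detection.Severity") != 4:
        return False
    mitre = data.get("Trigger.Detection.MitreAttack")
    if mitre is None:
        return False
    # CEL: .exists(x, x.Tactic == 'Post-Exploit')
    return any(entry.get("Tactic") == "Post-Exploit" for entry in mitre)
```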
Add this action: Get devices associated with a sha256 hash with SHA256 hash set to Executable SHA256.
From that action, create a loop with a Loop type of For each and a Loop source of SHA256 executions.
In that loop, add this action: Get processes associated with an ioc with Vertex id set to Module vertex instance.
Still in that loop, create another loop with a Loop type of For each and a Loop source of Cloud process IDs.
In the inner loop, complete these steps:
Add this action: Get process details with Cloud process ID set to the Cloud process IDs instance.
Create a condition to check whether Process terminated is False.
Add this action: Kill process with Device ID set to Host ID instance and Process ID set to OS Process ID.
Add this action: Remove file with Device ID set to Host ID instance and File Path set to File path.
After the inner loop, add this action: Add endpoint to watchlist with Endpoint ID set to Host ID instance.
After both loops, add this action: Send email to indicate a workflow took action and to include data such as Device count and any other necessary info.
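The nested-loop logic of this example can be sketched in Python. Each callable is a hypothetical stand-in for the Falcon action named in the steps:

```python
def remediate(devices, get_processes, get_details, kill, remove, watchlist_add):
    """Sketch of the nested-loop workflow; returns the watchlisted-device count."""
    added = 0
    for device in devices:                    # outer loop: devices with the hash
        for pid in get_processes(device):     # inner loop: processes for the IOC
            details = get_details(pid)        # Get process details
            if not details["terminated"]:     # condition: Process terminated is False
                kill(device, pid)             # Kill process
                remove(device, details["file_path"])  # Remove file
        watchlist_add(device)                 # after the inner loop: Add endpoint to watchlist
        added += 1
    return added                              # device count for the final email
```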
This example creates a variable and then uses a For Each loop to check whether any users who have Administrator privileges have changed their passwords. If any of these users have changed their passwords, they are added to the variable. After the loop finishes, the variable is used in the body of an email. The email is empty if no users with Administrator privileges have changed their passwords.
For this loop, the high-level steps are as follows.
Create a scheduled trigger that runs hourly.
Add the Create variable action. We'll use this variable to collect a list of users with Administrator privileges who have changed their passwords.
Name the variable Users. Here's the JSON schema that defines the variable:
{
  "type": "object",
  "properties": {
    "Users": {
      "items": {
        "type": "string"
      },
      "type": "array"
    }
  }
}
Add the Find Active Directory password updates action.
Add a loop with these settings:
Loop type: For each
Loop source: Search results
Processing order: One after the other / sequentially
Inside the loop, complete these steps:
Add a condition to check if the Account Object SID instance exists.
If it does exist, add a condition to check if user privileges includes Administrator.
Add the Update variable action.
If the user does have Administrator privileges, we update our variable to include the user.
In Variable, select the array variable we created: Users. In Value, include Users and User name.
After the loop, add the Gmail - Send Email action.
Include the Users variable in the email. The Workflow data panel shows all the available variables. In that panel, after Custom variable, the Users variable appears. Click the variable to copy it. Place your cursor in Message Body and paste the variable.
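The collection logic of this example can be sketched in Python; the search-result field names (`account_sid`, `privileges`, `user_name`) are hypothetical stand-ins for the actual Active Directory fields:

```python
def collect_admin_password_changes(search_results):
    """Build the Users variable from password-update search results."""
    users = []                                    # Create variable: empty array
    for result in search_results:                 # sequential For each loop
        if result.get("account_sid") is None:     # condition: Account Object SID exists
            continue
        if "Administrator" in result.get("privileges", []):
            users.append(result["user_name"])     # Update variable: add the user
    return users                                  # used in the email body after the loop

results = [
    {"account_sid": "S-1", "privileges": ["Administrator"], "user_name": "ada"},
    {"account_sid": None,  "privileges": ["Administrator"], "user_name": "bob"},
    {"account_sid": "S-3", "privileges": ["User"],          "user_name": "cam"},
]
email_body = ", ".join(collect_admin_password_changes(results))
```

With no qualifying users, the list is empty and the email body is empty, matching the behavior described above.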
In concurrent loops, a loop tries to run for each item of input, up to a maximum of 100,000 iterations. If any of those iterations does not complete successfully, the workflow behavior depends on the Continue workflow on loop iteration failure option. This option is available to select only when you define a concurrent loop.
| Continue workflow on loop iteration failure option | Description |
|---|---|
| Selected | Even if a loop iteration fails, or a loop has zero iterations because of a lack of input, the workflow continues to execute any nodes after the loop itself. If any iterations fail, the entire execution shows as failed even if actions after the loop succeeded. Select the option to treat the loop executions as "best effort." |
| Deselected | The workflow does not proceed any further. Don't select this option in either of these cases: |
Data transformation is available for actions, conditions, loops, loop output, and workflow output.
You can use data transformation functions in any field that has an associated Expressions and data transformations icon. To use data transformation functions, click Expressions and data transformations to open the expression builder. Using the builder, you can choose a view that helps you create your expression or a view that lets you code the expression directly.
You can also use the expression builder to describe your data transformation goals to the Charlotte AI Data Transformation Agent in everyday language. For more info, see Conditions in Advanced mode using data transformation functions.
For info about the related functions, see Fusion SOAR Data Transformation Functions.
You can create workflow actions that query various Falcon data sources. The available data sources depend on your Falcon subscriptions.
This feature improves security efficiency and reduces response time by eliminating manual enrichment and correlation tasks.
By creating workflow actions that run queries, security engineers can perform activities such as these:
Query data in a repository from a Fusion SOAR workflow to eliminate a manual, redundant task done by an analyst or take an action based on the output of the query
Create a scheduled workflow that completes a proactive threat hunt by running a series of event queries and then sends an alert if suspicious event data is found
When you create an action based on an event query, you select whether the query is specific to the workflow or to the CID:
Workflow-specific:
If a query is workflow-specific, the action that uses the query is available only to that workflow. The action does not show up as an option when adding an action to other workflows.
To use the exact same action later within the same workflow, you must define the action from scratch again.
You can export the workflow and import it in other CIDs if nothing else prevents the import of the workflow.
CID-specific:
If a query is CID-specific, the action that uses the query is available to all the workflows on the CID.
You can't import the workflow in other CIDs.
For any actions created with event queries before July 24, 2025, the event queries are CID-specific.
After you create a query, you cannot change whether it is workflow-specific or CID-specific.
For info about requirements for event queries related to subscriptions, CrowdStrike clouds, and roles, see Requirements.
For info about how to manage these actions, see Create and manage actions based on event queries.
Be aware of these limitations:
You can only query repositories that CrowdStrike provides. Querying data sources you host is not possible.
While the email action available in Fusion SOAR does allow you to append query results to the message body, the results are limited to 5,000 characters and returned in JSON format. However, you can attach a file that is up to 10 MB in size.
Search actions are available only in the CIDs where they were created.
By default, query results are limited to 200 rows.
You can increase this limit to 10,000 rows by specifying a tail in the query. To specify a tail, pipe the query to tail(x):
| tail(x)
where 200 < x <= 10000.
Similarly, you can pipe a query to head(x).
The query result supports only primitive types, that is, types that represent a single value.
For example, {key: foo} is supported, but {key: [foo, bar]} is not.
As you prepare to create and refine workflows, there are important considerations for each action type you want a workflow to perform.
Review how endpoint detections have been assigned to your users, tagged, and moved through statuses to identify what actions you might want to automate.
Review how your organization has remediated incidents and detections in the past to establish which kinds of actions you might want to automate on specific kinds of detections.
It’s essential that you craft workflows that involve remediation with specific conditions to avoid taking action on a large number of hosts.
Consider creating or updating RTR custom scripts. Custom scripts with the Share with workflows toggle enabled are available as RTR actions in workflows. For more info, see Managing custom response scripts.
Ensure your Response Policies, set using Host setup and management > Response and containment > Response policies, are configured to allow the actions you’re expecting from your workflows.
You can enable additional security by going to Support and resources > Resources and tools > General settings. In the Security menu, select Re-authentication for critical actions, and then enable these settings:
Real Time Response (RTR) identity verification
Before enabling a Fusion SOAR workflow with any RTR action
With these settings enabled, if a workflow has an RTR action, when you enable that workflow or run it on demand, Fusion SOAR prompts you to verify your identity using multifactor authentication, or MFA.
This action uses RTR sessions:
On-Premises HTTP Request
To use this action, configure a response policy with Real Time Response (RTR) enabled and assign it to the host group.
Make sure the response policy has these settings enabled:
Custom Scripts
get
put
run
put-and-run (Windows and macOS)
For more info, see Configuring response policies and Assigning a response policy to host groups.
If you have any workflows with an Audit event > RTR Session trigger, this action will trigger those workflows. However, you can set a condition in these workflows to ignore the RTR sessions created by this action. To set such a condition, set its fields as indicated here:
Set Parameter to Connected From
Set Operator to is not equal to
Set Value to On-premises API call
Then click Add condition line and set the fields as indicated here:
Set Parameter to User
Set Operator to is not equal to
Set Value to [email protected]
See Set up VirusTotal integration.
Think about what notifications you want to set up and review your existing Falcon notifications, noting potential crossover that might create duplicate notifications for your users.
Carefully consider what people in your organization need to know about and the best way to reach them. Be precise about what triggers notifications and who receives them so you don’t fire off too many for your users to handle.
Set up plugin integrations for notification channels, if needed:
Jira
Microsoft Teams
PagerDuty
ServiceNow
Slack
Webhook
You can trigger workflows for unmanaged assets, unsupported assets, and new external assets using an asset management trigger.
Triage unmanaged and unsupported assets. These are assets that don't have the Falcon sensor installed.
Asset actions aren't available in CrowdStrike government clouds.
You can test and debug your Fusion SOAR workflows throughout your workflow creation process. Whether you are testing an action or the entire workflow, it is automatically saved as a draft before a test is executed.
| Test | Purpose | Approach |
|---|---|---|
| Test data transformations | Ensure accuracy, reliability, and data integrity. As transformation logic grows more complex, test to capture errors early. | Evaluate and iterate on transform logic with mock data or user-defined data. |
| Unit test | Reduce debugging complexity by validating integrations (third-party HTTP requests are working) and data flow (output from one action correctly flows into the next). | Confirm the workflow can complete defined business logic from start to finish without errors. Test data flow by modifying trigger mock data. |
| End-to-end (E2E) | Confirm the workflow can complete a task from start to finish without errors. Detect issues that unit tests miss, particularly in data flow. | When testing end-to-end, you have two options for the data. Note: On demand and scheduled workflows can't be tested live. |
When editing an action, you can test it without running the entire workflow. This is known as a unit test.
Be aware that error messages vary based on the action being tested. To avoid repeated failures, review the expected input versus the values you’ve entered on the Configure tab.
For example, if you enter an invalid value such as ngsiemCaseID in the Case ID field, the action fails and returns an error: Case was not found. Update the value to a valid case ID and try again. Now that you've reviewed your input data, you are ready to unit test.
If a workflow execution fails, use the node-level execution history to debug the workflow and understand why a specific action failed or a condition didn't evaluate.
For example, the execution history can show why a Contain Device action failed.
To find your Fusion SOAR workflows, go to Fusion SOAR > Fusion SOAR > Workflows.
For the purposes of debugging, the Total executions panel is a filtered view of the Execution log displaying executions from the last 30 days or 10,000 triggers. If you need to see executions up to 90 days, go to the Execution log. For more info, see View the execution log.
To see and manage workflows, go to Fusion SOAR > Fusion SOAR > Workflows.
Click Open menu for any workflow to access options to view, edit, and more.
Through the process, you choose a trigger, add conditions to refine the trigger, and define actions to be performed when the exact trigger conditions are met. As you work through creating a workflow, the Workflow preview shows you where you are in the process and the attributes of the workflow you’re creating.
Workflows can be straightforward and sequential, or can accommodate specific scenarios and needs by adding Else If conditions, Else actions, or loops to create branches.
The Fusion SOAR workflow builder consists of the workflow canvas and various panels.
The canvas is a visual representation of the workflow functionality.
The panels let you define individual elements in your workflow.
If you are creating or editing a workflow, you can navigate the workflow canvas several ways.
To move the workflow within the canvas, click and hold a spot in the canvas, and then drag it.
To zoom in or out on a particular spot in a workflow, hover over that spot and use your mouse's scroll wheel.
To edit an item, click it and, depending on the item, click Edit or Back.
To see options to delete an item or to add more items immediately after it, hover over the item.
Also, you can select items in the workflow by using the Tab key.
When tabbing, press Enter to act on the selected item. Within a workflow, pressing Enter could show details for the item and allow you to edit it, delete the item, or present items you can add after the current item. Consider these scenarios:
If pressing Tab selects an action item, pressing Enter displays the panel where you can edit the action.
If pressing Tab selects a delete icon, pressing Enter deletes the item.
If pressing Tab selects an add icon at the bottom of an item, pressing Enter shows the items you could add at that point.
Outside a workflow, tabbing provides access to the menu bar, where you also press Enter to act on the selected item.
Click Create workflow.
For Flight Control environments, users with the Falcon Administrator role in the parent CID can choose which child CIDs the workflow applies to. Those workflows can trigger for any child CID, even if the users don’t have permissions in the child CIDs. However, users with the Workflow Author role in a parent CID can only create workflows that trigger for that parent CID: Those users aren’t able to create workflows for that CID’s child CIDs.
When applying a workflow to child CIDs, you have these options:
| Option | Description |
|---|---|
| All child CIDs | Applies the workflow to all current and future child CIDs. However, you can explicitly exclude some CIDs. |
| Specific CIDs | Applies the workflow only to the CIDs you specify. |
| Only current CID | Applies the workflow only to the current CID. |
If you define a workflow in a parent CID, the definition of the workflow, its execution log, and its audit log are only shown in the parent CID and not any of its child CIDs.
Select whether to create a workflow from scratch, by using a playbook, or by importing a workflow, and then click Next.
From scratch: Define all of the settings of the workflow yourself.
Playbooks: To help you create your workflow, several playbooks are available. They serve as templates that you modify to quickly set up workflows in your environment. Playbooks simplify and automate common use cases and demonstrate workflow possibilities. Each playbook includes its own setup steps. For more info, see Fusion SOAR Playbooks.
Import: If you have exported a workflow, you can import its definition file.
For more info, see Export or import a workflow.
If you import a workflow, skip the rest of these steps and follow the steps in Import a workflow instead.
If creating a workflow from scratch, find an event-based trigger or select a trigger type:
Event: The workflow is triggered by an event in the Falcon environment.
To find an event-based trigger, use the search field or browse the lists of use cases. Either way, you can filter the triggers shown and adjust the sort.
For info about event triggers, see Workflow triggers.
Scheduled workflow: The workflow is triggered regularly, based on a defined schedule, such as hourly, daily, weekly, or monthly. These options are available to specify the details of the workflow schedule:
How often: Schedule a workflow to run hourly, daily, weekly, or monthly.
Start time: Specify a time for the workflow to start. For example, if you specify a daily workflow, you can set it to start at 2 AM each day.
Time zone: Specify the time zone that should be used for running the scheduled workflow. The workflow settings, such as start time, adhere to this time zone, not the time zone of a Falcon console user or an asset like an affected host.
On these days: For weekly or monthly scheduled workflows, select one or more days of the week or month to run the workflow. For example, you can specify a monthly workflow to run on the 1st and 15th days of the month or you can specify a weekly workflow to run on Tuesday and Thursday of each week.
Add start or end date: Optionally, specify a date to start or end the schedule for the workflow runs. If specified, the workflow runs according to the specified schedule only during the selected timeframe.
Skip if a previous execution is still in progress: Select if you want to skip an iteration of the scheduled workflow if the previous iteration is still in progress. For example, if you have an hourly scheduled workflow, you might not want to start a new iteration if the previous hour’s workflow is still in progress.
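The interplay of Start time and Time zone can be sketched in Python. `next_daily_run` is a hypothetical helper, not part of Fusion SOAR; it shows that the schedule follows the workflow's configured time zone, not the console user's:

```python
from datetime import datetime, time, timedelta
from zoneinfo import ZoneInfo

def next_daily_run(now_utc, start_time, tz_name):
    """Next run of a daily schedule whose start time is fixed in the
    workflow's time zone."""
    tz = ZoneInfo(tz_name)
    local_now = now_utc.astimezone(tz)
    candidate = datetime.combine(local_now.date(), start_time, tzinfo=tz)
    if candidate <= local_now:          # today's start time already passed locally
        candidate += timedelta(days=1)  # so the next run is tomorrow
    return candidate

# A daily workflow set to start at 2 AM in America/New_York,
# evaluated at 12:00 UTC (08:00 local) on March 29, 2026.
run = next_daily_run(
    datetime(2026, 3, 29, 12, 0, tzinfo=ZoneInfo("UTC")),
    time(2, 0),
    "America/New_York",
)
```

Because 2 AM local time has already passed, the next run lands at 2 AM local time the following day, regardless of the viewer's own time zone.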
If you prefer not to require any actions or fields when a user runs the workflow, go to the next step.
Specify the required items for the workflow to run by defining a JSON schema for the mandatory actions and fields.
To convert sample JSON into a JSON schema, click Generate schema and paste in the sample JSON.
To create the schema one property at a time, click Add property.
For example, this schema defines a deviceID input in agent ID (aid) format:
{
  "type": "object",
  "properties": {
    "deviceID": {
      "type": "string",
      "format": "aid"
    }
  }
}
You could use that input in a workflow to contain a device and create a Jira ticket to investigate the situation.
For more info about JSON schema, see Manage action input, action output, and on-demand triggers.
Inbound webhook: The workflow generates a unique webhook URL for the given workflow and is then triggered by an external system that uses that URL. For more info, see Custom triggers based on inbound webhooks and Create and manage triggers based on inbound webhooks.
Click Next.
Your selected trigger appears on the workflow canvas.
To change the trigger selection, click the trigger element on the canvas and click Back to return to trigger selection.
Undo and Redo buttons are available.
In the Add next panel or on the workflow canvas, select Condition .
In the panel, make your selections to define the condition and click Next.
For info about conditions, see Workflow conditions.
You can use a wildcard (*) at the start, end, or both of the value entered.
Refine the trigger further by adding more conditions. Click Next when all conditions have been added.
In the example, an additional condition has been added to show how multiple conditions are grouped to visually indicate that all must be met for the workflow to be triggered.
In the Add next panel or on the workflow canvas, select Action to add the action to be performed when this condition is met.
Select and define an action.
For an introduction to actions, see Workflow actions.
Review the workflow in the panel and canvas. With a trigger, conditions, and actions defined, this is a complete workflow. If the Issues panel opens, you have warnings or errors to resolve.
Click Save and exit.
Add sequential or parallel elements to your workflows by adding them to the canvas and defining them.
Hover over the trigger and click the add icon to expand the menu.
Select an option:
Sequential action: Add an action to the current workflow branch
Sequential loop: Add a loop to the current workflow branch
Parallel action: Create a branch with an action
Parallel condition: Create a branch with a condition
Parallel loop: Create a branch with a loop
Define and save your elements.
Else If conditions and Else actions let you create separate workflow branches for different conditions.
On the workflow canvas, hover over the condition and click the add icon to expand the menu.
Select an option:
Else If condition: Creates a condition branch that is checked if the condition you branch from is not met
For info about conditions, see Workflow conditions.
Else action: Creates an action that is performed if the condition you branch from is not met
Else loop: Creates a loop that is performed if the condition you branch from is not met
In the details panel, define and save your elements.
Add loops to your workflows by adding them to the canvas and defining them in the details panel. Loops can be sequential or parallel. You can also nest loops. Add loops after triggers, conditions, and actions. The items available to loop through depend on the trigger, condition, or action.
For more info, see Looping in workflows.
Hover over the trigger, condition, or action where you want to add a loop and then, to expand the menu, click one of the add icons.
Select an option:
For info about the Continue workflow on loop iteration failure option, see Loop iteration failure.
Loop: Create a While loop or a For each loop to iterate through items
Parallel loop: Create a branch with a loop to run at the same time
Sequential loop: Add a loop to execute in a specific order in the current workflow branch
To nest a loop inside another loop, hover over the beginning of the loop where this new loop should be nested and click the add icon.
Define the loops and finish defining the workflow.
When creating or modifying a workflow, you can copy a single node such as an action or condition and paste it in an appropriate location on the canvas. Copying a loop is similar. For more info, see Copy and paste a loop.
Copy and paste an action or condition:
Paste is not available if pasting is not allowed in the selected location. For example, you can't paste an action node on a condition's Else If branch.
By default, the name of an action node is incremented each time it is pasted, such as Send email - 2 and Send email - 3. To give the nodes more meaningful names when viewing the canvas, click each node and click Edit.
Copying a loop is similar to copying an action or condition. However, when copying a loop, these limitations apply:
Only the parent node, made up of Start Loop and End Loop, is copied. No child nodes within the loop are copied.
An End Loop can’t be copied individually.
Copy a loop:
Paste is not available if pasting is not allowed in the selected location. For example, you can only nest 2 levels of loops. If pasting a node would create a third level, Paste does not display.
The new parent node displays on the canvas, but there are no child nodes. By default, the name of the node is incremented each time it is pasted, such as Loopname - 2 and Loopname - 3. To give the parent nodes more meaningful names when viewing the canvas, click each one and click Edit.
Combine multiple parallel or conditional branches into a single branch in the workflow canvas.
To join branches, click on a branch node and drag a path to the target branch. Click and hold to see available join locations in your workflow.
To delete a join node, click the node, click the delete icon, and then click Delete to confirm.
Instead of building workflows from scratch, you can duplicate and modify existing workflows.
For info about workflow elements and Fusion SOAR, see Create a workflow.
Click Open menu︙ for the workflow you want to duplicate.
You can also click Open menu︙ for the workflow you want to duplicate on the Execution log tab or in the Execution details panel of the workflow.
Click Duplicate workflow.
The duplicated workflow with “Copy of” prefixed to the Workflow name opens in Fusion SOAR.
Update and rename the duplicated workflow.
For info about how to edit workflow elements in Fusion SOAR, see Edit a workflow element.
Click Save and exit to save the updated workflow.
You can export and import Fusion SOAR workflow definition files. With this ability, you have these options:
Manage your workflows using source control
Share workflows between CIDs more easily
To export a workflow definition:
Start an export from either of these locations:
The All workflows tab
Click Open menu for the workflow you want to export.
Click Export workflow.
Click the name of the workflow you want to export.
Click Open menu and then select Export workflow.
Enter a name and location for the workflow definition file.
Save the file.
The workflow definition is saved to a YAML file.
If the workflow contains email addresses, some of the addresses are changed to [email protected].
Customize the addresses for use with a given CID when you import the workflow through the Falcon console. Otherwise, if you plan to use the API to import the workflow, edit the YAML file now to use addresses with email domains that are valid for the CID where you plan to import the workflow. For more info about the API, see Fusion SOAR Workflow APIs.
If the workflow is triggered based on a schedule that includes an end date, edit the YAML file to remove the line that contains end_date. Otherwise, importing the file can cause an error if the date is in the past.
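Both pre-import edits described above (swapping in valid email domains and dropping any end_date line) can be scripted before an API import. This is a minimal sketch; the function name and the placeholder domain parameter are illustrative:

```python
# Sketch: prepare an exported workflow definition file for API import.
# The placeholder domain and target domain are illustrative assumptions.

def prepare_workflow_yaml(yaml_text: str, placeholder_domain: str, new_domain: str) -> str:
    cleaned_lines = []
    for line in yaml_text.splitlines():
        # Drop schedule end dates that may already be in the past.
        if "end_date" in line:
            continue
        # Rewrite placeholder email domains to a domain valid for the target CID.
        cleaned_lines.append(line.replace("@" + placeholder_domain, "@" + new_domain))
    return "\n".join(cleaned_lines)
```

This treats the file as plain text so no YAML library is required; for structural edits, parse the YAML instead.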
You can only import workflow definitions when these requirements are met:
An import might fail if external dependencies are not available in a given CID environment.
To import a workflow definition:
Click Create workflow.
For Flight Control environments, users with the Falcon Administrator role in the parent CID can choose which child CIDs the workflow applies to. Those workflows can trigger for any child CID, even if the users don’t have permissions in the child CIDs. However, users with the Workflow Author role in a parent CID can only create workflows that trigger for that parent CID: Those users aren’t able to create workflows for that CID’s child CIDs.
When applying a workflow to child CIDs, you have these options:
| Option | Description |
|---|---|
| All child CIDs | Applies the workflow to all current and future child CIDs. However, you can explicitly exclude some CIDs. |
| Specific CIDs | Applies the workflow only to the CIDs you specify. |
| Only current CID | Applies the workflow only to the current CID. |
If you define a workflow in a parent CID, the definition of the workflow, its execution log, and its audit log are only shown in the parent CID and not any of its child CIDs.
Click Import workflow and then click Next.
Click Upload workflow file, and then browse to and select the workflow definition file to use.
Click Import workflow.
Enter a unique name for the new workflow.
Click Import workflow.
If the import is successful, a page appears that allows you to customize the workflow.
Click Customize workflow.
If no changes are needed, click Continue.
If the workflow has actions that need configuration, you're guided through each item that you must configure.
When you complete the required steps, click Continue.
If the workflow includes a trigger based on an inbound webhook, enter the authentication information.
Click Save and exit.
The Fusion SOAR Issues panel shows warnings and errors for invalid workflows. Whether creating or editing a workflow, you can save your workflow even if you have warnings or errors. These changes commonly cause issues:
Changing the trigger type
Replacing an action node with a different action node, which creates an error that requires undefined properties to be removed or replaced
Deleting an action node whose outputs are referenced as inputs in downstream nodes, which results in undefined properties
You can edit elements while creating or editing a workflow.
In general, complete these steps:
Select an element on the canvas.
A panel opens so you can edit the element.
Make updates in the panel and click Next.
To exit the editing view for an element, click Cancel.
Click Save and exit to save your inactive workflow.
You can delete elements while creating or editing a workflow.
In general, complete these steps:
Hover over an element on the canvas to delete.
Click Delete .
Depending on how a workflow is set up, you might be able to run the workflow on demand. These workflows are available for running on demand:
On demand workflows
These workflows use an On demand trigger. For more info, see Workflow triggers.
Scheduled workflows
These workflows use a Schedule trigger. For more info, see Workflow triggers.
You can run scheduled workflows on demand instead of waiting for the next scheduled run.
If you’re using these types of workflows in Flight Control environments, be aware of these behaviors:
As with other workflows, you can choose which child CIDs an on-demand workflow applies to. The action to run the on-demand workflow is available only in the parent CID. Other workflows in the parent CID can use that action and run it against all the child CIDs. However, the action is not directly available to workflows in the child CIDs.
If an on-demand workflow has been applied to several child CIDs, whenever you click Execute workflow, you must select exactly one CID where the workflow will run.
Fusion SOAR provides several ways you can run a workflow on demand now or later, as explained in these sections:
Go to Fusion SOAR > Fusion SOAR > Workflows .
You can immediately run workflows that meet all of these requirements:
The Trigger column contains Scheduled or On demand
The Last modified by column contains an email address
The Status column shows On
To run such a workflow immediately, click Open menu and select Execute workflow.
For on-demand workflows that require input, you are prompted to provide that input. For example, this prompt requests a deviceID.
Go to Fusion SOAR > Fusion SOAR > Workflows and click the Execution log tab.
You can immediately run workflows that meet all of these requirements:
The Trigger column contains Scheduled or On demand
The Last modified by column contains an email address
To run such a workflow immediately, click Open menu and select Execute workflow.
For on-demand workflows that require input, you are prompted to provide that input. For example, this prompt requests a deviceID.
Go to Fusion SOAR > Fusion SOAR > Workflows and click the Execution log tab.
You can immediately run workflows that meet all of these requirements:
The Trigger column contains Scheduled or On demand
The Last modified by column contains an email address
To run such a workflow immediately, click View execution details , click Open menu , and select Execute workflow.
For on-demand workflows that require input, you are prompted to provide that input. For example, this prompt requests a deviceID.
Go to Fusion SOAR > Fusion SOAR > Workflows .
You can immediately run workflows that meet all of these requirements:
The Trigger column contains Scheduled or On demand
The Last modified by column contains an email address
The Status column shows On
To run such a workflow immediately, click the workflow name, click Open menu and select Execute now.
For on-demand workflows that require input, you are prompted to provide that input. For example, this prompt requests a deviceID.
When you create a workflow that includes an on-demand trigger, Fusion SOAR automatically creates an action based on that workflow. To start the on-demand workflow from another workflow, add that action to the other workflow.
For example, assume whenever you contain a device, you must also create a Jira ticket to follow up. You could create an on-demand workflow that does both the containment and the Jira ticket creation. Other workflows could then trigger the on-demand workflow rather than do the containment directly.
Call an on-demand workflow from a workflow:
Verify that the on-demand workflow that the action is based on is enabled.
For an action based on a workflow to be available, its workflow must be enabled.
Click Create workflow or click the name of the workflow where you want to add the action.
If you’re editing a workflow, click Edit.
In the workflow canvas, hover over the trigger, condition, or action where you want to add the action and then click the add icon to expand the menu.
Add the action for the on-demand workflow.
Define or edit the rest of the workflow as needed and save it.
If a workflow uses a Schedule trigger or an On demand trigger, in addition to running the workflow on demand from the Falcon console, you can use the /workflows/entities/execute API endpoint.
For more info about this API, see Fusion SOAR Workflow APIs.
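As a sketch under stated assumptions, this is one way an external script might assemble such a request. The endpoint path comes from this doc; the base URL, token handling, and the body field (deviceID) are illustrative, so verify them against the Fusion SOAR Workflow APIs reference:

```python
import json
import urllib.request

# Sketch: build (but don't send) a request that runs a workflow on demand.
# The base URL, bearer token, and body fields below are illustrative
# assumptions; the endpoint path is from this documentation.
BASE_URL = "https://api.crowdstrike.com"  # varies by CrowdStrike cloud

def build_execute_request(token: str, body: dict) -> urllib.request.Request:
    return urllib.request.Request(
        url=BASE_URL + "/workflows/entities/execute",
        data=json.dumps(body).encode(),
        headers={
            "Authorization": "Bearer " + token,
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

Once the token and body match your environment, the request can be sent with urllib.request.urlopen.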
Execute a Falcon Fusion SOAR on-demand workflow in the case workbench, in the On-demand workflows section of an applicable entity’s summary panel. For more info, see Case Investigation.
The Add trigger panel provides an Inbound Webhook option. When you use this option, Fusion SOAR generates a unique webhook URL for the given workflow. External systems can then use that URL to send JSON payloads and trigger the workflow automatically.
For more info, see Custom triggers based on inbound webhooks.
For info about requirements for these triggers related to subscriptions, CrowdStrike clouds, and roles, see Requirements.
Click Create workflow, select how to create the workflow, and continue to the workflow canvas.
In the Add trigger panel, click Inbound webhook.
Enter a name and optionally a description.
Select the HTTP method.
Select the authentication type.
Basic: Enter a username and password.
HMAC: The hash algorithm is automatically selected. Enter a secret, select the signature encoding, and enter the signature header name, which is the header in the request, such as X-Signature, that will contain the signature.
The message digest is used in forming the signature. The request body is always included in this digest. You can also include Timestamp and Message ID in the digest.
When you enable HMAC authentication, for each request from a tool outside of Fusion SOAR, you must use the same hash algorithm, secret, and encoding to compute the signature that you place in the specified header.
API key: Enter a key and select whether the key will be found in the request's header or body.
Complete the fields for the chosen authentication type.
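As a sketch of the sender-side HMAC computation described above, here is how an external tool might sign the request body, assuming SHA-256 and hex encoding (match whatever your trigger configuration shows):

```python
import hashlib
import hmac

# Sketch: compute the signature an external sender would place in the
# configured signature header (such as X-Signature). SHA-256 and hex
# encoding are assumptions; use the algorithm and encoding your trigger shows.
def sign_payload(secret: str, body: bytes) -> str:
    return hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
```

If Timestamp or Message ID are included in the digest, the sender must append them to the signed message in the same way Fusion SOAR does on its side.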
Optional. If you'd like to set a particular response body or response code, click Add advanced configuration and enter the desired info.
By default, the response body is empty and the response code is 200.
Optional. To improve the security of the trigger, configure a range of IP addresses that are allowed to invoke the webhook. Click Add advanced configuration and then Allowed IPs. Enter each value as a CIDR notation IP address with a prefix length, such as 192.0.2.0/24 or 2001:db8::/32, as defined in RFC 4632 and RFC 4291. Values are normalized to the network address, for example, 12.13.14.15/12 to 12.0.0.0/12.
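The normalization shown above can be reproduced with Python's standard ipaddress module, which may help when preparing allowed-IP values:

```python
import ipaddress

# Normalize a host address with a prefix length to its network address,
# as in the example above: 12.13.14.15/12 becomes 12.0.0.0/12.
net = ipaddress.ip_network("12.13.14.15/12", strict=False)
print(net)  # 12.0.0.0/12
```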
Click Generate URL.
Fusion SOAR creates the URL that you'll use in an external system to trigger the current workflow.
The generated webhook endpoint URL is specific to the workflow where its trigger was created. You can't use the URL with other workflows.
Copy the URL to immediately add to the external system or to save for later.
Optional. Create a JSON schema for the request payload to enforce a format for the payload.
You can listen for a live payload, enter a sample payload, or use a combination of both.
To view a history of webhook events and their output to use in creating the schema, click Execution history . You can filter the history for execution status and timeframe.
To accept any payload without defining its structure, you can enter an empty object, {}, as the sample payload. When you use this technique, the workflow execution log shows the entire webhook payload. Also, if you use this technique, you can use data transformation functions to extract or transform the webhook payload. For more info, see Fusion SOAR Data Transformation Functions.
Click Next to add the trigger.
Define the rest of the workflow.
You can change the trigger's name, description, authentication configuration, advanced configuration, and payload schema.
For info about the authentication options, see Create a trigger based on an inbound webhook.
Note: If you want a different webhook URL with the same trigger definition, create a new workflow using the desired trigger definition and generate a new URL.
Find and click the workflow.
Click Edit.
Click the trigger.
Make the desired changes.
Click Next.
If you have already created a trigger based on an inbound webhook and want to see the URL again to copy it:
Find and click the workflow.
Click the trigger.
A panel opens.
In the panel, locate the Webhook URL entry.
To inspect issues with a webhook, you have these options:
Webhook event history
Edit a workflow, click the trigger, and then click Execution history .
Execution log
Go to Fusion SOAR > Fusion SOAR > Workflows and click the Execution log tab.
Events in LogScale
Go to Investigate > Search > Advanced event search and run this query to get all of the webhook ingestion errors:
#repo = fusion | "#event_simpleName" = FusionWorkflowEvent | "webhook_ingest_log_type" = IngestError
Webhook Ingest Errors dashboard
For more info, see Investigate webhook ingest errors.
Possible issues and their causes:
Authentication: The request failed authentication.
Common causes:
Missing or incorrect API key
Invalid Basic Auth credentials
Invalid HMAC signature because of the wrong secret, the encoding, or the message content
Missing required headers, such as X-Signature, X-Timestamp, or X-Message-ID
Log subtype: Authentication
ParsingPayload: The request body could not be parsed.
Common causes:
Malformed JSON, possibly a syntax error
Incorrect Content-Type header; should be application/json
Empty or improperly encoded body
Log subtype: ParsingPayload
Internal: An unexpected error occurred during webhook processing.
Common causes:
Server-side exception or infrastructure issue
Workflow misconfiguration
Fusion service failure
Log subtype: Internal
Schema Binding Failure: Data available in a condition or an action does not match the schema.
Common causes:
Missing required fields, such as project.id
Incorrect field names or data types
Mismatched object or array structures
Replay Protection Failure (HMAC): The request was rejected because of a replay detection.
Common causes:
Expired or future-dated timestamp
Time sync issues between sender and Fusion
Missing Required Headers: A required header for validation was not included.
Common causes:
Missing signature header, X-Signature
Missing timestamp or message ID when HMAC replay protection is enabled
Unsupported HTTP Method: The request used an invalid HTTP method.
Common causes:
Sending GET instead of POST
Endpoint expecting JSON but receiving form data
Rate Limiting: The system rejected the request because of excessive request frequency.
Common causes:
Too many requests sent in a short period, which is rare in typical use
Misconfigured Webhook Trigger: The webhook trigger is not properly set up.
Common causes:
Missing or invalid webhook URL
Trigger was deleted or is inactive
Workflow is not enabled
To define the input and output of various actions, use JSON schema to indicate the expected formats. Similarly, for on-demand workflows that require input, use JSON schema to prompt for that input.
The schema is a JSON structure formatted according to draft 7 of the JSON Schema standard. For more info, see JSON Schema.
To chain actions together and build better conditions, you must understand the input and output of the actions you’re using. By viewing a schema, you see which fields are required for its action, the properties for those fields, and their descriptions.
If you're creating a workflow, click Create workflow, select how to create the workflow, continue to the workflow canvas, and add an action.
If you’re editing a workflow, find and click an existing workflow that has actions whose schemas you want to see. Click Edit and select the action with the schema you want to see.
Click View schema.
The schema viewer opens, showing a form-based version of the schema.
To view the raw JSON, click View JSON Schema.
The form-based schema builder shows the fields and their types. To see a field’s properties, click the field. If a field is an array, click View JSON Schema to see its properties.
This image shows the properties for the parentID field.
Each field in a schema can have numerous properties. Edit properties using the form-based schema builder or directly in the JSON. Use properties to provide context for a field, restrict inputs, and more.
Here are some of the basic properties you might encounter as you work with JSON Schema. For more info about the JSON Schema standard, see Introduction to JSON Schema.
| Property as seen in the JSON viewer | Corresponding property | Description |
|---|---|---|
| Name | name | Names the field. |
| Type | type | Defines the data type that a field must use. Each data type has specific properties. |
| Constant value | const | Restricts a property to a single value. Applicable across all data types. |
| Default | default | Sets the default value of the field. Fusion SOAR uses this value unless the user provides a different value. Applicable across all data types. |
| Enum | enum | Specifies a list of values a property can have. For example, this list validates detection severities: {"enum":["low","medium","high","critical"]} |
| Description | description | Indicates the purpose of the field and provides context to the end user when they are viewing the schema. Applicable across all data types. |
| Format | format | Specifies the format the data field must use. For more info, see Format options, including usage for CrowdStrike data types. |
| Maximum length | maxLength | Sets the maximum length of the string that can be entered. Applicable to string types only. |
| Minimum length | minLength | Sets the minimum length of the string that can be entered. Applicable to string types only. |
| Maximum value | maximum | Sets the maximum value for an integer. Applicable to integer and number types only. |
| Minimum value | minimum | Sets the minimum value for an integer. Applicable to integer and number types only. |
| Pattern | pattern | Specifies a regular expression pattern to ensure that a value matches a specific format, such as an email address or phone number. Applicable to string types only. |
| Title | title | Specifies the display label shown in forms that appear in the Falcon console. |
| Required | required | Indicates which child fields in an object are required. |
| Examples | examples | Defines reference values for the field. |
| Properties | properties | Specifies and validates the child fields of an object type. In the key-value pairs in the object, each key names a field and the corresponding value defines a JSON subschema to govern that field. |
| Items | items | Specifies and validates the child fields of an array type. In the key-value pairs in the array, each key names a field and the corresponding value defines a JSON subschema to govern that field. |
The format field supports these commonly used values. Not all of these values are in the JSON schema specification.
| Format value | Description |
|---|---|
| date | Date |
| date-time | Date time |
| domain | Domain |
| email | Email address |
| ipv4 | IP version 4 |
| ipv6 | IP version 6 |
| md5 | MD5 hash |
| sha256 | SHA256 hash |
| time | Time |
| uuid | UUID |
| url | URL |
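Several of these format values map to checks you can reproduce with Python's standard library. This is an illustrative sketch, not how Fusion SOAR validates fields internally:

```python
import ipaddress
import uuid
from datetime import date, time

# Illustrative stdlib checks mirroring a few of the format values above.
assert ipaddress.ip_address("192.0.2.1").version == 4      # ipv4
assert ipaddress.ip_address("2001:db8::1").version == 6    # ipv6
date.fromisoformat("2026-03-29")                           # date
time.fromisoformat("13:45:00")                             # time
uuid.UUID("12345678-1234-5678-1234-567812345678")          # uuid
```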
CrowdStrike also uses the standard format field to indicate the type of data a field represents, which is critical for some workflow actions. For example, to contain a device you need its sensor ID because the contain action requires a sensor ID as input. A sensor ID field is "type":"string", but you also need "format":"aid" in the JSON. As you build a workflow, you can only select the contain action if the data you are working with has a type with format "aid". So to build an on-demand workflow that takes a sensor ID and contains a device, the input JSON schema must set the format to "aid".
The format field supports these values that are specific to CrowdStrike to indicate types of data.
| Format value | Description |
|---|---|
| aid | Sensor ID |
| deviceTag | Device tag |
| hostGroupID | Host group ID |
| hostGroupName | Host group name |
| hostname | Host name |
| investigatableID | Detection ID |
| localfilepath | Local file path |
| oktaUserID | Okta user ID |
| platform | OS platform |
| responseUserID | User name |
| rtrFileName | RTR put file name |
While the form-based schema builder simplifies many aspects of editing a JSON schema, it does not support arrays of scalars. To define or edit an array of scalars, you must edit the JSON directly. However, you can use the schema builder with arrays of objects.
Here is an example of an array of sensor IDs.
{
  "type": "object",
  "properties": {
    "devices": {
      "description": "A list of sensor IDs",
      "type": "array",
      "title": "Sensor IDs",
      "items": {
        "format": "aid",
        "type": "string",
        "pattern": "^[A-Fa-f0-9]{32}$"
      }
    }
  },
  "required": [
    "devices"
  ]
}
Here's an example of JSON that complies with this JSON schema:
{
"devices": ["0302188e5dc9490e861af9c40bc23c15", "94e8963899f74ffdb330065738faa35a"]
}
However, this example JSON does not comply with the schema: neither ID is the required 32 hexadecimal characters, and the second ID contains special characters.
{
"devices": ["bc23c15", "%7!4(ffdb33006a35a)"]
}
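You can check why each example passes or fails by applying the schema's pattern directly. A small illustrative sketch:

```python
import re

# The schema's pattern for a sensor ID: exactly 32 hexadecimal characters.
AID_PATTERN = re.compile(r"^[A-Fa-f0-9]{32}$")

def all_valid_aids(devices):
    return all(AID_PATTERN.fullmatch(d) for d in devices)

print(all_valid_aids(["0302188e5dc9490e861af9c40bc23c15",
                      "94e8963899f74ffdb330065738faa35a"]))  # True
print(all_valid_aids(["bc23c15", "%7!4(ffdb33006a35a)"]))    # False
```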
To set a field to populate with valid options, use the x-cs-pivot property. This property specifies a set of data to look up as someone, for example, configures a field while using a workflow or adds an action based on an event query to a workflow.
To get various types of data, here are entity values you can use with x-cs-pivot:
| Entity value | Description |
|---|---|
| devices.groups.name | Host group names |
| devices.hostname.raw | Host names in the CID |
| devices.platform_name | Platforms used by the CID's hosts |
| patterns.technique | MITRE techniques |
| patterns.tactic | MITRE tactic names |
| users.email | Falcon user emails |
| users.id | Falcon user IDs |
| devices.tags | Device tags |
By setting up a populated list in the trigger for an on-demand workflow, you can simplify creation of the workflow and running the workflow.
This schema looks up host groups for the current CID by using x-cs-pivot with the devices.groups.name entity value:
{
"type": "object",
"properties": {
"hostGroup": {
"type": "string",
"description": "A Crowdstrike host group name",
"format": "hostGroupName",
"title": "Host Group",
"x-cs-pivot": {
"entity": "devices.groups.name"
}
}
},
"required": [
"hostGroup"
]
}
Assume you set up an on-demand workflow and include that JSON in the schema for the On demand trigger, as shown here.
Then, you can add several conditions to the workflow, creating different conditions by setting Parameter to Host Group and selecting different host groups from the populated list.
In addition, whenever someone runs the workflow, they are prompted to provide the required input, which they then select from the list.
For example, if an event query uses a parameter such as ?hosts to allow user input for the host and the input schema has x-cs-pivot defined as in the example, the workflow author can choose a host name from a dropdown list when adding the action based on the event query to a workflow. The list shows all the hosts in the CID. For more info about event queries, see Event queries, or saved searches, as workflow actions.
{
"type": "object",
"$schema": "https://json-schema.org/draft-07/schema",
"required": [
"hosts"
],
"properties": {
"hosts": {
"type": "string",
"title": "Hosts",
"x-cs-pivot": {
"entity": "devices.hostname.raw"
}
}
},
"description": "Generated request schema"
}
To accept input when a user provides at least one of several fields, use anyOf in your input JSON schema.
In this example, focusing on the anyOf portion, the entry is accepted if the user enters any of the listed fields:
"anyOf": [ { "required": [ "HostNames" ] }, { "required": [ "tags" ] }, { "required": [ "HostGroups" ] }, { "required": [ "aids" ] } ]
In the context of the full input schema, the anyOf portion is at the end of the schema:
{
"properties": {
"HostGroups": {
"items": {
"properties": {
"HostGroup": {
"type": "string",
"title": "Host group",
"format": "hostGroupName",
"x-cs-pivot": {
"entity": "devices.groups.name"
}
}
},
"type": "object"
},
"type": "array"
},
"HostNames": {
"items": {
"properties": {
"Hostname": {
"type": "string",
"title": "Host name",
"format": "hostname",
"x-cs-pivot": {
"entity": "devices.hostname.raw"
}
}
},
"type": "object"
},
"type": "array"
},
"aids": {
"items": {
"properties": {
"aid": {
"pattern": "^[A-Fa-f0-9]{32}$",
"type": "string",
"title": "Inputted Agent Id",
"format": "aid"
}
},
"type": "object"
},
"type": "array",
"title": "Agent Ids"
},
"tags": {
"items": {
"properties": {
"tag": {
"type": "string",
"title": "Grouping tag",
"x-cs-pivot": {
"entity": "devices.tags"
}
}
},
"type": "object"
},
"type": "array"
}
},
"type": "object",
"anyOf": [
{
"required": [
"HostNames"
]
},
{
"required": [
"tags"
]
},
{
"required": [
"HostGroups"
]
},
{
"required": [
"aids"
]
}
]
}
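The acceptance behavior of the anyOf portion above can be sketched in a few lines. This mirrors only the anyOf rule and is not a full JSON Schema validator:

```python
# Sketch of the anyOf acceptance rule from the schema above: the input
# is accepted when at least one of these keys is present.
ANY_OF_KEYS = ("HostNames", "tags", "HostGroups", "aids")

def satisfies_any_of(payload: dict) -> bool:
    return any(key in payload for key in ANY_OF_KEYS)

print(satisfies_any_of({"aids": [{"aid": "0302188e5dc9490e861af9c40bc23c15"}]}))  # True
print(satisfies_any_of({"unrelated": 1}))  # False
```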
When you create a workflow that includes an on-demand trigger, Fusion SOAR automatically creates an action based on that workflow. You can then use that action in other workflows to start the original workflow. These dependencies can affect how you edit and delete the original workflow.
Find the workflow to edit and click Open menu for that workflow.
Select Edit workflow.
If any other workflows use the action based on the on-demand workflow, a list of those workflows appears.
Choose one option:
If you’re not going to edit the trigger, click Proceed.
If you are going to edit the trigger, in the list of workflows, click Duplicate.
The workflow is duplicated and shown in the current browser tab. Edit the duplicate workflow.
Complete your workflow edits and save the changes.
If the action based on the on-demand workflow is used in any other workflows, you must remove that action from the other workflows before you can delete the on-demand workflow.
Find the workflow to delete and click Open menu for that workflow.
Select Delete workflow.
If any other workflows use the action based on the on-demand workflow, a list of those workflows appears.
Remove the action from one of those workflows:
Click a workflow name in the list.
The workflow opens in a new tab.
Click Edit.
Hover over the action to delete and click Delete .
A warning appears.
Click Delete.
Click Save and exit.
Add a comment to describe the change to the workflow for its version history.
Click Update workflow.
The All workflows tab opens again. You can close this browser tab.
Go back to the original browser tab and click Go back.
The All workflows tab opens again.
Continue to remove the action from other workflows by repeating steps 2 and 3.
When the action is no longer used in any workflow, find the on-demand workflow and click Delete workflow to delete it.
For more info about event queries, including example use cases and limitations, see Event queries, or saved searches, as workflow actions.
For info about related subscriptions, roles, and data sources, see Requirements.
For more info about working with these actions, see these topics:
Click Create workflow, select how to create the workflow, and continue to the workflow canvas, or click the workflow where you want to add the action.
If you're editing a workflow, click Edit.
Hover over the trigger, condition, or action where you want to add an action and then click the add icon to expand the menu.
Select Action .
In the Add action panel, click Create event query.
Select Workflow-specific query or CID-specific query.
For info, see Event queries, or saved searches, as workflow actions.
Click Next.
The query builder appears.
Configure the query:
Set Data view to the data source to query. The available sources depend on your subscriptions and role. For more info, see Requirements. Here are the options:
All: All event data in Falcon, Forensics, IT Automation, and XDR
Falcon: Endpoint event data and sensor events
Forensics: Triage data collected by Falcon Forensics
IT Automation: Event data generated by Falcon for IT
Third Party: Event data generated by integrated third parties
Enter a name for the query in Query name.
Optional. Enter a description for the query to help you and others later understand the purpose of the query.
Optional. To provide test data to run the query against, click Upload test data.
If you’re creating an event query action for data that doesn’t yet exist, you can upload test data in a JSON file (up to 20,000 records) to verify the query and create the action. The file must use one of these formats:
One pair of brackets around one or more pairs of braces:
[
{ },
{ },
{ }
]
One or more pairs of braces, separated by new lines:
{ }
{ }
{ }
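A script that needs to produce or consume the same test data can accept either layout. A minimal sketch with an illustrative helper name, assuming one JSON object per line in the newline-delimited case:

```python
import json

# Sketch: parse test data in either accepted layout (a JSON array of
# objects, or newline-delimited objects). The helper name is illustrative.
def load_test_records(text: str) -> list:
    text = text.strip()
    if text.startswith("["):
        return json.loads(text)  # a single array of objects
    return [json.loads(line) for line in text.splitlines() if line.strip()]
```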
Whenever you upload data, click Run to refresh query results.
Enter your query and run it, modifying it and running it until you get the desired results.
For info about the query language, see Get Started with CrowdStrike Query Language.
Note: Fields with names that contain special characters, such as @ and #, are removed from the event query output. If you have characters removed, create a new field in the query using the assignment operator. For example: myTime := @timestamp. Then, reference the new field so that the schema uses the new field name. To select the fields included in the output, you can use the table() function. For more info, see table().
Click Continue.
A new window opens with these tabs: Query, Input schema, and Output schema.
Fusion SOAR generates the schemas based on the query and its results when the Automatically generate schema option is selected.
For more info about these schemas, see Customize an event query action’s input and output.
Optional. To adjust either of the schemas to better define the requirements you want for the query’s input or output, click Input schema or Output schema and then . When done, click Save changes.
Note: For the input schema, you can only edit the metadata. You can’t edit the schema.
Click Add to workflow.
By default, actions based on CID-specific event queries go in the Other (Custom, Foundry, etc.) group. Later, if you want to add the action to another workflow, search for the action by name or browse for it in this group.
Optional. Actions based on event queries place their search results in their output for use with, for example, the Send email action and Jira ticket creation. The results are always available in JSON. If you would also like the results in CSV, in the Add action panel, select Export to CSV. When you choose this option, you can select the exact fields to include in the result using the CSV header fields to export field.
For more info, see Add search results as an attachment.
Click Next.
Define or edit the rest of the workflow as needed, resolve any warnings or errors, and save it.
If you're creating a workflow, click Create workflow, select how to create the workflow, and continue to the workflow canvas.
If you're editing a workflow, click the workflow you want to edit to include the action and then click Edit.
Hover over the trigger, condition, or action where you want to add an action and then click the add icon to expand the menu.
Select Action .
In the Add action panel, find the action based on the query. You can search for it by name or browse for it in the Other (Custom, Foundry, etc.) group. In addition, all actions based on event queries include Event Query at the beginning. Because of that convention, you can search for Event Query to find all the actions based on event queries.
Select the action.
Click Next.
Create a condition to confirm the query found matches.
Be sure that the output includes all the fields you need, especially fields you plan to use in conditions or as action input. See Verify the output fields of an event query action.
Also, fields marked as required must be in the output. The action fails with a schema validation issue if any of the required fields are not in the output. Only mark a field as required if you can guarantee the field is always in the output. If the field is dynamically populated, consider leaving all fields as optional and using a condition to check that the field has a value before using the field. See Verify the event query action output is nonempty before acting on it.
Click Save and exit.
Add a comment to describe the change to the workflow for its version history.
Click Update workflow.
When you update an action based on a query, you update its query configuration to change its data source, name, the query itself, or its schemas. You can't change the type of the query, which is either workflow-specific or CID-specific.
If an action is used in multiple workflows, edits you make for one workflow might not be suitable for another. The steps below guide you through this scenario.
To edit an action based on a query:
Go to Fusion SOAR > Fusion SOAR > Workflows and click a workflow that uses the action.
In the workflow, click Edit.
Click the action to edit.
The Action panel shows the action.
Click Manage event query.
The event query manager opens.
Note: By clicking Back instead of Manage event query, you can replace the action with another action.
Click Edit.
Important: If other workflows use the action, a dialog provides the names of those workflows. If you click Proceed, you can still edit the query configuration. If your changes break any of the affected workflows, you are prompted after you click Continue in the next step to cancel, remove the dependencies, or save the changes in a duplicate.
Edit the query configuration.
Change one or more of the configuration settings:
Set Data view to a different data source to query. The available sources depend on your subscriptions and role. For more info, see Requirements. Here are the options:
All: All event data in Falcon, Forensics, IT Automation, and XDR
Falcon: Endpoint event data and sensor events
Forensics: Triage data collected by Falcon Forensics
IT Automation: Event data generated by Falcon for IT
Third Party: Event data generated by integrated third parties
Change the name for the query in Query name.
Optional. Enter or change a description for the query to help you and others later understand the purpose of the query.
Optional. To provide test data to run the query against, click Upload test data.
If you’re creating an event query action for data that doesn’t yet exist, you can upload test data in a JSON file (up to 20,000 records) to verify the query and create the action. The file must use one of these formats:
One pair of brackets around one or more pairs of braces:
[
{ },
{ },
{ }
]
One or more pairs of braces, separated by new lines:
{ }
{ }
{ }
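A sketch of how the two accepted shapes can be parsed into the same record list (this is illustrative parsing, not the uploader's actual validation logic):

```python
import json

def parse_test_data(text: str):
    """Parse event-query test data in either accepted shape:
    a JSON array of objects, or one JSON object per line."""
    text = text.strip()
    if text.startswith("["):
        # One pair of brackets around one or more pairs of braces
        records = json.loads(text)
    else:
        # One or more pairs of braces, separated by new lines
        records = [json.loads(line) for line in text.splitlines() if line.strip()]
    if not all(isinstance(r, dict) for r in records):
        raise ValueError("each record must be a JSON object")
    if len(records) > 20000:
        raise ValueError("test data is limited to 20,000 records")
    return records

# Both shapes yield the same records:
array_form = '[{"aid": "1"}, {"aid": "2"}]'
ndjson_form = '{"aid": "1"}\n{"aid": "2"}'
assert parse_test_data(array_form) == parse_test_data(ndjson_form)
```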
Whenever you upload data, click Run to refresh query results.
Change the query and run it, iterating until you get the desired results.
For info about the query language, see Get Started with CrowdStrike Query Language.
Note: Special characters, such as @ and #, are removed from the event query output. If characters are removed from a field name, create a new field in the query using the assignment operator, for example: myTime := @timestamp. Then reference the new field so that the schema uses the new field name.
Tip: To control which fields appear in the output, use the table() function. For more info, see table().
Click Continue.
A new window opens with these tabs: Query, Input schema, and Output schema.
Fusion SOAR generates the schemas based on the query and its results.
For more info about these schemas, see Customize an event query action’s input and output.
Optional. To adjust either of the schemas to better define the requirements you want for the query’s input or output, click Input schema or Output schema and make your changes. When done, click Save changes.
Note: For the input schema, you can only edit the metadata. You can’t edit the schema.
For more info, see Add search results as an attachment.
Click Next.
Click Save and exit.
Add a comment to describe the change to the workflow for its version history.
Click Update workflow.
By default, actions based on CID-specific event queries go in the Other (Custom, Foundry, etc.) group. Later, if you want to add the action to another workflow, search for the action by name or browse for it in this group.
You can remove an action based on a CID-specific event query from a workflow, leaving the action available to other workflows.
If you prefer to delete an action so that it’s no longer available, see Delete an action based on a CID-specific event query.
Go to Fusion SOAR > Fusion SOAR > Workflows and click the workflow with the action to remove.
In the workflow, click Edit.
Hover over the action to delete and click Delete.
A warning appears highlighting any nodes affected by the deletion.
Click Delete.
Click Save and exit.
Add a comment to describe the change to the workflow for its version history.
Click Update workflow.
With an action based on a workflow-specific query, deleting the action is equivalent to removing the action from the workflow. There is no option to remove the action from the workflow while preserving the action for use in other workflows.
Click the workflow that uses the action you want to delete.
Click Edit.
Hover over the action to delete and click Delete.
A warning appears.
Click Delete.
Click Save and exit.
Add a comment to describe the change to the workflow for its version history.
Click Update workflow.
You can delete an action based on a CID-specific event query so that it is no longer available to any workflow. If the action is used in multiple workflows, you must first remove the action from all workflows that use the action.
If you only want to remove an action from a workflow without deleting the action, see Remove an action based on a CID-specific event query from a workflow.
Use the Actions filter to show only the workflows that use the action you want to delete.
For each workflow with the action, click Open and then select Edit workflow and remove the action from that workflow:
Hover over the action to delete and click Delete.
A warning appears.
Click Delete.
Click Save and exit.
Add a comment to describe the change to the workflow for its version history.
Click Update workflow.
With the action no longer in any workflow, you can delete the action.
To delete the action:
Click Create workflow.
You only need this workflow temporarily. You will discard the workflow after you delete the action.
Start to define a workflow so that you can add an action.
In the Add next panel or on the workflow canvas, click Action.
Click Manage event query.
The event query manager opens.
Click Delete.
You can now discard this temporary workflow.
Click All workflows and then Discard workflow.
With some plugins, such as Jira and ServiceNow, you can add attachments to the tickets.
Event queries place their search results in their output.
If you’re using either of these plugins, you can attach the results to your tickets.
For actions created before October 1, 2024, the results are in the File info field of their output.
For actions created after October 1, 2024, the results are in the new JSON file field of their output. For these actions, you can also get the results in CSV format, which is then available in the new CSV file field in the action's output. To get the CSV format, after you add an action to a workflow, select the Export to CSV option in the Action panel. When you select this option, the CSV header fields to export field opens. You can use this field to select the exact header fields you want in the CSV file. If you don't select any fields in CSV header fields to export, the CSV file contains all of the header fields.
If an action based on an event query was created after October 1, 2024, the Export to CSV option is also available when editing that action.
For info about the roles needed to create and overwrite lookup files, see Requirements.
These are the related actions:
Get lookup file metadata
Use this action to collect info to check whether a file exists before you use the create or overwrite actions.
Conditions that use this metadata can use built-in parameters or expressions that you define using data transformation functions. For more info about these functions, see Fusion SOAR Data Transformation Functions.
Create lookup file
Overwrite lookup file
Fusion SOAR only shows lookup files created or overwritten in Fusion SOAR. However, Next-Gen SIEM shows its lookup files and any lookup files you create in Fusion SOAR.
When you create or overwrite a lookup file, you configure these items:
Repository or view
Select the repository or view that corresponds to the repository or view that the related query action is using.
Name
The name must end with .csv or .json.
Also, the name must be unique within the repository or view you selected.
To ensure uniqueness, you can insert a variable, such as Workflow_execution_ID, in the name.
The name must be at least 5 characters but not more than 100 characters.
Content type
Define the lookup file by selecting a file created by a previous action, by specifying a variable, or by entering plain text inline.
The accepted formats are CSV and JSON.
The maximum size file that you can upload is 10 MB.
The maximum size for the value of a variable is 10 MB.
The maximum amount of text that you can enter inline is about 1 MB.
The JSON formats accepted are the same as the JSON formats accepted by Falcon LogScale. Here are examples of those formats:
Object-based example
{
"1": { "name": "chr" },
"2": { "name": "krab" },
"4": { "name": "pmm" },
"7": { "name": "mgr" }
}
Array-based example
[
{ "userid": "1", "name": "chr" },
{ "userid": "2", "name": "krab" },
{ "userid": "4", "name": "pmm" },
{ "userid": "7", "name": "mgr" }
]
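The two formats carry the same records; a short Python sketch shows the correspondence (the userid field name mirrors the array-based example above):

```python
def object_to_array(obj: dict, key_field: str) -> list:
    """Convert the object-based lookup format to the array-based one,
    folding each top-level key into the given field name."""
    return [{key_field: k, **v} for k, v in obj.items()]

object_based = {
    "1": {"name": "chr"},
    "2": {"name": "krab"},
}
array_based = object_to_array(object_based, "userid")
# array_based now holds one object per record, as in the array-based example
```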
Example
Assume we have log entries that contain status codes for a web server. These codes are in the field named status. To provide a name field to match names to the status codes, we could upload a lookup file named status_codes.csv with content that corresponds to this table:
| code | name |
| 200 | OK |
| 400 | Bad Request |
| 401 | Unauthorized |
| 500 | Internal Server Error |
To use that lookup file in a query, the query would then include a line like this one:
groupby([status]) | match(file="status_codes.csv", column=code, field=status, include=name)
When the query is run, the results might look something like this:
| status | _count | name |
| 200 | 777 | OK |
| 400 | 1 | Bad Request |
| 500 | 10 | Internal Server Error |
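The enrichment that match() performs can be emulated in Python to see what the join produces (a sketch of the result, not how LogScale executes the query):

```python
from collections import Counter

# Lookup table corresponding to status_codes.csv
lookup = {
    "200": "OK",
    "400": "Bad Request",
    "401": "Unauthorized",
    "500": "Internal Server Error",
}

def enrich(events):
    """Emulate groupBy([status]) | match(..., include=name):
    count events per status, then attach the looked-up name."""
    counts = Counter(e["status"] for e in events)
    return [
        {"status": s, "_count": n, "name": lookup.get(s)}
        for s, n in counts.items()
    ]

events = [{"status": "200"}] * 3 + [{"status": "500"}]
# enrich(events) pairs each status with its count and looked-up name
```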
Make sure your event query action provides the output you expect. Verifying the output is particularly important when you plan to use it in a condition or loop or as input to another action.
You can check the output at various times:
When you're creating an action based on an event query, after you click Continue, click Output schema.
When you're editing an action based on an event query, click Output schema.
Go to Fusion SOAR > Fusion SOAR > Workflows and click a workflow that uses the action.
In the workflow, click Edit.
Click the action to show the Action panel.
For more info about JSON schema, see Manage action input, action output, and on-demand triggers.
Every event query action includes an Event count field. This field indicates how many events the query found.
Before acting on the output of an event query action, verify the query found at least one event by setting a condition on the Event count field.
In the condition, use these settings:
Parameter: Event count
Operator: is greater than or equal to
Value: 1
The following image shows these settings.
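The guard can be sketched as a simple predicate (the event_count key here is an illustrative stand-in for the action's Event count output field):

```python
def query_found_events(action_output: dict) -> bool:
    """Condition: Event count is greater than or equal to 1."""
    return action_output.get("event_count", 0) >= 1

assert query_found_events({"event_count": 3})
assert not query_found_events({"event_count": 0})
```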
To build conditions or actions based on the results of an event query, you must add a loop to iterate through the search results. For info about looping, see Looping in workflows.
For info about how to find fields to loop over in the search results, see Verify the output fields of an event query action.
In the example below, we pass each instance of the AID returned in the event query results to the Get connected USB drives action’s Device ID field.
Fusion SOAR generates schemas for the action’s input and output based on the query and its results. You can modify these schemas to better suit your needs. Use the input schema to define requirements for your action’s input and to help users of the action be successful when using the action. For example, a schema can validate input or help the user understand the variables to pass when using the action. Similarly, the output schema helps the user of the action understand what output to expect from the action.
Consider this example query:
#event_simpleName = SensorHeartbeat
// Convert timestamp to epoch time
| formatTime(format="%Q", field=@timestamp, as="epoc_timestamp")
// Compare today and last time AgentOnline event was received
| last_seen := ((now() - epoc_timestamp) / 86400000)
| last_seen := math:ceil(last_seen)
| last_seen > ?last_seen
| table([aid, ComputerName, last_seen])
This query returns a table of all the hosts that were last seen more than the specified number of days ago, based on the ?last_seen syntax. Because Fusion SOAR derives input and output schemas from the search query and search results, you can modify those schemas to be more specific. In this example, the input schema is the last_seen field, which is an integer type.
Here’s the input schema that Fusion SOAR derived:
To customize the properties of the field, use the JSON Schema editor. In this case, we are going to make these changes:
Default value: Change from 2 to 7
Description: Add a prompt for those who use this action, Input the number of days
Minimum number of days: Prevent values less than 1
Maximum number of days: Prevent values greater than 90
For more info about JSON schema, see Manage action input, action output, and on-demand triggers.
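Under those constraints, the customized field and a validation check might look like this sketch (plain Python standing in for the JSON Schema keywords; the last_seen field name comes from the query above):

```python
# Sketch of the customized input-schema properties for last_seen
last_seen_schema = {
    "type": "integer",
    "default": 7,                              # changed from 2
    "description": "Input the number of days",  # prompt for users of the action
    "minimum": 1,                               # prevent values less than 1
    "maximum": 90,                              # prevent values greater than 90
}

def validate_last_seen(value) -> bool:
    """Check a value against the minimum/maximum constraints."""
    s = last_seen_schema
    return isinstance(value, int) and s["minimum"] <= value <= s["maximum"]

assert validate_last_seen(7)
assert not validate_last_seen(0) and not validate_last_seen(91)
```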
Here’s the modified input schema:
For the output, the output schema includes the column headers from the table produced by the query: aid, last_seen, and ComputerName. Certain Fusion SOAR actions require some input fields to be in certain formats. For this schema, we change the format for the aid field to Sensor ID. Then the action’s output can be used as input to actions that require a format of Sensor ID, such as Contain Device, Get Device Details, and Add Device to Watchlist.
Here’s the output schema that Fusion SOAR derived:
Here’s the modified output schema:
To edit an action's schemas, edit the action as explained in Edit an action based on an event query.
To limit the output of a query, you can use filters in the query itself. If you don’t use filters though, Fusion SOAR generates the output schema using all the fields returned in the query. You can then trim fields that aren’t useful to you from the output schema. To trim the schema, use either the JSON schema form builder or click JSON Schema editor.
Consider this example query:
#event_simpleName = ActiveDirectoryAccountPasswordUpdate
The query returns ActiveDirectoryAccountPasswordUpdate events. Fusion SOAR produces the following output schema:
Reducing the fields, the modified schema has only these fields:
To edit an action's schemas, edit the action as explained in Edit an action based on an event query.
For more info about JSON schema, see Manage action input, action output, and on-demand triggers.
Cloud HTTP Request
CrowdStrike HTTP Request
Send API requests to CrowdStrike API endpoints using your tenant’s authentication context.
On-Premises HTTP Request
Send API requests to internal or restricted network endpoints through a configured on-premises host group.
When you use these actions, you can take advantage of inline testing, dynamic variable injection, and conditional branching to build powerful, data-driven workflows. These actions are available within Fusion SOAR without requiring a Foundry app or app templates.
For info about the requirements to use these actions, see Requirements.
In a workflow, at the point where you want to make an API call, add the desired action. In the Add action panel, a list of Cloud HTTP Request action templates is organized by vendor and use case. Select a template or click Create HTTP request to create a custom HTTP request action from scratch.
If you are adding the CrowdStrike HTTP Request action and plan to use API key authentication, you must complete these steps before you add the action:
Determine the API endpoint to use
Review the OpenAPI spec for your cloud: US-1 | US-2 | EU-1 | US-GOV-1 | US-GOV-2
Find the API endpoint.
Make note of the header for the section that contains that endpoint. You'll use the header name as the API scope in the next step. For example, to use the /devices/combined/devices/v1 endpoint, you would use its section header, hosts, for the API scope.
Create an API key
In the Falcon console, go to Support and resources > API clients and keys.
Click Create API client.
Enter a name and description, and then select the permissions for the APIs you want to access.
For example, to make a request for the devices API endpoints, select Read in the Hosts row.
Copy the API client ID and secret to use when you configure the action.
To configure the desired action, enter values for these fields:
Authentication:
When choosing an authentication option, you can create a new authentication, use an existing one, or use no authentication.
When you create an authentication, that authentication is then available for subsequent actions.
After you configure an authentication, you cannot change it.
Tip: When you use API key authentication, you can add a prefix to the secret key. You must add a prefix if the API requires that the key have a prefix of Bearer or token in the Authorization header. Add this prefix in the API secret key field, following these examples:
Bearer <your_API_key>
or
token <your_API_key>
When you use OAuth 2.0 Auth Code authentication, you'll need to log in to the third-party API specified by the authorization URL when you click Grant access. If that API requires a redirect URI when setting up your credentials, use the URI that corresponds to your CrowdStrike cloud:
US-1: https://api.crowdstrike.com/webhooks/53189b3998004bcb95973739717b8291/v1
US-2: https://api.us-2.crowdstrike.com/webhooks/95ff048d66304364a421b0fd0d1bbcd4/v1
EU-1: https://api.eu-1.crowdstrike.com/webhooks/5c1d08398c894f998eee6f0e66639411/v1
Configure a response policy with Real Time Response (RTR) enabled. For more info, see Fusion SOAR actions that use RTR.
Static host group
Required. Because the API calls might handle sensitive data, assign a static host group to execute the request securely.
Use only static host groups (with explicit hostnames or IDs) that are secured, have limited access, and meet both security and compliance controls for monitoring and auditing.
Also ensure proper firewall rules are configured to allow communication between the host and the target API.
As a best practice, limit these host groups to no more than 20 hosts.
For more info, see Host and Host Group Management and Falcon Firewall Management.
Trust any certificate (insecure)
Optional.
Proxy URL
Optional. Enter the URL of a proxy server used in your environment to route HTTP requests. Here are the valid formats:
http://<host>:<port>
https://<host>:<port>
private_key_jwt authentication
Set the HTTP method, such as GET, POST, PUT, or DELETE, and enter the API endpoint URL.
Body:
Depending on the resource being requested, you might need to define the body to describe the resource being created or modified. To understand what is needed, see the documentation for the API you are using.
For the request body, you can select JSON, Plain text, CSV, or No request body.
Headers:
Headers provide additional information about the resource being fetched or about the client making the request.
You can enter variables by clicking {x}.
Query:
Depending on the resource being requested, you might need to define query parameters. To understand what is needed, see the documentation for the API you are using.
These parameters are appended to the end of the request URL and appear after a question mark (?) with a key and value. Multiple query parameters are separated by ampersands (&).
You can enter variables by clicking {x}.
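Query parameter assembly can be sketched with Python's standard library (the filter and limit parameters here are illustrative values, not a required configuration):

```python
from urllib.parse import urlencode

# Endpoint from the OpenAPI spec example earlier; parameters are illustrative
base_url = "https://api.crowdstrike.com/devices/combined/devices/v1"
params = {"filter": "platform_name:'Windows'", "limit": 100}

# Parameters follow the question mark; pairs are joined with ampersands
request_url = base_url + "?" + urlencode(params)
```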
Response:
Depending on the method, URL, and query parameters, the response from the API includes a header and a body. To understand the response, see the documentation for the API you are using. The header contains information about the response, such as the HTTP response code, content-type, and character encoding. The body contains the content of the resource being requested.
The response size can be up to 10 MB.
To manage the data in the response, you can generate a schema from a sample response payload or you can manually define a schema based on the expected structure.
Advanced Configuration:
To adjust the request timeout or the retry attempts, click Add advanced configuration.
Timeout
You can set timeout values for the action itself and for each retry of the HTTP request in the action.
By default, the action times out if it doesn't get a response within 30 seconds.
Retry Logic
For the retry logic, the Delay strategy value determines how the delay increases from the Initial delay value that you specify. Values are as follows:
Fixed
The time between the attempts is always the same.
For example, for an initial delay value of 5, the delays are always 5 seconds apart.
So the attempts are at 5 seconds, 10 seconds, 15 seconds, 20 seconds, and 25 seconds.
Linear
The time between the attempts increases by the initial delay value each attempt.
For example, for an initial delay value of 5, the subsequent delays are 10, 15, 20, and 25 seconds apart.
So the attempts are at 5 seconds, 15 seconds, 30 seconds, 50 seconds, and 75 seconds.
Exponential
The time between the attempts doubles each attempt.
For example, for an initial delay value of 5, the subsequent delays are 10 seconds, 20 seconds, 40 seconds, and 80 seconds apart.
So the attempts are at 5 seconds, 15 seconds, 35 seconds, 75 seconds, and 155 seconds.
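The three strategies can be expressed as a small schedule calculator that reproduces the attempt times from the examples above (a sketch of the arithmetic, not the Fusion SOAR implementation):

```python
def retry_schedule(strategy: str, initial: int, attempts: int) -> list:
    """Return cumulative attempt times (in seconds) for each delay strategy."""
    times, elapsed, delay = [], 0, initial
    for _ in range(attempts):
        elapsed += delay
        times.append(elapsed)
        if strategy == "linear":
            delay += initial   # next gap grows by the initial delay
        elif strategy == "exponential":
            delay *= 2         # next gap doubles
        # "fixed": the gap stays the same
    return times

assert retry_schedule("fixed", 5, 5) == [5, 10, 15, 20, 25]
assert retry_schedule("linear", 5, 5) == [5, 15, 30, 50, 75]
assert retry_schedule("exponential", 5, 5) == [5, 15, 35, 75, 155]
```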
To retry only for certain status code values, select the desired values in Retry on status.
After you set up the request, verify that the request works and returns the expected response by clicking Test.
When you are satisfied that the request works as expected, click Next to add the action to the workflow.
To help you troubleshoot executions of workflows with this action, the execution log shows the response payload and the status code.
The Cloud HTTP Request and On-Premises HTTP Request actions use these egress IP addresses.
| CrowdStrike Cloud | Egress IP Addresses |
| US-1 | |
| US-2 | |
| EU-1 | |
| US-GOV-1 | |
Fusion SOAR can write data, such as output from an RTR script or an HTTP-based action, by using the Write to log repo action. The Write to log repo action saves output from either an On demand trigger or an action. You can then query the data through Investigate > Search > Advanced event search or use the data in other workflow actions.
For example, you could use this feature to write the output of an RTR script to a repository to access and search later.
To query the data interactively, go to Investigate > Search > Advanced event search and set the repo to Fusion by using this text as the first line in your query:
#repo = fusion
Then define and run your query. For more info, see Advanced Event Search.
Alternatively, later actions, either in the same workflow or other workflows, can use an event query with the Data view set to All to process that data. If you're querying the data within the same workflow and the data is not yet available, add a Sleep action. For more info, see Event queries, or saved searches, as workflow actions.
In Flight Control environments, workflows in a parent CID can write data into the parent CID’s own data view but not to any of the data views in the child CIDs. However, workflows in a parent CID can query data in the data views in the child CIDs.
Create or edit a workflow.
Set up the data to save. You have these options:
The workflow uses an On demand trigger followed immediately by a Write to log repo action.
In the schema builder, add a property, such as raw_json, of type string.
Click the property to show its basic properties.
Set Format to Raw JSON string.
For one record:
{"foo":"bar"}
For multiple records:
[{"a":"b"},{"c":"d"}]
Then the Write to log repo action saves the data.
The workflow uses any kind of trigger and includes a Write to log repo action.
You define the action's input JSON schema using one or more of these fields. The combination of the data from all three fields is saved.
Data to include
Click in this field and select one or more items, one at a time.
To simplify querying in the repo, select Remove action prefix.
Raw JSON data
Select an option. These options are only available if the trigger or previous actions have schemas that define a property of type string with Format set to Raw JSON string.
Custom JSON data
Enter your JSON directly in this field.
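The Raw JSON string formats mentioned above (one record, or multiple records) can be produced with standard JSON serialization, for example:

```python
import json

# One record, as a raw JSON string: {"foo": "bar"}
single = json.dumps({"foo": "bar"})

# Multiple records, as a raw JSON string array: [{"a": "b"}, {"c": "d"}]
multiple = json.dumps([{"a": "b"}, {"c": "d"}])

# Either string is a valid value for a property whose Format is Raw JSON string
assert json.loads(single) == {"foo": "bar"}
assert json.loads(multiple) == [{"a": "b"}, {"c": "d"}]
```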
Define or edit the rest of the workflow as needed, resolve any warnings or errors, and save it.
You can configure workflows to save output for you to view in the Falcon console or to act on programmatically.
If you define workflow output for an on-demand workflow, any workflow that runs the on-demand workflow can then use the output of that on-demand workflow.
When you create or edit a workflow, click Settings, followed by Select output data, and then select the output to include in the workflow executions.
This output is then available in the Execution log in the workflow's execution details.
To access that output, you have these options:
Go to Fusion SOAR > Fusion SOAR > Workflows, click the Execution log tab, click the workflow, and go through the execution details, where you can view the output or download it manually.
Download the execution details and its workflow output using APIs discussed in Fusion SOAR Workflow APIs.
Workflows typically fail because a host is offline or because of some other momentary connection issue. Under some conditions, Falcon retries failed actions up to a certain limit.
Go to Fusion SOAR > Fusion SOAR > Workflows and click the Execution log tab.
For a failed execution in the table, click Open menu and select Retry execution.
Fusion SOAR provides flexible error modeling capabilities to meet your specific requirements. You can define custom error responses and remediation steps within your workflows. For example, if a single action fails, your workflow can handle the failure and continue processing based on your defined error handling configuration.
Example modeling scenarios:
When any workflow fails, automatically send a detailed email containing the workflow execution URL and specific error information for debugging purposes.
If a service issue causes an action to fail or timeout, configure the workflow for one of these outcomes:
Alert the workflow author to review and retry the action.
Continue the workflow execution without the failed action's data.
During loop processing, if an action fails within an iteration, configure the workflow for one of these outcomes:
Send a Slack notification about the specific failure.
Continue processing with the next loop iteration.
In the Falcon console, go to Fusion SOAR > Fusion SOAR > Workflows.
To edit an existing workflow, click Open menu and select Edit Workflow. To create a new workflow, click Create workflow. For more info, see Create a workflow.
In the Add next panel or on the workflow canvas, click Customize playbook and select Condition.
In the Condition panel, turn on Advanced mode. For more info, see Conditions in Advanced mode using data transformation functions.
In the Workflow data panel, scroll to Workflow > Execution.
Drag Workflow execution errors into the expression editor.
Create a CEL expression to check for errors based on your use case. For more info about CEL expressions, see Fusion SOAR Data Transformation Functions. Examples of expressions you can create:
data['Workflow.Execution.Errors'].size() > 0
data['Workflow.Execution.Errors'].exists(e, e.Name == 'Timeout')
data['Workflow.Execution.Errors'].filter(e, e.Handled == "")
Optional. Input custom JSON sample data to test the expression.
Click Next.
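To see what those CEL expressions evaluate, here is a Python emulation over a hypothetical errors list (the Name and Handled fields follow the expressions above; real error records may carry more fields):

```python
# Hypothetical sample of Workflow.Execution.Errors
errors = [
    {"Name": "Timeout", "Handled": ""},
    {"Name": "SchemaValidation", "Handled": "retried"},
]

# data['Workflow.Execution.Errors'].size() > 0
assert len(errors) > 0

# data['Workflow.Execution.Errors'].exists(e, e.Name == 'Timeout')
assert any(e["Name"] == "Timeout" for e in errors)

# data['Workflow.Execution.Errors'].filter(e, e.Handled == "")
unhandled = [e for e in errors if e["Handled"] == ""]
# unhandled keeps only errors that have not been handled
```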
Important considerations:
When building error handling into your workflows, consider using the Test feature to validate your error handling paths before enabling the workflow. For more info, see Test and debug your workflow.
Review the Execution data to ensure errors are being handled as expected. For more info, see View the execution log.
Errors stop subsequent actions from executing by default. Actions won’t run if there are unhandled errors in the workflow. To ensure your action executes, handle any errors by using the Continue on failure setting or Resume after Error action. For more info, see Resume workflow execution after an error occurs.
Follow these steps to configure your workflow to resume execution after an error has occurred.
In the workflow canvas, create an Action.
To enable this feature, on the Action Execution settings tab, select the Continue on failure checkbox.
Click Next.
Optional. Add sequential actions or conditions to your workflow.
Save and publish your changes to the workflow.
For complex workflows with multiple parallel branches, you can use the Resume after Error action instead. This action shows where error handling occurs, making branch merging clearer in the workflow canvas. It provides the same functionality as the Continue on failure option.
Automate ServiceNow incident generation with the ServiceNow ITSM SOAR Actions plugin. You can customize which data to include on the incident. For example, to add host identifiers, configure the Get ServiceNow CI Computer action to include the ServiceNow configuration item (CI) ID and CI name fields.
Insert sequential actions to retrieve CI details before setting up the Create ServiceNow incident action:
When configuring actions for a workflow, search for and then select the Get ServiceNow CI Computer action.
From the Account dropdown, select the ServiceNow account to pull the CI details from. Set the Hostname dropdown. Then click Next. The action appears on the workflow canvas.
Hover over the action where you want to add the next action and then click the add icon and select Action.
In the Add action panel, search for and then select Create ServiceNow incident.
From the Account dropdown, select the same ServiceNow account as before.
Fill out the fields to include on the ServiceNow incident.
In the Data to include field, select the fields to add to the incident, including ServiceNow CI ID and ServiceNow CI name.
When you’re finished, click Next.
Configure workflows to create prefilled Jira tickets. You can customize Jira fields while setting up a notification action.
When configuring actions for a workflow, search for Create Jira issue and then expand the Atlassian list and select the Create Jira issue action.
From the Account dropdown list, select a Jira configuration.
From the Project dropdown list, select the project you want to assign the Jira ticket to.
Optional. Enter labels to attach to the Jira ticket.
Set a priority for the Jira ticket.
From the Issue type field, select a Jira issue type.
Enter a description and summary for the Jira ticket.
In the Data to include field, select which info to include on the ticket.
When you’re done, click Next and complete the workflow setup.
Build complex workflows by adding sequential actions to an existing Create Jira issue action. For example, you can attach a file associated with an endpoint detection to a Jira ticket to give additional context to an issue.
Add relevant comments to a Jira ticket generated within a workflow.
On the workflow canvas, find the Create Jira issue action and add an action after it.
From the action panel, find and select the Add Jira comment action.
From the Account dropdown list, select the same Jira configuration used in the Create Jira issue action.
From the Issue dropdown list, select the Jira issue ID from the Create Jira issue action.
In the Body field, enter text to include in the comment field on the Jira ticket.
In the Data to include field, select the Falcon data you want to include on the ticket.
When you’re done, click Next and complete the workflow setup.
For workflows triggered by endpoint or custom IOA detections, retrieve files associated with the detection and attach them to Jira tickets.
On the workflow canvas, find the Create Jira issue action and add an action after it.
From the action panel, find and select the Real Time Response Get file action.
Click Next.
After the Get file action, add another action.
From the action panel, find and select Add Jira attachment.
From the Account dropdown list, select the same Jira configuration used in the Create Jira issue action.
When you’re done, click Next and complete the workflow setup.
Generate Jira tickets and related Jira subtask tickets within a single workflow branch.
On the workflow canvas, find the Create Jira issue action and add an action after it.
From the action panel, find and select the Create Jira issue action.
When setting up the Jira template, in the Issue type field, select Subtask.
When you’re done, click Next and complete the workflow setup.
Configure workflows to integrate with third-party security platforms through prebuilt Falcon Foundry app templates. Available integrations include threat intelligence, identity and access management, DevOps security, and email security. You can use the prebuilt integrations as-is or customize them to your needs.
To create an integration action, you deploy a Foundry app template and then release and install the app to make the custom action available in Fusion SOAR.
Requires one or more of these subscriptions:
Default roles:
App Developer plus Workflow Author
Falcon Administrator
Tip: You can also go to Foundry > Foundry > Templates.
The Foundry Templates page opens.
The message Deployment in progress appears.
When the app has been deployed, the App overview page opens.
The message Releasing deployment appears, followed by the message Deployment released successfully.
Your app's catalog details page opens, showing the list of releases.
When your app has been installed, a notification appears. API operations in the app are now available as third-party integration actions.
Create a condition to confirm the API operation's output.
Be sure the output includes all the fields you need, especially fields you plan to use in conditions or as input to other actions.
Also, fields marked as required must be in the output. The action fails with a schema validation issue if any of the required fields are not in the output. In Foundry, only mark a field as required if you can guarantee the field is always in the output. If the field is dynamically populated, consider leaving all fields as optional and then using a condition in Fusion SOAR to check that the field has a value before using the field.
For more info about adding an action to a workflow, see Workflow actions. For more info about Foundry apps, see Foundry.
To edit a third-party integration action, you must edit its source Foundry app. You can add, edit, or remove API operations. You can also add other app capabilities.
A status of Operation succeeded (200) indicates a successful test.
If you have a workflow created in a Flight Control environment, the workflow can apply to multiple CIDs. To change the workflow’s CIDs, edit the workflow and click Edit CID selection.
For info about how to make other changes to the workflow, see Edit a workflow element.
Visit the Audit Log to review all changes made to your workflows.
Go to Fusion SOAR > Fusion SOAR > Workflows and click the Audit log tab.
Filter the list by Workflow name, Action (what was changed), or Modified by (who made the change).
Click a column header to sort the table.
Charlotte AI supports workflow creation in Fusion SOAR.
With Charlotte AI, you can create Fusion SOAR workflows without extensive Fusion SOAR expertise and without needing to manually select each trigger, condition, or action. Describe what you want to automate and Charlotte AI will generate the appropriate workflow. Charlotte AI’s response contains a workflow preview that you can open in Fusion SOAR for editing or saving. You can also click Show response details to see the steps that Charlotte AI took to search Fusion SOAR components and build the workflow.
For example, you can ask Charlotte AI to:
Create a workflow that runs every night at 11pm Eastern that runs a device query to see if any hosts are in RFM. If a host is in RFM, the workflow applies the Falcon grouping tag FalconGroupingTag/RFM to the host.
Create a workflow that sends an email whenever there is an EPP detection with a severity of Critical. The email should be sent to [email protected] with a subject of "Critical EPP detection" and the email body should contain a summary of the detection.
Charlotte AI can also help you find specific Fusion SOAR actions, triggers, and prebuilt playbooks. For example, you can ask: Is there a Fusion SOAR playbook for Identity Protection to identify and reset compromised passwords?
Configure Fusion SOAR workflow actions to leverage the capabilities of Charlotte AI.
Create a workflow action to use Charlotte AI's LLM Completion capabilities in your Fusion SOAR workflows.
Ask Charlotte AI to investigate a case from a Fusion SOAR workflow action.
Ask Charlotte AI to summarize a detection using a Fusion SOAR action.
Use two Fusion SOAR actions to initiate an investigation.
Fusion SOAR provides a dashboard so you can see activity for the last 90 days at a glance.
To see the dashboard, go to Fusion SOAR > Fusion SOAR > Dashboard or to Next-Gen SIEM > Fusion SOAR > Dashboard.
The dashboard provides this info for the last 90 days:
Executions from detections: A count of the detections that Fusion SOAR has responded to
Execution trend: Trend lines that show completed and failed workflow executions
Top triggers executed: Bar chart of the triggers used most to start workflow executions
Approval required: List of workflows that require approval
Next scheduled workflow: The workflow scheduled to run next
Recent executions: Table of the workflows most recently executed
On demand workflows: List of workflows you can run on demand
In addition, the dashboard links to more info whenever available.
Visit the Execution log to review every time your workflows have been triggered in the last 90 days.
Go to Fusion SOAR > Fusion SOAR > Workflows and click the Execution log tab.
Filter the list by Workflow name, Execution date, Execution status, Trigger, and Action.
Partial search terms require a trailing wildcard: detectio returns no results, while detectio* returns results for both detection and detections.
Click a column header to sort the table.
Click View execution details for any execution to open a quick view of information about the execution, including a link to the triggering event, the action taken, and the Execution Status (whether the workflow was fully completed).
The Execution log provides details about the actions taken in each workflow execution. When a workflow executes Real Time Response commands or initiates a VirusTotal hash lookup, the Output field shows what was returned, and the option to download is available for successful Get file actions.
Examine the complete view of a workflow’s executions to see which conditions were met each time it ran and where it might have failed.
Go to Fusion SOAR > Fusion SOAR > Workflows and click the Execution log tab.
For an execution in the table, click Open menu and select View execution.
In the panel, switch between the workflow’s execution records and see information about each execution, including a link to the triggering event and details about each action.
In the canvas, see a clear map of which conditions were met, which actions were taken, and where the workflow failed, if applicable.
When you have workflows with loops, you also see loop-specific info such as:
Loop source type (the item being looped through)
Status for the entire loop
Status for each iteration
Status for any nested loops
For sequential loops, the reason the loop stopped
For workflows, the possible status values are shown in this table.
| Status | Description |
|---|---|
| Completed | The workflow executed successfully and completed. |
| Failed | The workflow failed to execute properly. |
| In progress | The workflow is currently executing. |
| Action required | The workflow is pending approval before it can start. The status is the result of an action that requests input. You can grant approval in the execution log that shows all workflows by clicking View execution details for the workflow and approving. Alternatively, you can grant approval in the workflow's execution log by clicking the action and approving. |
For actions, the possible status values are shown in this table.
| Status | Description |
|---|---|
| Completed | The action executed successfully and completed |
| Failed | The action failed to execute properly |
| In progress | The action is currently executing or being retried |
| Pending | The action hasn't executed yet |
| Skipped | The action skipped execution |
For loops, possible status values are shown in this table.
| Status | Description | Appearance in the canvas |
|---|---|---|
| Completed | The loop ran successfully | Green outline and a green check mark icon |
| Failed | The loop ran unsuccessfully. If an action or condition within a nested loop failed, the parent loop also has a Failed status | Red outline and a red x icon |
| In progress | The loop is currently running | Gray outline and an in-progress icon |
| Pending | The loop didn't run or hasn't run yet | Gray outline; the details panel indicates no executions were performed |
| Skipped | The loop didn't run because a trigger wasn't activated or a condition wasn't met | Gray outline with an x icon |
You can query your Falcon Fusion SOAR workflow execution logs using Next-Gen SIEM's advanced event search. To get started, go to Next-Gen SIEM > Log management > Advanced event search. You can use these queries to build charts and dashboards for the workflows you're most interested in monitoring.
Workflow execution log queries use the standard advanced event search query syntax plus a set of fields specific to workflow data.
The Fusion SOAR Execution dashboard uses Next-Gen SIEM's advanced queries to show your workflow execution log data in a variety of ways. This dashboard is available to all Falcon Insight LogScale/Next-Gen SIEM customers.
To see the dashboard, go to Next-Gen SIEM > Log management > Dashboards, then search for Fusion SOAR Execution.
The charts in this dashboard include:
Execution trend: An area chart with total counts of succeeded and failed executions over time
Executions over time: An area chart with total counts of executions broken down by category
Human input response: A pie chart with a breakdown of executions requiring a human response
Mean time to trigger: The mean time it takes for an execution to trigger, in seconds
Recent executions: A table of the most recent executions and their status
Top actions executing across all workflows: A table of the top actions executed across all of your workflows
Top failing actions across all workflows: A table of the top failing actions across all of your workflows
Top triggers executed: A bar chart with total counts of the top 10 executed triggers
Total Detections Resolved by SOAR: A total count of the detections resolved by SOAR
Total executions: A total count of executions
Workflow action executions grouped by vendor and use case: A pie chart of workflow action executions broken down by vendor and use case
Workflow executions by action categories: A pie chart of workflow executions broken down by action category
Workflow executions by trigger categories: A pie chart of workflow executions by trigger categories
Your workflow log data retention is based on your paid subscription retention period.
These sample queries use the advanced event search CrowdStrike Query Language. For more info, see Advanced Event Search.
This query returns total counts of the most commonly executed triggers.
#repo=fusion
| execution_log_type = summary AND execution_log_subtype = start
| rename(field="trigger.data.Trigger.Category", as="Trigger category")
| top(["Trigger category"])
| rename(field="_count", as="Count")
This query returns a time chart that shows what trigger categories occur the most at specific times.
#repo=fusion
| "execution_log_type" = summary AND execution_log_subtype = "start"
| not "parent_execution_id" = * AND not root_execution_id = *
| timechart(span=1d, series=trigger.data.Trigger.Category)
This query returns total counts of workflow executions for each trigger category, along with their status.
#repo=fusion
| execution_log_type = summary AND execution_log_subtype = "end"
| rename(field="trigger.data.Trigger.Category", as="Trigger category")
| groupBy(["Trigger category", status])
This query returns details and durations of recent executions.
#repo=fusion
// only look for execution summary end event
| execution_log_type=summary AND execution_log_subtype = "end"
| not "parent_execution_id" = * AND not root_execution_id = *
| start := parseTimestamp(field="start_timestamp")
| end := parseTimestamp(field="end_timestamp")
| timeProcessed := end - start
| sort(start_timestamp, limit=200)
| formatDuration(timeProcessed, precision=2)
| format(format="%,.19s", field=[start_timestamp], as=start_time)
| format(format="%,.19s", field=[end_timestamp], as=end_time)
| rename(field="trigger.data.Trigger.ObservedTime", as="detect_time")
| rename(field="trigger.data.Trigger.Category", as="category")
| rename(field="trigger.data.Trigger.SourceEventID", as="source_event_id")
| rename(field="trigger.data.Trigger.SourceEventURL", as="source_event_url")
| select([definition_name, definition_id, definition_version, execution_id, category, status, detect_time, start_time, end_time, timeProcessed, source_event_id, source_event_url])
| rename(timeProcessed, as=Duration)
This query returns the mean time it took to trigger workflows in the selected time period.
#repo=fusion
// only look for execution summary start event
| execution_log_type=summary AND execution_log_subtype = "start"
| not "parent_execution_id" = * AND not root_execution_id = *
// calculate mean time from detect to trigger
| detect_time := parseTimestamp(field=trigger.data.Trigger.ObservedTime)
| trigger_time := parseTimestamp(field=trigger.data.Workflow.Execution.Time)
| time_to_trigger := (trigger_time - detect_time) / 1000
| avg(time_to_trigger, as=mttt)
| format(format="%,.2fs", field=[mttt], as=mttt)
| select([mttt])
This query returns execution summaries and activity execution details, including loop execution information.
#repo=fusion
| execution_log_type = summary OR execution_log_type = "details"
| (execution_id=?execID OR root_execution_id=?execID)
| sort(start_timestamp, order=asc)
This query returns execution summaries based on specific trigger category types. Replace ?category with a specific category type to see summaries for that category.
#repo=fusion | execution_log_type=summary | trigger.data.Trigger.Category = ?category
Use these tables to find workflow execution fields that you can query.
| Workflow execution field | Description |
|---|---|
| action | Data object that returns activity detail results as JSON |
| cid | Customer ID |
| definition_id | Workflow definition ID |
| definition_name | Workflow definition name |
| definition_version | Workflow definition version |
| end_timestamp | Timestamp when the workflow execution ended. Example: 2024-12-11T22:21:27.691Z |
| execution_id | Execution summary ID |
| execution_log_subtype | Execution log subtype: start, end, or loop_start |
| execution_log_type | Execution log type: summary or details |
| execution_log_version | The version of the execution log |
| parent_execution_id | Loop execution summary parent execution ID |
| root_execution_id | Loop execution summary root execution ID |
| start_timestamp | Timestamp when the workflow execution started. Example: 2024-12-11T22:21:11.196Z |
| status | Workflow execution status: Succeeded or Failed |
| trigger | Data object that returns trigger detail results as JSON |
To get notifications about workflows being created, updated, or deleted, create a workflow to monitor these operations. In the workflow, use the Audit event > Workflow trigger. After the trigger, create a condition that checks whether the Workflow operation parameter includes any of the desired operations. Then choose a notification action, such as Send email or Send Slack message.
The Webhook Ingest Errors dashboard helps you monitor and troubleshoot ingestion issues across all webhook-driven workflows. To accelerate triage and debugging, this dashboard provides real-time visibility into failed executions, error patterns, and trigger health.
This dashboard requires workflows that use inbound webhook in their triggers. For more info, see Create and manage triggers based on inbound webhooks.
For info about the roles related to these workflows, see Requirements.
To see the dashboard, go to Next-Gen SIEM > Log management > Dashboards, search for Webhook Ingest Errors, and then click Webhook Ingest Errors in the results.
Using the dashboard, you can perform tasks such as these:
Detect and resolve misconfigured webhook integrations, such as missing headers or invalid schemas
Monitor webhook delivery for email phishing, inbound triggers, and other event sources
Validate delivery and schema behavior in staging or production
Collect info for troubleshooting with external vendors
The dashboard provides these charts:
Total Failures Count: A count of the total number of webhook trigger failures over the selected time period
Error by Subtype: Pie chart of the failure types, such as authentication errors like missing headers, invalid schema, and so on
Subtype Errors Over Time: Time series that shows how different error types trend over time to help identify spikes or regressions
Error Subtype Details: Table view with specific error categories, counts, and detailed error messages, such as failed to find header
You can filter these charts by webhook ID, webhook Name, and time window, either live or historical.
Also, the dashboard supports read/write access, and you can clone it to create custom views across multiple webhooks.
For information about techniques to investigate other webhook issues, see Inspect webhook issues.
Automate complex workflows based on prebuilt playbooks for common workflows that you can customize for your organization.
Automate complex workflows based on prebuilt Fusion SOAR playbooks. Fusion SOAR includes playbooks for common workflows that you can customize for your organization’s needs to maximize efficiency.
Subscriptions: Available playbooks depend on your Falcon subscriptions
Sensor support: All supported Falcon sensors for Windows, macOS, and Linux
Roles:
Falcon Administrator
Workflow Author
When you set up a playbook, you often need to customize certain actions.
Click Create workflow.
In the Create workflow window, under Workflow Playbooks, find and select a workflow, and then click Next.
The playbook opens in the workflow canvas.
Click Customize playbook and follow the setup steps.
Available playbooks depend on your CrowdStrike subscriptions.
Transform data in actions, conditions, loops, loop output, and workflow output.
In actions, condition nodes, While loop conditions, loops, loop output, and workflow output, you can transform data and write more expressive conditions using data transformation functions based on Common Expression Language (CEL) functions, as defined in the google/cel-spec Language Definition.
With data transformation functions, you can complete tasks such as these:
Advanced conditions and variable comparison
Example: Compare an IP address from event search results with one retrieved for a particular host.
Extracting properties with specific key names from a JSON object
Example: Extract a specific property from a JSON object.
List management
Example: In a sequential loop, check whether hosts meet specific criteria and, if so, update a custom variable to add them to a new list for further processing.
Variables for non-strings
Example: Loop over a list of hosts and add them to a host group without exceeding a limit of 5 hosts per group. The host group query action returns the size of the host group. You then update a custom variable with that value and use it in a While loop condition.
Map lookup
Example: Assign tags to a detection based on the SHA of the triggering file. Create a custom variable map keyed by the SHA with the tag name as the value. Then assign tags to a detection based on the SHA of the triggering file by performing a map lookup and adding the corresponding tag.
Timezone adjustments
Example: Update event query results to reflect a user's timezone.
Addition transforms
Example: Loop over a list of hosts and check the host groups; if the host is in a certain group, increment a count.
Regex string extraction
Example: Extract a hostname from a URL using a string regex transform.
Example: Check whether an IP address is external.
Data formatting
Example: Format an array of users into a bulleted list with HTML markup to send in a message, such as an email.
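As a plain Python illustration of the data formatting use case above (this is a sketch of the idea only; in Fusion SOAR you would express the same transformation with data transformation functions in the expression builder, and the user names here are made up):

```python
# Turn a list of users into an HTML bulleted list for a message body.
# The 'users' values are hypothetical placeholder data.
users = ["alice", "bob"]
html = "<ul>" + "".join(f"<li>{u}</li>" for u in users) + "</ul>"
print(html)  # <ul><li>alice</li><li>bob</li></ul>
```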
You can use data transformation functions in any field that has an associated Expressions and data transformations icon. To use data transformation functions, click Expressions and data transformations to open the expression builder. Using the builder, you can choose a view that helps you create your expression or a view that lets you code the expression directly.
You can also use the expression builder to describe your data transformation goals to the Charlotte AI Data Transformation Agent in everyday language. Charlotte AI understands your requirements in plain language and generates the appropriate expressions for your data transformation needs. For more info, see Conditions in Advanced mode using data transformation functions.
CrowdStrike provides the standard CEL functions as well as some extensions.
Standard functions
For info about the standard functions, see Standard Definitions.
Extensions
For info about the supported extensions, see Strings, Math, Lists, Sets, and TwoVarComprehensions.
CrowdStrike extensions
For info about these extensions, see CrowdStrike extensions.
CrowdStrike provides the extensions in the following sections.
The table is sorted logically.
| Signature | Example | Example result | Description |
|---|---|---|---|
| cs.base64.encode(<string>) | cs.base64.encode('hello') | "aGVsbG8=" | Base64 encodes the value |
| cs.base64.decode(<string>) | cs.base64.decode('aGVsbG8=') | "hello" | Base64 decodes the value |
Example use cases:
An API call needs to return base64-encoded data
An API call requires data to be base64-encoded
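The same encode/decode round trip can be sketched with Python's standard library; this illustrates the semantics of the table's examples and is not the Fusion SOAR implementation itself:

```python
import base64

# Stdlib analogues of cs.base64.encode / cs.base64.decode.
def b64_encode(s: str) -> str:
    return base64.b64encode(s.encode()).decode()

def b64_decode(s: str) -> str:
    return base64.b64decode(s).decode()

print(b64_encode('hello'))    # aGVsbG8=
print(b64_decode('aGVsbG8='))  # hello
```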
The table is sorted logically.
| Signature | Example | Example result | Description |
|---|---|---|---|
| cs.cidr.valid(<string>) | cs.cidr.valid('10.0.0.0/8') && cs.cidr.valid('::1/128') | true | Returns true if a valid CIDR |
| cs.cidr.ip(<string>) | cs.cidr.ip('192.168.0.1/24') | "192.168.0.1" | Returns the IP-address representation of the CIDR |
| cs.cidr.containsCIDR(<string1>, <string2>) | cs.cidr.containsCIDR('192.168.0.0/16', '192.168.10.0/24') | true | Returns true if the CIDR contains the provided CIDR |
| cs.cidr.masked(<string>) | cs.cidr.masked('192.168.0.1/24') | "192.168.0.0/24" | Returns the CIDR with a masked prefix—that is, the canonical form of the network |
| cs.cidr.prefixLength(<string>) | cs.cidr.prefixLength('192.168.0.0/16') | 16 | Returns the prefix length of the CIDR in bits—that is, the number of bits in the mask |
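The behavior of these helpers can be approximated with Python's stdlib ipaddress module; this is an illustration of the semantics in the table, not the real cs.cidr functions:

```python
import ipaddress

# Stdlib sketches of the cs.cidr helpers.
def cidr_valid(s: str) -> bool:
    try:
        ipaddress.ip_network(s, strict=False)
        return True
    except ValueError:
        return False

def cidr_ip(s: str) -> str:
    # The address part before the mask, e.g. '192.168.0.1' from '192.168.0.1/24'.
    return s.split('/')[0]

def cidr_contains_cidr(outer: str, inner: str) -> bool:
    return ipaddress.ip_network(inner, strict=False).subnet_of(
        ipaddress.ip_network(outer, strict=False))

def cidr_masked(s: str) -> str:
    # Canonical network form, e.g. '192.168.0.0/24'.
    return str(ipaddress.ip_network(s, strict=False))

def cidr_prefix_length(s: str) -> int:
    return ipaddress.ip_network(s, strict=False).prefixlen
```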
The table is sorted alphabetically.
| Signature | Example | Example result | Description |
|---|---|---|---|
| cs.csv.parse(<string>) | cs.csv.parse('name,age\nbob,22') | [["name","age"],["bob","22"]] | Parses a CSV string and returns a list of lists |
| cs.csv.parseMaps(<string>) | cs.csv.parseMaps('name,age\nbob,22') | [{"name":"bob","age":"22"}] | Parses a CSV string and returns a list of maps |
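The two parsing modes correspond to Python's csv.reader and csv.DictReader; a stdlib sketch of the table's examples (not the Fusion SOAR implementation):

```python
import csv
import io

# Like cs.csv.parse: list of lists, header row included.
def csv_parse(s: str):
    return [row for row in csv.reader(io.StringIO(s))]

# Like cs.csv.parseMaps: list of maps keyed by the header row.
def csv_parse_maps(s: str):
    return [dict(row) for row in csv.DictReader(io.StringIO(s))]

print(csv_parse('name,age\nbob,22'))       # [['name', 'age'], ['bob', '22']]
print(csv_parse_maps('name,age\nbob,22'))  # [{'name': 'bob', 'age': '22'}]
```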
The table is sorted alphabetically.
| Signature | Example | Example result | Description |
|---|---|---|---|
| cs.hash.md5(<string>) | cs.hash.md5('hello') | "5d41402abc4b2a76b9719d911017c592" | Calculates the MD5 hash |
| cs.hash.sha1(<string>) | cs.hash.sha1('hello') | "aaf4c61ddcc5e8a2dabede0f3b482cd9aea9434d" | Calculates the SHA1 hash |
| cs.hash.sha256(<string>) | cs.hash.sha256('hello') | "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824" | Calculates the SHA256 hash |
| cs.hash.sha512(<string>) | cs.hash.sha512('hello') | "9b71d224bd62f3785d96d46ad3ea3d73319bfbc2890caadae2dff72519673ca72323c3d99ba5c11d7c7acc6e14b8c5da0c4663475c2e5c3adef46f73bcdec043" | Calculates the SHA512 hash |
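These digests match what Python's hashlib produces for the same input, which can be handy for sanity-checking a workflow expression; a stdlib illustration, not the Fusion SOAR functions:

```python
import hashlib

# Hex digests of 'hello', matching the cs.hash examples above.
def md5_hex(s: str) -> str:
    return hashlib.md5(s.encode()).hexdigest()

def sha256_hex(s: str) -> str:
    return hashlib.sha256(s.encode()).hexdigest()

print(md5_hex('hello'))     # 5d41402abc4b2a76b9719d911017c592
print(sha256_hex('hello'))  # 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
```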
The table is sorted logically.
| Signature | Example | Example result | Description |
|---|---|---|---|
| cs.hex.encode(<string>) or cs.hex.encode(<bytes>) | cs.hex.encode('hello') | "68656c6c6f" | Encodes a string or bytes to its hex representation |
| cs.hex.decode(<string>) | cs.hex.decode('68656c6c6f') | "hello" | Decodes a hex-encoded string |
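In Python the same round trip is built into bytes; a stdlib sketch of the table's examples (not the Fusion SOAR implementation):

```python
# Stdlib analogues of cs.hex.encode / cs.hex.decode.
def hex_encode(s: str) -> str:
    return s.encode().hex()

def hex_decode(h: str) -> str:
    return bytes.fromhex(h).decode()

print(hex_encode('hello'))       # 68656c6c6f
print(hex_decode('68656c6c6f'))  # hello
```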
The table is sorted logically.
| Signature | Example | Example result | Description |
|---|---|---|---|
| cs.ip.valid(<string>) | cs.ip.valid('4.4.4.4') | true | Returns true if an IPv4 or IPv6 address |
| cs.ip.isV4(<string>) | cs.ip.isV4('8.8.8.8') | true | Returns true if an IPv4 address |
| cs.ip.isV6(<string>) | cs.ip.isV6('2001:0db8:85a3:0000:0000:8a2e:0370:7334') | true | Returns true if an IPv6 address |
| cs.ip.isLoopback(<string>) | cs.ip.isLoopback('::1') && cs.ip.isLoopback('127.0.0.0') | true | Returns true if a loopback IP address |
| cs.ip.isPrivate(<string>) | cs.ip.isPrivate('fc00::') && cs.ip.isPrivate('10.255.0.0') | true | Returns true if a private IP address |
| cs.ip.inCIDR(<string1>, <string2>) | cs.ip.inCIDR('10.0.0.0', '10.0.0.0/8') | true | Returns true if the IP address is in the provided CIDR |
Example use case:
Check whether a string returned from an action is formatted as an IPv4 or IPv6 address before using it as an input for another action that requires a string formatted as an IPv4 or an IPv6 address
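Python's ipaddress module exposes equivalent checks, which is a convenient way to reason about what these predicates return; a stdlib illustration only, not the cs.ip functions themselves:

```python
import ipaddress

# Stdlib sketches of the cs.ip predicates.
def ip_valid(s: str) -> bool:
    try:
        ipaddress.ip_address(s)
        return True
    except ValueError:
        return False

def ip_is_v4(s: str) -> bool:
    return ipaddress.ip_address(s).version == 4

def ip_is_loopback(s: str) -> bool:
    return ipaddress.ip_address(s).is_loopback

def ip_is_private(s: str) -> bool:
    return ipaddress.ip_address(s).is_private

def ip_in_cidr(ip: str, cidr: str) -> bool:
    return ipaddress.ip_address(ip) in ipaddress.ip_network(cidr, strict=False)
```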
The table is sorted logically.
| Signature | Example | Description |
|---|---|---|
| cs.json.valid(<string>) | cs.json.valid('{"hello": "world"}') | Returns a boolean indicating whether the string is valid JSON |
| cs.json.encode(<dyn>) | cs.json.encode(data) | Encodes a JSON object into a string |
| cs.json.pretty(<dyn>) | cs.json.pretty(data) | Encodes a JSON object into a pretty-printed string—that is, indented and with newlines |
| cs.json.decode(<string>) | cs.json.decode('{"hello": "world"}') | Returns a JSON object decoded, or parsed, from a string |
| cs.json.escape(<string>) | cs.json.escape('hello "world"') | Escapes a string to make it safe as a JSON field value. Note: If you have an Array/List or a Map/Object that you want to use as a string value, you must first cs.json.encode() it to get a string, and then cs.json.escape() it to make it safe as a value. |
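Python's json module mirrors this encode/decode/escape behavior; a stdlib sketch of the semantics (not the Fusion SOAR functions):

```python
import json

data = {"hello": "world"}

# Like cs.json.valid.
def json_valid(s: str) -> bool:
    try:
        json.loads(s)
        return True
    except ValueError:
        return False

encoded = json.dumps(data)                   # like cs.json.encode
pretty = json.dumps(data, indent=2)          # like cs.json.pretty
decoded = json.loads('{"hello": "world"}')   # like cs.json.decode
# Like cs.json.escape: encode as a JSON string, then drop the outer quotes.
escaped = json.dumps('hello "world"')[1:-1]
```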
The table is sorted alphabetically.
| Signature | Example | Example result | Description |
|---|---|---|---|
| cs.list.chunk(<list>, <size>) | cs.list.chunk(["hi", "ho", "he", "hu"], 2) | [["hi", "ho"], ["he", "hu"]] | Chunks a list into sub-lists of the specified size |
| cs.list.shuffle(<list>) | cs.list.shuffle([1, 2, 3]) | [3, 1, 2] | Returns a shuffled version of the provided list; the original list remains unchanged |
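Both behaviors are a few lines of Python; a stdlib sketch to illustrate the table's semantics, including the point that the original list is left unchanged:

```python
import random

# Like cs.list.chunk: split a list into sub-lists of the given size.
def list_chunk(lst, size):
    return [lst[i:i + size] for i in range(0, len(lst), size)]

# Like cs.list.shuffle: shuffle a copy so the original stays unchanged.
def list_shuffle(lst):
    out = list(lst)
    random.shuffle(out)
    return out
```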
The table is sorted logically.
| Signature | Example | Example result | Description |
|---|---|---|---|
| cs.map.merge([<map1>, <map2>, ..., <mapN>]) | cs.map.merge([{"hi": "ho"}, {"he": "be"}]) | {"hi": "ho", "he": "be"} | Merges 2 or more maps into a single map. The merge is not recursive. |
| cs.map.mergeDeep([<map1>, <map2>, ..., <mapN>]) | cs.map.mergeDeep([{"a": 1, "b": {"c": 2, "d": 3}}, {"e": 4, "b": {"c": 5, "f": 6}}]) | {"a": 1, "b": {"c": 5, "d": 3, "f": 6}, "e": 4} | Deep merges 2 or more maps into a single map. The merge is recursive. |
| cs.map.set(<map>, <key>, <value>) | cs.map.set({"hi": "ho"}, "a", "1") | {"hi": "ho", "a": "1"} | Returns a new map with the key inserted with the value. The original map remains unchanged. |
| cs.map.remove(<map>, <key>) | cs.map.remove({"a": {"b": 1, "c": 2}}, "a.b") | {"a": {"c": 2}} | Returns a new map with the key removed. The original map remains unchanged. |
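The difference between a shallow merge and a deep (recursive) merge is easiest to see in code; a Python sketch of the semantics shown in the table (not the Fusion SOAR implementation):

```python
# Like cs.map.merge: shallow merge, later maps win on key collisions.
def map_merge(maps):
    out = {}
    for m in maps:
        out.update(m)
    return out

# Like cs.map.mergeDeep: recurse when both values are maps.
def map_merge_deep(maps):
    def merge2(a, b):
        out = dict(a)
        for k, v in b.items():
            if k in out and isinstance(out[k], dict) and isinstance(v, dict):
                out[k] = merge2(out[k], v)
            else:
                out[k] = v
        return out
    out = {}
    for m in maps:
        out = merge2(out, m)
    return out
```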
The table is sorted alphabetically.
| Signature | Example | Example result | Description |
|---|---|---|---|
| cs.math.acos(<val>) | cs.math.acos(-1) | 3.14159... | Returns the arc cosine of the provided value |
| cs.math.asin(<val>) | cs.math.asin(0.5) | 0.52360... | Returns the arc sine of the provided value |
| cs.math.average(<list>) | cs.math.average([1, 2, 3]) | 2 | Returns the average of all the elements in the list |
| cs.math.cos(<val>) | cs.math.cos(0.5) | 0.87758... | Returns the cosine of the provided value |
| cs.math.log10(<val>) | cs.math.log10(9.1) | 0.95904... | Returns the base-10 logarithm of the provided value |
| cs.math.log2(<val>) | cs.math.log2(9.1) | 3.18586... | Returns the base-2 logarithm of the provided value |
| cs.math.median(<list>) | cs.math.median([1, 6, 17]) | 6 | Returns the median of all the elements in the list |
| cs.math.pow(<base>, <exp>) | cs.math.pow(2, 3) | 8 | Returns the base raised to the power of the exponent |
| cs.math.random(<x>, <y>) | cs.math.random(5, 10) | 7 | Returns a random number between x (inclusive) and y (exclusive) |
| cs.math.sin(<val>) | cs.math.sin(0.5) | 0.47943... | Returns the sine of the provided value |
| cs.math.sqrt(<val>) | cs.math.sqrt(9) | 3 | Returns the square root of the provided value |
| cs.math.sum(<list>) | cs.math.sum([1, 2, 3]) | 6 | Returns the sum of all the elements in the list |
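The table's example results can be reproduced with Python's math and statistics modules, which follow the same conventions; a stdlib illustration only:

```python
import math
from statistics import mean, median

# Stdlib counterparts to a few cs.math functions.
acos_pi = math.acos(-1)        # ~3.14159 (pi)
avg = mean([1, 2, 3])          # 2
mid = median([1, 6, 17])       # 6
power = math.pow(2, 3)         # 8.0
root = math.sqrt(9)            # 3.0
total = sum([1, 2, 3])         # 6
```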
| Signature | Example | Example result | Description |
|---|---|---|---|
| cs.misc.zscalerObfuscateAPIKey(<string>, <timestamp>) | cs.misc.zscalerObfuscateAPIKey('YourAPIKeyGoesHere', cs.timestamp.now()) | 'YeuYYYuIruuu' | Obfuscates the Zscaler API key according to the Zscaler specification. For more info, see https://help.zscaler.com/zia/getting-started-zia-api#CreateSession. |
The table is sorted logically.
| Signature | Example | Example result | Description |
|---|---|---|---|
| cs.net.urlEncode(<string>) | cs.net.urlEncode('[email protected] more') | "a%40crowdstrike.com+more" | URL encodes a string |
| cs.net.urlDecode(<string>) | cs.net.urlDecode('a%40crowdstrike.com+more') | "[email protected] more" | URL decodes a string |
| cs.net.parseURL(<string>) | cs.net.parseURL('https://www.crowdstrike.com/search?id=foo&id=bar&baz=bip#123') | {"scheme": "https", "host": "www.crowdstrike.com", "domain": "crowdstrike.com", "path": "/search", "port": 80, "query": {"id": ["foo", "bar"], "baz": ["bip"]}, "fragment": "123", "tld": "com"} | Returns the URL parsed |
| cs.net.isURL(<string>) | cs.net.isURL('http://crowdstrike.com') | true | Returns true if a URL. Note: Does not validate that the URL is reachable or resolvable |
| cs.net.isEmail(<string>) | cs.net.isEmail('[email protected]') | true | Returns true if formatted as an email address |
| cs.net.htmlRemove(<string>) | cs.net.htmlRemove('<b>hi</b>') | "hi" | Returns the string with all HTML tags removed |
| cs.net.htmlEncode(<string>) | cs.net.htmlEncode('<b>crowdstrike</b><p>yes</p>') | "&lt;b&gt;crowdstrike&lt;/b&gt;&lt;p&gt;yes&lt;/p&gt;" | HTML encodes a string |
| cs.net.htmlDecode(<string>) | cs.net.htmlDecode('&lt;b&gt;crowdstrike&lt;/b&gt;&lt;p&gt;yes&lt;/p&gt;') | "<b>crowdstrike</b><p>yes</p>" | HTML decodes a string |
Example use case:
Parse a URL returned from an API call and check whether the domain is an internal domain
The table is sorted logically.
| Signature | Example | Example result | Description |
|---|---|---|---|
| cs.string.repeat(<string>, <count>) | cs.string.repeat('xyz', 2) | "xyzxyz" | Repeats a string the specified number of times |
| cs.string.camelCase(<string>) | cs.string.camelCase('Hello World') | "helloWorld" | Returns the string camel-cased. Removes non-ASCII characters, whitespace, dots, hyphens, and underscores. |
| cs.string.capitalize(<string>) | cs.string.capitalize('hello THERE') | "Hello there" | Capitalizes the first letter and lowercases the remaining letters. |
| cs.string.snakeCase(<string>) | cs.string.snakeCase('Hello World') | "hello_world" | Returns the string snake-cased. Replaces non-ASCII characters, whitespace, dots, and hyphens with an underscore. |
| cs.string.titleize(<string>) | cs.string.titleize('hello world') | "Hello World" | Capitalizes the first letter of each word and lowercases all the other letters. |
| cs.string.truncate(<string>, <length>) | cs.string.truncate('hello world', 5) | "he..." | Truncates a string to the specified length. If the string doesn't exceed the specified length, it's returned as is. Otherwise, an ellipsis is placed where the truncation occurred, and the 3 characters of the ellipsis count toward the string length. |
| cs.string.ltrim(<string>) | cs.string.ltrim(' hello ') | "hello " | Trims the left-hand side of the string by removing all whitespace. |
| cs.string.rtrim(<string>) | cs.string.rtrim(' hello ') | " hello" | Trims the right-hand side of the string by removing all whitespace. |
| cs.string.ljust(<string>, <width>) or cs.string.ljust(<string>, <width>, <char>) | cs.string.ljust('hello', 10) or cs.string.ljust('hello', 10, '.') | "hello     " or "hello....." | Returns the string left-justified with the specified width. If the padding character is omitted, the default is a space. |
| cs.string.rjust(<string>, <width>) or cs.string.rjust(<string>, <width>, <char>) | cs.string.rjust('hello', 10) or cs.string.rjust('hello', 10, '.') | "     hello" or ".....hello" | Returns the string right-justified with the specified width. If the padding character is omitted, the default is a space. |
| cs.string.transliterate(<string>) | cs.string.transliterate('héllô') | "hello" | Returns the string with Unicode characters converted to their ASCII equivalents. |
| cs.string.find(<string>, <regex>) | cs.string.find('hello 123', '[0-9]+') | "123" | Returns the first substring that matches the provided regular expression. |
| | | ["123", "234"] | Returns all substrings matching the provided regular expression, with an optional limit. |
| cs.string.replaceRegex(<string>, <regex>, <replacement>) | cs.string.replaceRegex('hello 123 234', '[0-9]+', 'wow') | "hello wow wow" | Replaces all substrings matching the provided regular expression with a string literal replacement. |
| cs.string.levenshtein(<string1>, <string2>) | cs.string.levenshtein('abc', 'abd') | 1 | Returns the Levenshtein distance between the two strings. |
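To make two of the less obvious behaviors concrete, here is a minimal Python sketch of the truncation rule (the ellipsis counts toward the requested length) and the Levenshtein distance. These are illustrative equivalents, not the Fusion SOAR implementations.

```python
def truncate(s: str, length: int) -> str:
    # Mirrors the cs.string.truncate rule: the 3-character ellipsis
    # counts toward the requested length
    if len(s) <= length:
        return s
    return s[: max(length - 3, 0)] + "..."

def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance, as reported by
    # cs.string.levenshtein
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

truncate('hello world', 5)   # → "he..."
levenshtein('abc', 'abd')    # → 1
```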
The table is sorted alphabetically.
| Signature | Examples | Example result | Description |
|---|---|---|---|
| cs.table.ascii(<data>) | | | Returns an ASCII table as a string from the provided data. The <data> argument can be one of these formats: |
| | | | Returns an HTML table as a string from the provided data. Note: When you use the Send email action with the output type set to HTML, use "Pre" with this extension to render the table correctly for the person viewing the email. |
| cs.table.markdown(<data>) | | | Returns a Markdown table as a string from the provided data. The <data> argument can be one of these formats: |
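As a rough illustration of what a table extension produces, the sketch below renders a list of dictionaries (one per row) as a Markdown table string. This is an assumed input shape for illustration; cs.table.markdown runs inside Fusion SOAR and may accept other formats.

```python
def markdown_table(rows: list[dict]) -> str:
    # Illustrative stand-in for cs.table.markdown: build a Markdown
    # table from a list of dictionaries, one dictionary per row
    headers = list(rows[0])
    lines = [
        "| " + " | ".join(headers) + " |",
        "|" + "|".join("---" for _ in headers) + "|",
    ]
    for row in rows:
        lines.append("| " + " | ".join(str(row.get(h, "")) for h in headers) + " |")
    return "\n".join(lines)
```

For example, `markdown_table([{"host": "web01", "status": "RFM"}])` yields a header row, a separator row, and one data row.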
The table is sorted logically.
| Signature | Example | Description |
|---|---|---|
| cs.timestamp.now() | cs.timestamp.now() | Returns the current time in UTC as a timestamp object |
| cs.timestamp.format(<timestamp>, <layout>, <timezone>) | cs.timestamp.format(data['Trigger.LastUpdated'], 'RFC822'); a call with a custom layout, for a "Trigger.ObservedTime" value of "2025-07-05T04:32:31Z", results in "2025-07-05" | Formats a timestamp object into a string using the specified layout or format, such as "RFC3339" or a custom format like "Mon Jan 02 15:04:05 -0700 2006". Layout corresponds to definitions at https://pkg.go.dev/time#Layout. Timezone is a location name that corresponds to an entry in the IANA Time Zone database, such as "America/New_York". |
| cs.timestamp.parse(<string>, <layout>, <timezone>) | cs.timestamp.parse('2025-01-02 10:01:22', 'RFC3339') | Returns a timestamp object by parsing a string according to the specified layout or format, such as "RFC3339" or a custom format like "Mon Jan 02 15:04:05 -0700 2006". Layout corresponds to definitions at https://pkg.go.dev/time#Layout. Timezone is a location name that corresponds to an entry in the IANA Time Zone database, such as "America/New_York". |
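The format/parse pair can be illustrated in Python. Note the layouts differ: Fusion SOAR uses Go reference layouts ("Mon Jan 02 15:04:05 -0700 2006"), while this sketch uses Python strftime codes, so the layout strings below are translations, not the literal Go layouts.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def format_ts(ts: datetime, layout: str, tz: str = "UTC") -> str:
    # Analog of cs.timestamp.format: convert to the IANA timezone,
    # then render with the given layout
    return ts.astimezone(ZoneInfo(tz)).strftime(layout)

def parse_ts(s: str, layout: str, tz: str = "UTC") -> datetime:
    # Analog of cs.timestamp.parse: parse with the given layout and
    # attach the IANA timezone
    return datetime.strptime(s, layout).replace(tzinfo=ZoneInfo(tz))

format_ts(datetime(2025, 7, 5, 4, 32, 31, tzinfo=timezone.utc), "%Y-%m-%d")
# → "2025-07-05"
```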
The table is sorted alphabetically.
| Signature | Example | Description |
|---|---|---|
| cs.uuid.canonical(<string>) | cs.uuid.canonical('EFF9770113E640FF890B6F313E7CCA6B') | Returns the UUID in its canonical form, that is, lowercase with hyphens. The result of the example is eff97701-13e6-40ff-890b-6f313e7cca6b. |
| cs.uuid.new() | cs.uuid.new() | Returns a UUID v4 as a string, such as c72db0e4-e165-414e-9dae-32009a6fa97b. |
| cs.uuid.valid(<string>) | cs.uuid.valid('c72db0e4-e165-414e-9dae-32009a6fa97b') | Returns whether a string is a valid UUID. Note: This returns true for any valid UUID, regardless of version, such as UUIDv4 vs. UUIDv7. |
Example use case:
Check whether a string returned from an API call is formatted as a CrowdStrike sensor UUID
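Python's standard library mirrors this behavior closely, which makes the canonicalization and validation rules easy to see. This is an illustrative equivalent of cs.uuid.canonical and cs.uuid.valid, not their implementation.

```python
import uuid

def canonical(s: str) -> str:
    # Mirrors cs.uuid.canonical: accepts hex with or without hyphens
    # and returns the lowercase, hyphenated canonical form
    return str(uuid.UUID(s.strip()))

def is_valid_uuid(s: str) -> bool:
    # Mirrors cs.uuid.valid: any UUID version is accepted
    try:
        uuid.UUID(s)
        return True
    except ValueError:
        return False

canonical('EFF9770113E640FF890B6F313E7CCA6B')
# → 'eff97701-13e6-40ff-890b-6f313e7cca6b'
```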
Review example playbooks in Fusion SOAR and adapt them to your needs.
Create a Jira ticket daily that lists hosts in RFM.
This workflow runs nightly at 11:45 PM Pacific time, retrieves each host in RFM, creates a Jira ticket with host attributes, and sends an email notification. You must specify one or more host groups to query. You can also update the default schedule trigger options, such as time zone and how often the workflow runs.
Subscription: All subscriptions with Fusion SOAR support
Click Create workflow.
In the Create workflow window, in the Workflow Playbooks section, select Create a Jira ticket nightly for hosts in reduced functionality mode (RFM) and click Next.
The playbook opens in the workflow canvas.
Click Customize playbook.
Resolve any validation errors.
Optional. In the Create Jira issue action, configure these fields:
Account: Select the Jira account to create the new issue in.
Project: Select the appropriate Jira project.
Labels: Optional. Include relevant labels.
Priority: Optional. Assign a priority.
Issue type: Select the appropriate issue type.
Description: Update the default description, if desired.
Summary: Update the default summary, if desired.
Data to include: Optional. By default, the workflow includes host details. You can update the included data, if desired.
Optional. In the Send email action, configure these fields:
Subject: Optional. Edit the subject line of the notification email.
Message: Optional. Edit the message body of the notification email.
Recipients: Enter the email addresses of the notification recipients.
Data to include: Optional. By default, the workflow includes Jira issue details and host details. Click the field to select additional details.
Make additional updates to the workflow as necessary. For example, you can update the default schedule trigger options, such as time zone and how often the workflow runs.
Update the name and description.
Set Status to On.
Click Save and exit.
Create a ServiceNow ticket when new unmanaged assets are seen so analysts can install the Falcon sensor on them.
Create a ServiceNow ticket when new unmanaged assets are seen so analysts can install the Falcon sensor on them. When assets are seen through asset discovery scanning with high confidence, a ServiceNow ticket is created and the asset triage status is set to Sensor install recommended.
Subscriptions: Falcon Exposure Management or Falcon Discover
Click Create workflow.
In the Create workflow window, in the Workflow Playbooks section, select Create a ServiceNow ticket and triage new unmanaged assets and click Next.
The playbook opens in the workflow canvas.
Click Customize playbook.
Resolve any validation errors.
Optional. In the Create ServiceNow incident action, configure these fields:
From the Account list, select your ServiceNow account and complete the ServiceNow incident ticket fields that appear.
Use data placeholders in the ticket fields, such as ${Hostname} or ${Manufacturer}. For more info, see Workflow actions.
Action recommended: Optional. Change the recommended action to take when this kind of asset is discovered. By default, the action is Recommend sensor install.
User assigned: Enter the Falcon username of the user assigned to install the sensor.
Description: Optional. Enter a description of the asset to give context to the assigned user.
Entity type: Select Entity type.
Discover Asset ID: Select Asset ID.
Update the name and description.
Set Status to On.
Click Save and exit.
For more info, see Asset Management: Assets.
Send an email to analysts when unauthorized applications are installed so the analysts can remove them.
Send an email to analysts when unauthorized applications are installed so the analysts can remove them. When applications in the application groups you specify are installed on managed assets, an email is sent.
Subscription: Falcon Exposure Management or Falcon Discover
Click Create workflow.
In the Create workflow window, in the Workflow Playbooks section, select Email notification on unauthorized application installation and click Next.
The playbook opens in the workflow canvas.
Click Customize playbook.
Resolve the validation errors.
Optional. In the Send email action, configure these fields:
Subject: Optional. Edit the subject line of the notification email.
Message: Optional. Edit the message body of the notification email.
Recipients: Enter the email addresses of the notification recipients.
Data to include: Optional. By default, the workflow includes the asset ID, application, and application groups. Click the field to select additional details.
Update the name and description.
Set Status to On.
Click Save and exit.
For more about monitoring applications and creating application groups, see Asset Management: Applications.
Automate and standardize the triage, enrichment, and remediation of user-reported phishing emails.
Automate and standardize the triage, enrichment, and remediation of user-reported phishing emails by integrating Microsoft 365 with Falcon Fusion SOAR.
Falcon Administrator
Workflow Author
CrowdStrike email phishing plugin. Your browser redirects to a Microsoft login page.
Your browser redirects back to the CrowdStrike Store.
Fusion SOAR includes several email phishing response playbooks as editable templates for popular use cases.
| Name | Falcon subscription | Additional requirements | Notes |
|---|---|---|---|
| Send notification for emails reported as phishing | Falcon Next-Gen SIEM | CrowdStrike Store integrations: Email Phishing Connector built for Microsoft 365 | Use this simple playbook to validate that the reported phishing emails are successfully sent to Fusion SOAR. |
| Email Phishing Response Playbook | Falcon Next-Gen SIEM | CrowdStrike Store integrations: Email Phishing Connector built for Microsoft 365 | |
| Email Phishing Response Playbook with Identity Threat Protection actions | Falcon Next-Gen SIEM and Falcon Identity Threat Protection | CrowdStrike Store integrations: Email Phishing Connector built for Microsoft 365, Proofpoint Email Protection SOAR Actions | |
Automatically add users and hosts to the Falcon Identity Protection watchlist in response to identity-based incidents.
Automatically add users and hosts to the Falcon Identity Protection watchlist in response to identity-based incidents that use MITRE ATT&CK® Credential Access tactics.
When Falcon detects an identity-based incident that uses Credential Access tactics, the workflow performs these actions:
Finds all user logins on that host for the 4 hours before the incident
Adds those users to the Identity Protection watchlist
Finds all the hosts in your environment that the watchlisted users logged into for the 4 hours before the incident
Adds those hosts to the Identity Protection watchlist
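The steps above can be sketched as a small Python function. The login-lookup and watchlist structures here are hypothetical stand-ins for the workflow's built-in Falcon actions; only the 4-hour lookback logic comes from the description above.

```python
from datetime import datetime, timedelta

LOOKBACK = timedelta(hours=4)  # lookback window from the playbook description

def respond(incident_host: str, incident_time: datetime, logins, watchlist):
    # logins: iterable of (user, host, login_time) tuples (assumed shape);
    # watchlist: dict with "users" and "hosts" sets (assumed shape)
    since = incident_time - LOOKBACK
    # 1. Users who logged in to the incident host in the lookback window
    users = {u for (u, h, t) in logins if h == incident_host and t >= since}
    # 2-3. Hosts those users logged in to during the same window
    hosts = {h for (u, h, t) in logins if u in users and t >= since}
    # 4. Add both to the watchlist
    watchlist["users"] |= users
    watchlist["hosts"] |= hosts
    return watchlist
```

In the real playbook, these lookups and additions are performed by Falcon Identity Protection workflow actions rather than in-memory sets.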
Subscription: Falcon Identity Protection
To complete the playbook setup, follow these steps:
Click Create workflow.
In the Create workflow window, in the Workflow Playbooks section, select Identity Protection watchlist update after Credential Access detection and click Next.
The playbook opens in the workflow canvas.
Click Next.
Click Save and exit.
Update the workflow name and description.
Set Status to On.
Click Save and exit.
Now, when Falcon detects an identity-based incident that uses Credential Access tactics, the workflow adds the relevant users and endpoints to the Identity Protection watchlist. For more info about identity-based incidents, see Identity-based Incidents, Detections, and Risks.
Automate Jira ticket creation for vulnerabilities, hosts, and remediations from the Falcon console.
Automate Jira ticket creation for vulnerabilities, hosts, and remediations from the Falcon console. Configure a workflow that generates a pre-filled Jira ticket when a user selects Create ticket in the Actions column of the Vulnerabilities page.
To complete the playbook setup, for each source type—vulnerabilities, hosts, and remediations—configure the following:
Assign the Jira account that generates the ticket
Customize the incident ticket fields
Assign the same Jira account to attach a vulnerabilities report to the incident ticket
Subscription: Falcon Exposure Management or Falcon Spotlight
Click Create workflow.
In the Create workflow window, in the Workflow Playbooks section, select Jira ticket creation for vulnerability management and click Next.
The playbook opens in the workflow canvas.
Click Customize playbook.
Resolve the validation errors, including:
From the Account list, select your Jira account.
The Jira ticket fields appear with the source type’s recommended parameters pre-filled.
${Trigger.Category.SpotlightUserAction.Title} and ${Trigger.Category.SpotlightUserAction.Description} placeholders correspond to the ticket name and description that users enter when they create a ticket.
In the Add Jira attachment action, select the same Jira account from the Create Jira issue action.
Optional. Configure other actions and their fields as needed.
Update the name and description.
Set Status to On.
Click Save and exit.
Users can now create Jira tickets from the Vulnerabilities page. For more info, see Create tickets.
Automatically execute sandbox analysis for high-risk machine learning endpoint detections.
Automatically execute sandbox analysis for high-risk machine learning endpoint detections. This reduces your response time in isolating high-risk threats.
For an endpoint detection where the tactic is machine learning and the severity is high or severe, the workflow executes these actions:
If the sandbox quota is greater than 80%, a notification is emailed.
If the sandbox quota is less than or equal to 80%, the file or associated SHA256 hash is submitted to the sandbox for default analysis. If the sandbox report threat score is greater than or equal to 80, the host is contained and a comment is added to the detection.
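The branching above can be summarized in a short sketch. The function names are hypothetical stand-ins for the workflow's Send email, sandbox submission, and Contain device actions; the 80% quota and 80 threat-score thresholds come from the description above.

```python
def handle_detection(quota_pct: float, threat_score_fn) -> str:
    # Sketch of the playbook's decision logic, not a Falcon API
    if quota_pct > 80:
        return "email-notification"        # quota nearly exhausted: notify only
    score = threat_score_fn()              # submit file/SHA256 for default analysis
    if score >= 80:
        return "contain-host-and-comment"  # high threat score: contain + comment
    return "no-further-action"
```

For example, a 90% quota short-circuits to a notification, while a 50% quota with a sandbox threat score of 85 leads to host containment.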
Subscription: Falcon Intelligence
To configure the playbook, customize the notification that’s emailed when the sandbox quota is over 80%.
Click Create workflow.
In the Create workflow window, the available playbooks appear in the Workflow Playbooks section.
Search for and select Machine learning detection sandbox analysis and click Next.
The workflow preview opens.
Click Customize playbook.
Resolve the validation errors.
Optional. In the Send email action, configure the notification fields:
Subject: Optional. Edit the subject line of the notification email.
Message: Optional. Edit the message body of the notification email.
Recipients: Enter the email addresses of the notification recipients.
Data to include: Optional. By default, the workflow includes sandbox usage details. Click the field to select additional details.
Update the name and description.
Set Status to On.
Click Save and exit.
Now, high-risk machine learning detections trigger sandbox analysis. For more info about sandbox analysis, see Getting Falcon Sandbox analysis.
Automate responses to Falcon OverWatch detections to minimize threat exposure time.
Automate responses to Falcon OverWatch detections to minimize threat exposure time.
When an endpoint detection is flagged by Falcon OverWatch, the workflow executes these actions:
Sets the endpoint detection status to In progress.
Contains the device where the detection occurred.
Adds the corresponding user to the Falcon Identity Protection watchlist.
Adds the corresponding endpoint to the Falcon Identity Protection watchlist.
Sends an email notification.
When an identity detection is flagged by Falcon OverWatch, the workflow executes these actions:
Adds the corresponding user to the Falcon Identity Protection watchlist.
Adds the corresponding source and destination endpoints to the Falcon Identity Protection watchlist.
Sends an email notification.
When an OverWatch generic detection is flagged by Falcon OverWatch, the workflow executes this action:
Sends an email notification.
Subscription: Falcon OverWatch and Falcon Identity Protection
To configure the playbook, customize the email notification.
Click Create workflow.
In the Create workflow window, in the Workflow Playbooks section, select Falcon OverWatch Detection Notification and Remediation and click Next.
The playbook opens in the workflow canvas.
Click Next.
Resolve any validation errors.
In the Send email action, configure the notification fields:
Subject: Optional. Edit the subject line of the notification email.
Message: Optional. Edit the message body of the notification email.
Recipients: Enter the email addresses of the notification recipients.
Data to include: Optional. By default, the workflow includes these details for each detection type:
OverWatch endpoint detections:
Severity (Display Name)
Description
Sensor host tags
Sensor hostname
Command Line
Image file name
URL
OverWatch identity detections:
Severity (Display Name)
Description
Start time
User name
User domain
User upn
Source endpoint name
Falcon link
OverWatch generic detections:
Severity (Display Name)
Description
Note from OverWatch Analyst
Category
Severity
Name
Detection ID
Click a field to select additional details.
Update the name and description.
Set Status to On.
Click Save and exit.
When OverWatch flags a detection, the workflow sends an email notification and performs containment and watchlist responses. The recipients can review the detection and take additional actions on the user, endpoint, and device. For more info, see Falcon OverWatch.
Consider the following additions to make your workflow even more effective, depending on your environment.
If your organization uses PagerDuty, add a Create a PagerDuty incident action to allow additional notification channels.
Ensure identity protection policies are configured. For more info, see Identity Protection Policy.
Review additional remediation responses related to identity protection. For more info, see Identity Protection in Fusion SOAR.
For more workflow playbooks, go to Fusion SOAR > Workflows. Click Create workflow and view the list under Workflow Playbooks.
Send an email notification after the diagnostic tool CSWinDiag runs.
For Windows hosts, automatically send an email notification after the diagnostic tool CSWinDiag runs. Recipients can troubleshoot sensor-related issues by reviewing telemetry and diagnostics for that host.
CSWinDiag generates logs with this naming format: CSWinDiag_<hostname>_<unique_file_ID>.zip. If a file with CSWinDiag in the filename is created or changed, the workflow uploads a copy of the file to the Falcon cloud and sends an email notification.
Subscription: Falcon Insight XDR, Falcon Identity Protection, Falcon Prevent, and Falcon FileVantage
To configure the playbook, customize the email notification.
Click Create workflow.
In the Create workflow window, in the Workflow Playbooks section, select Sensor diagnostic file collection and click Next.
The playbook opens in the workflow canvas.
Click Customize playbook.
Resolve any validation errors.
Optional. In the Send email action, configure the notification fields:
Subject: Optional. Edit the subject line of the notification email.
Message: Optional. Edit the message body of the notification email.
Recipients: Enter the email addresses of the notification recipients.
Data to include: Optional. By default, the workflow includes the file name, file path, rule group name, and policy. Click the field to select additional details.
Update the name and description.
Set Status to On.
Click Save and exit.
Now, when CSWinDiag runs on any Windows host, the workflow sends a notification to the email recipients. For info about CSWinDiag log files, see Real Time response commands: cswindiag.
Automate ServiceNow incident ticket creation for vulnerabilities, hosts, and remediations from the Falcon console.
Automate ServiceNow incident ticket creation for vulnerabilities, hosts, and remediations from the Falcon console. Configure a workflow that generates a pre-filled ServiceNow ticket when a user selects Create ticket in the Actions column of the Vulnerabilities page.
To complete the playbook setup, for each source type—vulnerabilities, hosts, and remediations—configure the following:
Assign the ServiceNow account that generates the ticket
Customize the incident ticket fields
Assign the same ServiceNow account to attach a vulnerabilities report to the incident ticket
Subscription: Falcon Exposure Management or Falcon Spotlight
Click Create workflow.
In the Create workflow window, in the Workflow Playbooks section, select ServiceNow incident creation for vulnerability management and click Next.
The playbook opens in the workflow canvas.
Click Customize playbook.
Resolve any validation errors.
Optional. In the Create ServiceNow incident action for the Source type is equal to Vulnerability condition:
From the Account list, select your ServiceNow account.
The ServiceNow incident ticket fields appear with the source type’s recommended parameters pre-filled.
${Trigger.Category.SpotlightUserAction.Title} and ${Trigger.Category.SpotlightUserAction.Description} placeholders correspond to the ticket name and description that users enter when they create a ticket.
Optional. Edit the ticket fields as needed.
Optional. Follow step 6 for the Source type is equal to Host and Source type is equal to Remediation conditions.
Update the name and description.
Set Status to On.
Click Save and exit.
Users can now create ServiceNow tickets from the Vulnerabilities page. For more info, see Create tickets.
Automate ServiceNow incident ticket creation with attached details for high-severity detections.
Automate ServiceNow incident ticket creation with attached details for high-severity detections.
When an endpoint detection with a severity of High or greater occurs, the workflow performs these actions:
Creates a ServiceNow incident ticket
Attaches a log of executed processes that occurred within an hour of the detection to the ticket
Attaches the command line history from the hour before the detection to the ticket
To complete the playbook setup, configure the following:
Assign the ServiceNow account that generates the ticket
Customize the incident ticket fields
Assign the same ServiceNow account to attach the executed processes and command line history to the incident ticket
Subscription: Falcon Insight XDR
Click Create workflow.
In the Create workflow window, in the Workflow Playbooks section, select ServiceNow ticket creation for high severity detections and click Next.
The playbook opens in the workflow canvas.
Click Next.
Optional. In the Create ServiceNow incident action, configure these fields:
From the Account list, select your ServiceNow account.
The ServiceNow incident ticket fields appear with the recommended detection parameters pre-filled.
Optional. Edit the ticket fields as needed.
In the Create ServiceNow attachment actions, from the Account list, select the same ServiceNow account from the previous step and click Next.
Update the name and description.
Set Status to On.
Click Save and exit.
Now, when a detection with a severity that’s greater than or equal to High occurs, the workflow generates a ServiceNow ticket with relevant details attached. For more info about the ServiceNow integration, see Create ServiceNow incidents with host details.
Receive notifications about workflow failures so you can quickly identify workflow errors.
Receive notifications about workflow failures so you can quickly identify workflow errors.
When a workflow execution has the execution status of Failed, the playbook workflow sends a detailed email.
Subscription: All subscriptions with Fusion SOAR support
To configure the playbook, customize the email notification.
Click Create workflow.
In the Create workflow window, in the Workflow Playbooks section, select Workflow execution failure notification and click Next.
The playbook opens in the workflow canvas.
Click Customize playbook.
Resolve any validation errors.
In the Send email action, configure the notification fields:
Subject: Optional. Edit the subject line of the notification email.
Message: Optional. Edit the message body of the notification email.
Recipients: Enter the email addresses of the notification recipients.
Data to include: Optional. By default, the workflow includes the workflow name, description, and execution time. Click the field to select additional details.
Update the name and description.
Set Status to On.
Click Save and exit.
Now, when any workflow execution fails, the playbook workflow sends a notification to the email recipients. For more info, see Monitor executions.