
Fusion Flow Bricks

Purpose: Explore this guide to discover the various types of Fusion Flow Bricks, along with examples of popular ones used in the Flows App. Learn how to set them up with easy-to-follow instructions, giving you an understanding of how they work and where they can be useful.
Last Updated: May 15, 2024

Source Bricks

A Source Brick serves as an endpoint for data retrieval and can be used in multiple Flows to fetch information from different sources. This endpoint can be a URL, a database, or any other data source that can be accessed through an API or some other data retrieval mechanism.

There are two ways to retrieve data from a Source Brick:

  • Push: you can send data to a Source Brick using its block-id. This is useful when you have data that you want to send to the source for processing or storage.

  • Pull: the Source Brick can pull data from the endpoint by itself. This is useful when you want to periodically fetch data from the endpoint at specific intervals.

To create a Source Brick, you need to specify the endpoint from which you want to fetch data.

To build a Flow, the user starts by adding a Source Brick, which can be created and configured in the "Sources" menu.

These Bricks can only be used to send data.

Examples of Source Bricks

Http-endpoint

Within Raven, the HTTPS endpoint facilitates data import and understands various data formats such as JSON and XML. If you declare the format of the data you are sending in the Content-Type header (for example, application/json or application/xml), the endpoint validates that the incoming data is well-formed.

  • Example config of cURL with JSON validation:

     curl --header "Content-Type: application/json" \
    --request POST \
    --data '{"datafield1":"xyz","datafield2":"xyz"}' \
    https://{SOURCE_BRICK_ID}.ingest.dtact.com

    Upload large multi-line JSON:

    curl -H "Content-Type: application/json" -T ./results.json https://{SOURCE_BRIC_ID}.ingest.dtact.com

    Example multi-line JSON:

    cat ~/Downloads/exampleFile.json | jq '.Records[2000:2500][]' | \
    curl -X POST -H "Content-Type: application/json" \
    --data-binary @- "https://{SOURCE_BRICK_ID}.ingest.dtact.com"

    Example cURL with an XML file:

    curl -vv -X POST \
    -H 'Content-Type: application/xml' \
    --data @/tmp/nmap_example.xml \
    -k https://{SOURCE_BRICK_ID}.ingest.dtact.com:443/

    Example config of cURL with XML validation:

     curl --header "Content-Type: application/xml" \
    --request POST \
    --data '
    <?xml version="1.0" encoding="UTF-8"?>
    <note>
    <to>Brian</to>
    <from>Jani</from>
    <heading>Reminder</heading>
    <body>Hello World!</body>
    </note> ' \
    https://{SOURCE_BRICK_ID}.ingest.dtact.com

    Sending files to the HTTPS endpoint:

    curl -T file.bin https://{SOURCE_BRICK_ID}.ingest.dtact.com

Filebeat-endpoint

Filebeat is a lightweight tool that monitors log files, collects them, and ships them to their destination.

When you have Filebeat installed as an agent on your servers, you can tell it where to find the log files you want to send. Filebeat watches these locations for new log data and efficiently forwards it to Raven.

Filebeat is also very flexible. It ships with modules for many common log formats, such as those used by Apache, MySQL, and NGINX, and you can additionally configure your own custom paths.

For more information on these modules, see the link below:

Learn more about official Filebeat modules.

For more information on setting up your own custom paths, see the link below:

Learn more about custom paths configuration.

For more information on downloading and installing Filebeat, see the links below:

Learn more about Filebeat OSS download.

Learn more about Filebeat Installation.

Configuration:

As Raven is compatible with Logstash for data ingestion, you have the option to transmit data from Filebeat to Raven by configuring Filebeat to utilize the integrated Logstash output.

Example of how to configure the transmission of data from Filebeat to Raven:

#=========================== Filebeat inputs =============================
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/*.log

#============================= Filebeat modules ===============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading
  reload.enabled: false
  # Period on which files under path should be checked for changes
  #reload.period: 10s

#================================ Outputs =====================================
# Configure what output to use when sending the data collected by the beat.
output.logstash:
  hosts: ["{SOURCE_BRICK_ID}.ingest.dtact.com:5044"]
  ssl.enabled: true

S3-source

Amazon S3 is an object storage service on AWS. This Source Brick takes data from that storage and pulls it into the Flow.

To configure this Brick, some information about the particular S3 bucket, such as the bucket name, region, and access credentials, has to be filled in.

Python-source-ng

This Brick establishes a connection between a data source and the Python language. In essence, it enables you to retrieve data from a source by writing Python code while also allowing you to specify the parameters for data extraction.
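
As a rough illustration, a Python source script might look like the sketch below. The run entry point, the yield-based output convention, and the example URL are assumptions made for illustration, not the Brick's documented API.

    # Minimal sketch of a Python source script (hypothetical contract).
    import json
    import urllib.request

    def run():
        # Pull data from an external endpoint (placeholder URL).
        with urllib.request.urlopen("https://example.com/api/events") as resp:
            events = json.load(resp)
        # Emit each record into the Flow as a payload dictionary.
        for event in events:
            yield {"datafield1": event.get("id"), "datafield2": event.get("value")}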

Transform Bricks

These Bricks are used to modify, enrich, and transform data that is passed between actions. Some of the most common transformations are:

  • Value Conversion: This involves converting values, such as timestamps, to meet specific requirements.

  • Structural Modification: Transformations can change the arrangement of data, for instance, by flattening a nested structure.

  • Value Mapping: This transformation maps values to other values, for example, mapping numeric values like 1, 2, 3 to corresponding labels like "LOW," 4, 5, 6 to "MEDIUM," and so on.

  • Data Parsing: Parsing operations can be applied to extract relevant information from data, especially in scenarios like parsing a log line.

  • Filtering: Specific data elements can be filtered based on predefined criteria.

  • Data Enrichment: Transformations can enrich data by incorporating information from external sources. (Note: This may not be fully compatible with VRL.)

These Bricks can receive and send data.

Examples of Transform Bricks:

Python-transform-ng

This Brick transforms data using the Python language.
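
As a rough sketch, the value-mapping transformation described above could be written in Python as follows. The transform entry point name is an assumption for illustration, not the Brick's documented API.

    # Minimal sketch of a Python transform (hypothetical entry point).
    SEVERITY_LABELS = {1: "LOW", 2: "LOW", 3: "LOW",
                       4: "MEDIUM", 5: "MEDIUM", 6: "MEDIUM"}

    def transform(payload):
        # Map a numeric severity onto its label, as in the
        # value-mapping example above.
        payload["severity"] = SEVERITY_LABELS.get(payload.get("severity"), "UNKNOWN")
        return payload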

Vector Remap Language (VRL)

This Brick transforms data using the Vector Remap Language (VRL), an expression-oriented language designed for transforming structured event data.

Execute Bricks

Executions encompass a range of tasks, including but not limited to sending emails and executing API calls. Each action is tailored with specific inputs and outputs, providing a level of customization to align with the unique requirements of the Flow. Essentially, actions serve as the functional components that collaboratively contribute to the Flow's overall workflow.

These Bricks can only receive data.

One really important and innovative feature that Raven provides is "Scripting Bricks". This means that the user can use code to configure their own Bricks.

Scripting Bricks are written in Python or TypeScript, so you can easily port existing scripts inside your organization into Raven. Your scripts run directly inside a Flow, integrating the code you already have into a high-performance workflow. Scripts can be used directly inside Raven, or via a link to an external script.

Scripting Bricks support two languages: Python and TypeScript.

For Scripting Bricks, the Raven portal utilizes serialization.

When a Brick sends a message, it includes some data called the payload. Think of this payload as a bundle of information, usually organized like a dictionary. To send this data over communication channels, like a topic, it needs to be converted into a format that can be easily stored. Most of the time, the JSON format is used for this: it packages the data in a way that other components can understand.

So, when you have a standard dictionary and you want to convert it to and from JSON, you can use Python's built-in json module, like this:
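
    import json

    # Serialize the payload dictionary to a JSON string before it is
    # sent over a communication channel such as a topic.
    payload = {"datafield1": "xyz", "datafield2": "xyz"}
    encoded = json.dumps(payload)

    # Deserialize on the receiving side to get the original dictionary back.
    decoded = json.loads(encoded)
    assert decoded == payload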

Learn more about Serialization

Learn more about Deserialization

Examples of Execute Bricks:

s3-action

The AWS S3 connector Brick functions as an output connector, dispatching consumed messages to an AWS S3 bucket.

To use the AWS S3 Brick you need an AWS account with access and roles for the S3 bucket. In the Brick Inspector's configuration, fill in the parameters shown in the table below.

Parameter | Description | Required
AWS S3 Bucket | AWS S3 bucket to store messages in | yes
AWS S3 Endpoint | The AWS S3 endpoint for reaching the bucket | yes
AWS Access key ID | Access key ID with roles to access and store into the bucket | yes
AWS Secret access key | Secret access key belonging to the AWS Access key ID | yes
AWS Region | The region of the S3 bucket | yes
Max file size | Maximum allowed file size | no
Write interval | Time interval after which the file is written; default is 15m | no
Log level | Debugging log level | no

Python-action-ng

Brick for scripting an action with the Python language.
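
As a rough sketch, a Python action script could look like the example below, forwarding the received payload to an external HTTP API. The run entry point and the target URL are assumptions made for illustration, not the Brick's documented API.

    # Minimal sketch of a Python action (hypothetical contract).
    import json
    import urllib.request

    def run(payload):
        # Forward the received payload to an external API (placeholder URL).
        req = urllib.request.Request(
            "https://example.com/api/ingest",
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            return resp.status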

s3-extract

This Brick is an input connector for retrieving data from an AWS S3 bucket.

To use the AWS S3 Brick you need an AWS account with access and roles for the S3 bucket. In the Brick Inspector's configuration, fill in the parameters shown in the table below.

To retrieve data within a specific time-duration range, you can use the Since and Until parameters of the Brick.

Parameter | Description | Required
AWS S3 Bucket | AWS S3 bucket to retrieve messages from | yes
AWS S3 Endpoint | The AWS S3 endpoint for reaching the bucket | yes
AWS Access key ID | Access key ID with roles to access and read from the bucket | yes
AWS Secret access key | Secret access key belonging to the AWS Access key ID | yes
AWS Region | The region of the S3 bucket | yes
Since | Fetch data since last modified | no
Until | Fetch data until last modified | no
Log level | Debugging log level | no