Loading data into Azure Blob

This guide explains how to load data into Azure Blob for further analysis.

Prerequisites

Before you complete the procedure in this guide, perform all of the following actions:

  • Create a datastream whose data you want to load into Azure Blob. For more information on creating a datastream, see Creating a datastream.

  • Ensure you have login details for the destination with the following permissions:

    • Read, write, and delete files and folders.

    • List folders.

  • To connect to Azure Blob with a Service Principal account, ensure the Service Principal account has the Reader and Storage Blob Data Contributor roles. For more information on creating a Service Principal account and assigning roles, see the Azure Blob documentation. To verify these roles and permissions before you configure Adverity, see the sketch after this list.
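
To verify the permissions and roles above before configuring Adverity, you can run a short check with the Azure SDK for Python. The sketch below is an optional verification only, assuming the azure-identity and azure-storage-blob packages; the tenant ID, application ID, application secret, storage account name, and container name are hypothetical placeholders that you replace with your own values.

  # Optional pre-flight check: confirm the Service Principal can write, list,
  # read, and delete blobs in the destination container.
  from azure.identity import ClientSecretCredential
  from azure.storage.blob import BlobServiceClient

  credential = ClientSecretCredential(
      tenant_id="<tenant-id>",
      client_id="<application-id>",
      client_secret="<application-secret>",
  )

  # Use the storage account name only; the blob.core.windows.net suffix belongs to the URL.
  service = BlobServiceClient(
      account_url="https://<storage-account-name>.blob.core.windows.net",
      credential=credential,
  )
  container = service.get_container_client("<container-name>")

  blob = container.get_blob_client("adverity_connection_check.txt")
  blob.upload_blob(b"connection check", overwrite=True)                         # write
  print([b.name for b in container.list_blobs(name_starts_with="adverity_")])   # list
  print(blob.download_blob().readall())                                         # read
  blob.delete_blob()                                                            # delete

If any of these calls fails with an authorization error, review the role assignments before continuing.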

Procedure

To load data from a datastream into Azure Blob, follow these steps:

  1. Add Azure Blob as a destination to the workspace that contains the datastream or to one of its parent workspaces.

  2. Assign the Azure Blob destination to the datastream.

    You can assign as many destinations to a datastream as you want.

    Some destinations, such as HubSpot and Facebook Offline Conversions, require specific Data Mapping. If these Data Mapping requirements conflict, the destinations cannot be assigned to the same datastream.

  3. Configure load settings.

Adding Azure Blob as a destination

To add Azure Blob as a destination to a workspace, follow these steps:

  1. Go to the Destinations page.

  2. Click + Create destination.

  3. Search for and click Azure Blob.

  4. Choose how to authorize Adverity to access Azure Blob:

    • To use your details, click Access Azure Blob using your credentials.

    • To ask someone else to use their details, click Access Azure Blob using someone else's credentials.

      If you choose this option, the person you ask will need to complete the following steps to create the authorization.

  5. Click Next.

  6. Select one of the following options:

    • To connect to Azure Blob with the account name and access key, click File Azure. For more information on managing access keys, see the Azure Blob documentation.

    • To connect to Azure Blob with an SAS account, click File Azure (SAS). For more information on creating an SAS account, see the Azure Blob documentation.

    • To connect to Azure Blob with a Service Principal account, follow these steps:

      1. Click File Azure (Service Principal).

      2. In Client id, enter the application ID. For more information on getting the application ID, see the Azure Blob documentation.

      3. In Client secret, enter the application secret. For more information on creating an application secret, see the Azure Blob documentation.

      4. In Tenant id, enter the tenant ID. For more information on getting the tenant ID, see the Azure Blob documentation.

      5. In Storage Account Name, enter the name of the storage account. Do not include the blob.core.windows.net part.

      6. Click Authorize.

  7. On the Configuration page, fill in the following fields:

    Name

    (Optional) Rename the destination.

    Destination URL

    In the drop-down on the left, select the file server type. In the text field in the middle, enter the base URL of the file server. In the text field on the right, enter the path to the folder into which you want to load data. Click Test to check the authorization.

    Output format

    Select the data format that Adverity uses to load data into the destination.

    When you load data in AVRO file format, select AVRO to use the null codec, or AVRO (deflate) to use the deflate codec. For more information on codecs, see the Apache documentation. For a sketch of reading the loaded AVRO files, see the example after this procedure.

    For more information on advanced configuration settings, see Advanced File destination configuration.

  8. Click Create.
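
The Output format field above mentions the AVRO null and deflate codecs. As a rough sketch of how a downstream consumer might read a data extract that Adverity has loaded into the destination, the snippet below downloads a blob with the Azure SDK for Python and decodes it with the fastavro library, which detects either codec from the file header. The account URL, container name, blob path, and credential are hypothetical placeholders, and fastavro is only one of several AVRO readers you could use.

  import io

  from azure.storage.blob import BlobClient
  from fastavro import reader

  # Hypothetical location of a data extract loaded into the destination folder.
  blob = BlobClient(
      account_url="https://<storage-account-name>.blob.core.windows.net",
      container_name="<container-name>",
      blob_name="folder1/target_file.avro",
      credential="<access-key-or-sas-token>",
  )

  # Download the blob and iterate over the decoded AVRO records.
  payload = io.BytesIO(blob.download_blob().readall())
  for record in reader(payload):
      print(record)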

Assigning Azure Blob as a destination

To assign the Azure Blob destination to a datastream, follow these steps:

  1. Go to the Datastreams page.

  2. Open the chosen datastream by clicking on its name.

  3. In the Load section, click + Add destination.

  4. Select the Azure Blob checkbox in the list.

  5. Click Save.

  6. For automatically enabled destinations, a pop-up window appears. Click Yes, load data if you want to automatically load your previously collected data into the new destination. The following data extracts will be loaded:

    • All data extracts with the status collected if no other destinations are enabled for the datastream

    • All data extracts with the status loaded if the data extracts have already been sent to Adverity Data Storage or external destinations

    Alternatively, click Skip to continue configuring the destination settings or re-load the data extracts manually. For more information, see Re-loading a data extract.

Configuring settings for loading data into Azure Blob

To configure the settings for loading data into Azure Blob, follow these steps:

  1. Go to the Datastreams page.

  2. Open the chosen datastream by clicking on its name.

  3. In the Load section, find the Azure Blob destination in the list, and click Actions on the right.

  4. Click Destination settings.

  5. Fill in the following fields:

    Filename

    Specify the target file in the destination into which Adverity loads data from the datastream. The name can contain alphanumeric characters and underscores. For example, target_file.

    To load data into sub-folders within the folder defined in the Destination URL field, specify a file path. For example, folder1/target_file.

    By default, Adverity saves data from each datastream in a separate file named {datastream_type}_{datastream_id}_{scheduled_year}_{scheduled_month}_{scheduled_day}. For an illustration of how this pattern resolves, see the sketch after this procedure.

    If you specify the same target file for more than one datastream, the existing file will be overwritten with the new data.

    To create a new file in Azure Blob containing the data you load, enter a name for the new file in this field.

    You can use the following placeholders when creating new file names in the destination:

    • {app_label}: The data source's short name.

    • {datastream_id}: The datastream ID.

    • {datastream_type}: The data source.

    • {extension}: The file extension of the data extract.

    • {extract_id}: The data extract ID.

    • {id}: The datastream ID.

    • {meta[*]}: Replace * with a metadata placeholder to use metadata in the file name. For example, {meta[datastream_URI]} uses the datastream URI as the file name. For more information on metadata and placeholders, see Using placeholders.

    • {name}: The automatically generated filename of the data extract.

    • {scheduled_day}: The day when the data fetch was scheduled to run.

    • {scheduled_month}: The month when the data fetch was scheduled to run.

    • {scheduled_year}: The year when the data fetch was scheduled to run.

    • {upload_day}: The day when the data extract is loaded into the Azure Blob destination.

    • {upload_hour}: The hour when the data extract is loaded into the Azure Blob destination.

    • {upload_minute}: The minute when the data extract is loaded into the Azure Blob destination.

    • {upload_month}: The month when the data extract is loaded into the Azure Blob destination.

    • {upload_second}: The second when the data extract is loaded into the Azure Blob destination.

    • {upload_year}: The year when the data extract is loaded into the Azure Blob destination.

  6. Click Save.
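
To preview how a filename pattern resolves, you can substitute sample values into the placeholders yourself. The sketch below uses Python's built-in str.format with made-up values; it is only an illustration of the pattern, not Adverity's own substitution logic, which runs server-side when the data is loaded.

  # Default filename pattern from the Filename field above.
  pattern = "{datastream_type}_{datastream_id}_{scheduled_year}_{scheduled_month}_{scheduled_day}"

  # Hypothetical sample values; the real values come from the datastream and its schedule.
  sample = {
      "datastream_type": "facebook",
      "datastream_id": 12345,
      "scheduled_year": 2024,
      "scheduled_month": "06",
      "scheduled_day": "01",
  }

  print(pattern.format(**sample))
  # facebook_12345_2024_06_01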