Loading data into Looker Studio (passive)#

This guide explains how to load data into Looker Studio (passive) for storage and further processing.

Introduction#

Looker Studio (passive) is a deprecated destination. Adverity recommends loading data into Looker Studio using the Looker Studio destination.

Note

Looker Studio (passive) is a passive destination.

When you assign Looker Studio (passive) to a datastream as a destination, it periodically pulls all data extracts with Collected or Loaded status from Adverity. When you collect data from the datastream, the data is not actively pushed to Looker Studio.

Prerequisites#

Before you complete the procedure in this guide, perform all of the following actions:

  • Create a datastream whose data you want to load into Destination Name. For more information on creating a datastream, see Collecting data in Adverity.

  • Apply the correct Data Mapping to your datastream.

  • Set up instance profiles for access to S3 buckets from Destination Name clusters. For more information, see the Databricks documentation.

  • Obtain an Access Key ID and Secret Access Key. Use an AWS policy file as you would for an AWS S3 destination.
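
    A minimal sketch of such a policy file, written out with Python's json module; the bucket name is a placeholder, and the action list is an assumption based on the access an S3-style destination typically needs:

      import json

      # Illustrative policy only; replace YOUR_BUCKET with your bucket name.
      policy = {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Effect": "Allow",
                  "Action": [
                      "s3:GetObject",
                      "s3:PutObject",
                      "s3:DeleteObject",
                      "s3:ListBucket",
                  ],
                  "Resource": [
                      "arn:aws:s3:::YOUR_BUCKET",
                      "arn:aws:s3:::YOUR_BUCKET/*",
                  ],
              }
          ],
      }

      with open("adverity-s3-policy.json", "w") as f:
          json.dump(policy, f, indent=2)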

  • In the Destination Name UI, enable the toggle in your CRM settings to import contacts, companies, and deals. For more information, see the Destination Name documentation.

  • Ensure the Facebook Ads account used to authorize the Destination Name destination is one of the following user types:

    • A business manager admin.

    • The system admin user who created the offline event set.

    • An admin on the account connected to the offline event set.

  • Ensure the Facebook Ads account used to authorize the Destination Name destination is one of the following user types:

    • A business manager admin.

    • The system admin user who created the audience list.

    • An admin on the account connected to the audience list.

  • Ensure you have login details to the destination with the following permissions:

    • Read, write, and delete files and folders.

    • List folders.

  • Create a database user in the Destination Name user interface. For more information, see the Microsoft documentation.

  • If this is the first time you are loading data into Destination Name, create a login profile with the username and password that you will use when adding Destination Name as a destination, and grant this login profile the required permissions. For more information, see the Advanced Microsoft SQL Server tips.
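
    For illustration, creating such a login profile from Python with pyodbc could look like the following sketch; the server, credentials, and role grants are placeholders, and the exact permissions to grant are described in the Advanced Microsoft SQL Server tips:

      import pyodbc

      # Connect to the target database as an admin; all names are placeholders.
      conn = pyodbc.connect(
          "DRIVER={ODBC Driver 18 for SQL Server};SERVER=your-server;"
          "DATABASE=your_database;UID=admin_user;PWD=admin_password",
          autocommit=True,
      )
      cur = conn.cursor()
      # Server-level login, a database user mapped to it, then read/write/DDL rights.
      cur.execute("CREATE LOGIN adverity_loader WITH PASSWORD = 'S3cure!Passw0rd'")
      cur.execute("CREATE USER adverity_loader FOR LOGIN adverity_loader")
      cur.execute("ALTER ROLE db_datareader ADD MEMBER adverity_loader")
      cur.execute("ALTER ROLE db_datawriter ADD MEMBER adverity_loader")
      cur.execute("ALTER ROLE db_ddladmin ADD MEMBER adverity_loader")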

  • (Recommended) For faster data processing with the bulk insert function, create a database master key in Destination Name. For more information, see the Microsoft documentation.

  • (Recommended) For faster data processing with the bulk insert function, set up Azure Blob storage for your workspace. For more information, see Setting up Storage for data extracts.

  • Add Adverity’s IP address to the whitelist of Destination Name. For more information, see the Microsoft documentation. To determine the IP address of your Adverity instance, contact Adverity Customer Support.

  • Create a dataset in Destination Name that is dedicated to data loaded in from Adverity.

  • Ensure that the Google account you use to connect to Destination Name includes the following permissions:

    • bigquery.jobs.get

    • bigquery.jobs.list

    • bigquery.jobs.create

    • bigquery.tables.get

    • bigquery.tables.getData

    • bigquery.tables.list

    • bigquery.datasets.get

    • bigquery.tables.create

    • bigquery.tables.update

    • bigquery.tables.updateData

    Alternatively, if your account does not include all of the permissions above, connect to Destination Name with a JSON service account key. For more information on creating a JSON service account key, see the Google documentation.
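
    One way to sanity-check the key is a quick read with the google-cloud-bigquery Python client; the key file path is a placeholder:

      from google.cloud import bigquery

      # Authenticate with the JSON service account key (placeholder path).
      client = bigquery.Client.from_service_account_json("service-account-key.json")

      # A simple read call to confirm the key authenticates and can reach BigQuery.
      for dataset in client.list_datasets():
          print(dataset.dataset_id)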

If you load large data sets into Destination Name (for example, larger than 50 GB), load the data as a batch operation. For more information on batch loading data into Destination Name, see the Google documentation. To use the batch load function, perform all of the following actions in addition to the prerequisites listed above (a minimal sketch of the underlying batch-load call follows this list):

  • Ensure that the Google account you use to connect to Destination Name includes the following permissions:

    • storage.objects.get

    • storage.objects.list

  • Ensure that the Google account you use to connect to Destination Name has permissions to write and delete files in your Google Cloud Storage.

  • Ensure that the account you use has access to both Destination Name and Google Cloud Storage. We recommend keeping Destination Name and Google Cloud Storage in the same project. For more information on projects, see the Google documentation.

  • Set up an authorization to Google Cloud Storage in your Adverity workspace. For more information, see Setting up an authorization to Google Cloud Storage (Service Account).

  • Set up a storage in Adverity using the Google Cloud Storage authorization. For more information, see Setting up Storage for data extracts. This storage does not have to be the storage used for the data extracts in your workspace.
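
For illustration, this is roughly what a batch load from Google Cloud Storage into Destination Name looks like with the google-cloud-bigquery Python client; Adverity performs this step for you, and all URIs and table names below are placeholders:

  from google.cloud import bigquery

  client = bigquery.Client.from_service_account_json("service-account-key.json")

  # Batch-load a CSV extract staged in Google Cloud Storage into a table.
  job_config = bigquery.LoadJobConfig(
      source_format=bigquery.SourceFormat.CSV,
      skip_leading_rows=1,   # skip the header row
      autodetect=True,       # infer the schema from the file
  )
  load_job = client.load_table_from_uri(
      "gs://your-bucket/extract.csv",
      "your-project.your_dataset.your_table",
      job_config=job_config,
  )
  load_job.result()  # block until the batch job completes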

  • To create tables in the Destination Name destination, ensure the account you use to connect to Destination Name has the USAGE privilege or a role with the same or higher privileges.

  • If you choose to use OAuth2 authorization, follow these steps. These steps are not necessary if you authorize Destination Name using your username and password or a private key.

    • Destination Name automatically uses your default role to create the authorization.

      Ensure that the Destination Name role that you want to use to connect to Destination Name is set as your default role before you create an OAuth2 authorization.

      Note

      When working with Destination Name roles, make sure that the role name is all in uppercase, for example USER_ROLE. A role name that is not in uppercase can cause errors.

    • Create a Destination Name security integration with the following statement to obtain the Client ID and Client secret values. This requires a role with ACCOUNTADMIN or higher privileges.

      CREATE SECURITY INTEGRATION {YOUR_INTEGRATION_NAME}
      TYPE = OAUTH
      ENABLED = TRUE
      OAUTH_CLIENT = CUSTOM
      OAUTH_CLIENT_TYPE = 'CONFIDENTIAL'
      OAUTH_REDIRECT_URI = 'https://oap.datatap.io/oauth2/callback/'
      OAUTH_ISSUE_REFRESH_TOKENS = TRUE
      BLOCKED_ROLES_LIST = ('SYSADMIN')
      ;
      

      For more information on using OAuth2 with Destination Name, see the Snowflake documentation.
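
      After the integration is created, the Client ID and Client secret can be read back with a Snowflake system function; a sketch using the snowflake-connector-python package, with the account and credentials as placeholders:

      import snowflake.connector

      # Connect with a role that may view the integration's secrets
      # (ACCOUNTADMIN or equivalent); all values are placeholders.
      conn = snowflake.connector.connect(
          account="your_account",
          user="your_user",
          password="your_password",
      )
      cur = conn.cursor()
      cur.execute(
          "SELECT SYSTEM$SHOW_OAUTH_CLIENT_SECRETS('YOUR_INTEGRATION_NAME')"
      )
      print(cur.fetchone()[0])  # JSON with OAUTH_CLIENT_ID and OAUTH_CLIENT_SECRET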

  • Ensure that the account you use to connect to Destination Name is an Adobe Analytics Service Account. For more information, see the Adobe Analytics documentation.

  • In the data extract, ensure the following columns are present (an illustrative row follows this list):

    • Ensure one of the following three User Identifier fields is included:

      • marketingCloudVisitorID

      • IPAddress

      • customerID.[customerIDType].id

    • Ensure one of the following four Page Identifier fields is included:

      • pageURL

      • pageName

      • pe

      • queryString

    • Ensure the following fields are included in the data extract:

      • timestamp

      • userAgent

      • customerID.[customerIDType].isMCSeed (this is only required if customerID.[customerIDType].id is selected above in the User Identifier field).
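
    For illustration, a row that satisfies these requirements could look like the following sketch; all values are made up:

      # One User Identifier, one Page Identifier, plus the required fields.
      hit = {
          "marketingCloudVisitorID": "79546034348543069647332609087687092731",
          "pageURL": "https://www.example.com/products",
          "timestamp": "1696233600",
          "userAgent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
      }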

  • (Optional) If you load events data into Adobe Analytics, the data must be in a specific format. Create a transformation script that puts the events data into the correct format. For more information, see Transformation script for Adobe Analytics events.

  • Ensure the data extracts of the datastream contain at least one field that can be used to identify users or companies.

    If the data is about users, the identifier fields must be at least one of the following:

    • Email

    • Apple ID

    • Google Advertising ID

    • Google User ID

    If the data is about companies, the identifier fields must be at least one of the following:

    • Company name

    • Company domain

    • Company website

    • URL of the company’s LinkedIn page

    Adverity encrypts the identifier fields to ensure that your data is anonymized.

  • Ensure that the data extracts of the datastream from which you want to load data into Destination Name contain the following fields, and that they conform to the following standards:

Amount

  • There are no currency symbols.

  • There are no negative numbers.

  • The field is not left blank.

  • If the Amount field value you want to load into LinkedIn Offline Conversions is a number with decimals, the value has a maximum of two decimal places. For example, 2000.02 or 2,000.02.

  • The field’s value is not 0.

  • If you do not track the amount, populate this field with the value 1.

Currency

The currency is in a standard ISO format, for example CAD, USD, or EUR.

EventType

  • The values are in capital letters, for example PURCHASE. You can use the convert or convertall function to convert the values to capital letters.

  • The EventType field matches one of the selected conversion types in the destination settings.

Timestamp

The value is in epoch milliseconds. If your data source records timestamps in seconds, append “000” to the end of the seconds value to convert it to milliseconds (see the sketch after this list).
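
A minimal sketch of the seconds-to-milliseconds conversion described under Timestamp:

  # Converting epoch seconds to milliseconds is a multiplication by 1000,
  # which is the same as appending "000" to the seconds value.
  def to_epoch_ms(epoch_seconds: int) -> int:
      return epoch_seconds * 1000

  print(to_epoch_ms(1696233600))  # 1696233600000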

  • If your data extracts contain any of the following fields, make sure that they conform to the following length requirements:

    • The First Name and Last Name fields must be a maximum of 35 characters.

    • The Company and Title fields must be a maximum of 50 characters.

    • The Country field must be an ISO standard two-letter code, for example DE for Germany.

  • Ensure that the data extracts of the datastream contain either the Email field, or both the First Name and Last Name fields.

    Adverity hashes the email identifier field to ensure that your data is anonymized. The first and last name identifiers are not hashed because they must be sent to Destination Name in their original form.
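
    For illustration only, since Adverity applies the hashing itself; a sketch assuming a SHA-256 digest over a normalized address:

      import hashlib

      # Normalize, then hash; the exact normalization used is an assumption here.
      email = "jane.doe@example.com"
      hashed = hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()
      print(hashed)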

  • Create a Data Set in Destination Name. For more information, see the Google Analytics documentation.

  • Before you load data into Destination Name, make a note of the schema to configure the destination. For more information on getting the schema, see Finding the Google Analytics schema.

  • Ensure you can log in to Destination Name.

  • Ensure your SAP account has at least Admin privileges.

  • Determine Adverity’s IP address and whitelist it in Destination Name. To find out Adverity’s IP address, contact Adverity Customer Support. For more information on whitelisting the IP address, see the SAP documentation.

  • Ensure the data extracts of the datastream contain a field that can be used to identify users. This field must be one of the following:

    • Email

    • Phone number

    • Mobile advertising ID

    If you use phone numbers, ensure the values include the international dialing code (see the sketch below).

    Adverity encrypts the identifier fields to ensure that your data is anonymized.
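
    A sketch of adding a missing international dialing code before loading; the default prefix is an assumption and depends on your market:

      # Prepend a dialing code when one is missing (E.164-style values).
      def normalize_phone(number: str, default_prefix: str = "+44") -> str:
          digits = number.strip().replace(" ", "").replace("-", "")
          return digits if digits.startswith("+") else default_prefix + digits

      print(normalize_phone("20 7946 0958"))  # +442079460958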

  • Find the address of the Destination Name database into which you want to load data.

  • Set up an Amazon S3 bucket as storage for Destination Name. For more information, see Setting up Storage for data extracts.

  • Find the region code of your S3 bucket. The region code determines which database to enter when setting up an authorization to Destination Name. For more information on finding your S3 bucket region code, see the AWS documentation.

  • Ensure your AWS account has the required Amazon S3 permissions. For more information, see Configuring AWS policies.

  • To connect to Azure Blob with a Service Principal account, ensure the Service Principal account has the roles Reader and Storage Blob Data Contributor. For more information on creating a Service Principal account, see the Azure Blob documentation. For more information on assigning roles, see the Azure Blob documentation.

  • Ensure the account you use to connect to Destination Name is assigned to the Standard or Admin role.

  • In the Google Ads user interface, create a Linked Account between Adverity and Google. For more information on how to create this link, see the Google documentation. Once you have set up this link, you can then select the Customers to which you want to load data.

  • Before loading data into Destination Name, make sure the collected data in your data extracts is not hashed. Adverity automatically applies hashing to your data extracts before your data is loaded into Destination Name. If the collected data is already hashed, then applying additional hashing to the data extracts will result in an error when loading your data into Destination Name.
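
    A heuristic sketch for spotting already-hashed values before loading, assuming SHA-256-style 64-character hex digests:

      import re

      # A 64-character hex string usually indicates an already-hashed value.
      def looks_hashed(value: str) -> bool:
          return re.fullmatch(r"[0-9a-f]{64}", value.strip().lower()) is not None

      print(looks_hashed("jane.doe@example.com"))  # False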

Limitations#

Loading data into Looker Studio comes with the following limitations:

Procedure#

To load data from a datastream into Looker Studio, follow these steps:

  1. Go to the Datastreams page.

  2. Open the chosen datastream by clicking on its name.

  3. In the top navigation panel, click Local Data Retention.

  4. In Extract Filenames, select Unique by day.

  5. In Key Date Column, select the date column in your datastream.

  6. In the top navigation panel, click Overview.

  7. In the Destinations section, click + Add Destination.

  8. Click Passive Destination.

  9. Click Open in Looker Studio.

  10. Log in to Looker Studio.

Troubleshooting#

Reducing the size of data extracts#

To reduce the size of data extracts, use many small fetches instead of one big fetch.

To achieve this, perform the following actions (a sketch follows the list):

  • Use shorter time ranges in fetches.

  • Filter in your fetches for business entities such as agencies, advertisers, and accounts.
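
For illustration, splitting a long period into short fetch windows could look like the following sketch; the week-long window length is an arbitrary example:

  from datetime import date, timedelta

  # Yield consecutive week-long windows covering the requested period.
  def fetch_windows(start: date, end: date, days: int = 7):
      current = start
      while current <= end:
          window_end = min(current + timedelta(days=days - 1), end)
          yield current, window_end
          current = window_end + timedelta(days=1)

  for window_start, window_end in fetch_windows(date(2024, 1, 1), date(2024, 3, 31)):
      print(window_start, window_end)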

Loading data of multiple datastreams into Looker Studio#

To load the data of multiple datastreams into Looker Studio, follow these steps:

  1. Combine the datastreams into a Bundle.

  2. Follow the procedure explained above in this guide, using the Bundle as your datastream.