
Configure access to Google BigQuery configure-fda-google-big-query

Use the Adobe Campaign Classic Federated Data Access (FDA) option to process information stored in an external database. Follow the steps below to configure access to Google BigQuery.

  1. Configure Google BigQuery on Windows or Linux
  2. Configure the Google BigQuery external account in Adobe Campaign Classic
  3. Set up Google BigQuery connector bulk load on Windows or Linux
NOTE
The Google BigQuery connector is available for hosted, hybrid, and on-premise deployments. For more on this, refer to this page.

Google BigQuery on Windows google-windows

Driver set up on Windows driver-window

  1. Download the .

  2. Configure the ODBC driver in Windows. For more on this, refer to .

  3. For the Google BigQuery connector to work, Adobe Campaign Classic requires the following parameters to connect:

    • Project: create or use an existing project.

      For more information, refer to this .

    • Service account: create a service account.

      For more information, refer to this .

    • Key File Path: the Service account requires a Key File for a Google BigQuery connection through ODBC.

      For more information, refer to this .

    • Dataset: a Dataset is optional for a generic ODBC connection. However, since every query must reference the dataset where the table is located, specifying a Dataset is mandatory for the Google BigQuery FDA connector in Adobe Campaign Classic.

      For more information, refer to this .

  4. In Adobe Campaign Classic, you can then configure your Google BigQuery external account. For more on how to configure your external account, refer to this section.

Bulk load set up on Windows bulk-load-window

NOTE
You need Python installed for Google Cloud SDK to work.
We recommend using Python 3, see this .

The Bulk Load utility allows faster data transfers, which is achieved through the Google Cloud SDK.

  1. Download the Windows 64-bit (x86_64) archive from this and extract it in the corresponding directory.

  2. Run the google-cloud-sdk\install.sh script and accept the setting of the PATH variable when prompted.

  3. After the installation, check that ...\google-cloud-sdk\bin has been added to the PATH environment variable. If not, add it manually.

  4. In the ..\google-cloud-sdk\bin\bq.cmd file, add the CLOUDSDK_PYTHON local variable, which redirects to the location of your Python installation.

    For example:
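    A minimal sketch of the variable to add in bq.cmd; the Python path below is an assumption, adjust it to your own installation:

    ```bat
    rem Hypothetical location of the Python 3 interpreter -- adjust to your setup
    SET CLOUDSDK_PYTHON=C:\Python310\python.exe
    ```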

  5. Restart Adobe Campaign Classic for the changes to be taken into account.

Google BigQuery on Linux google-linux

Driver set up on Linux driver-linux

Before setting up the driver, note that the script and commands must be run as the root user. It is also recommended to use the Google DNS 8.8.8.8 while running the script.

To configure Google BigQuery on Linux, follow the steps below:

  1. Before the ODBC installation, check that the following packages are installed on your Linux distribution:

    • For Red Hat/CentOS:

      code language-none
      yum update
      yum upgrade
      yum install -y grep sed tar wget perl curl
      
    • For Debian:

      code language-none
      apt-get update
      apt-get upgrade
      apt-get install -y grep sed tar wget perl curl
      
  2. Install the unixODBC driver manager:

    • For Red Hat/CentOS:

      code language-none
      # install unixODBC driver manager
      yum install -y unixODBC
      
    • For Debian:

      code language-none
      # install unixODBC driver manager
      apt-get install -y odbcinst1debian2 libodbc1 odbcinst unixodbc
      
  3. Before running the script, you can obtain more information by specifying the --help argument:

    code language-none
    cd /usr/local/neolane/nl6/bin/fda-setup-scripts
    ./bigquery_odbc-setup.sh --help
    
  4. Access the directory where the script is located and run the following script as root user:

    code language-none
    cd /usr/local/neolane/nl6/bin/fda-setup-scripts
    ./bigquery_odbc-setup.sh
    
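Once the script completes, the driver is registered with unixODBC. Conceptually, the resulting DSN entry resembles the sketch below; the driver path and key names are assumptions based on a typical Simba-based BigQuery ODBC driver and may differ with your driver version:

```ini
# Illustrative /etc/odbc.ini entry -- the setup script generates the real one
[bigquery]
Driver=/opt/simba/googlebigqueryodbc/lib/64/libgooglebigqueryodbc_sb64.so
OAuthMechanism=0
Email=my-sa@my-project.iam.gserviceaccount.com
KeyFilePath=/usr/local/neolane/nl6/var/my-key.json
Catalog=my-project
```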

Bulk load set up on Linux bulk-load-linux

NOTE
You need Python installed for Google Cloud SDK to work.
We recommend using Python 3, see this .

The Bulk Load utility allows faster data transfers, which is achieved through the Google Cloud SDK.

  1. Before installing the Google Cloud SDK, check that the following packages are installed on your Linux distribution:

    • For Red Hat/CentOS:

      code language-none
      yum update
      yum upgrade
      yum install -y python3
      
    • For Debian:

      code language-none
      apt-get update
      apt-get upgrade
      apt-get install -y python3
      
  2. Access the directory where the script is located and run the following script:

    code language-none
    cd /usr/local/neolane/nl6/bin/fda-setup-scripts
    ./bigquery_sdk-setup.sh
    

Google BigQuery external account google-external

You need to create a Google BigQuery external account to connect your Adobe Campaign Classic instance to your Google BigQuery external database.

  1. From the Adobe Campaign Classic Explorer, click Administration > Platform > External accounts.

  2. Click New.

  3. Select External database as your external account’s Type.

  4. To configure the Google BigQuery external account, you must specify:

    • Type: Google BigQuery

    • Service account: Email of your Service account. For more information on this, refer to .

    • Project: Name of your Project. For more information on this, refer to .

    • Key file Path:

      • Upload key file to the server: select Click here to upload if you choose to upload the key through Adobe Campaign Classic.

      • Enter manually the key file path: copy/paste your absolute path in this field if you choose to use a pre-existing key.

    • Dataset: Name of your Dataset. For more information on this, refer to .

The connector supports the following options:

  • ProxyType: Type of proxy used to connect to BigQuery through the ODBC and SDK connectors. HTTP (default), http_no_tunnel, socks4 and socks5 are currently supported.

  • ProxyHost: Hostname or IP address where the proxy can be reached.

  • ProxyPort: Port number the proxy is running on, e.g. 8080.

  • ProxyUid: Username used for the authenticated proxy.

  • ProxyPwd: Password for the ProxyUid user.

  • bqpath: Applicable to the bulk-load tool only (Cloud SDK). To avoid relying on the PATH variable, or if the google-cloud-sdk directory has to be moved to another location, this option lets you specify the exact path to the Cloud SDK bin directory on the server.

  • GCloudConfigName: Applicable starting with release 7.3.4 and to the bulk-load tool only (Cloud SDK). The Google Cloud SDK uses configurations to load data into BigQuery tables. The configuration named accfda stores the parameters for loading the data; this option allows users to specify a different name for the configuration.

  • GCloudDefaultConfigName: Applicable starting with release 7.3.4 and to the bulk-load tool only (Cloud SDK). The active Google Cloud SDK configuration cannot be deleted without first transferring the active tag to a new configuration. This temporary configuration is necessary to recreate the main configuration for loading data. The default name for the temporary configuration is default; this can be changed if needed.

  • GCloudRecreateConfig: Applicable starting with release 7.3.4 and to the bulk-load tool only (Cloud SDK). When set to false, the bulk loading mechanism refrains from attempting to recreate, delete, or modify the Google Cloud SDK configurations and instead proceeds with data loading using the existing configuration on the machine. This is valuable when other operations depend on Google Cloud SDK configurations. If the user enables this option without a proper configuration, the bulk loading mechanism issues the warning: No active configuration found. Please either create it manually or remove the GCloudRecreateConfig option. To prevent further errors, it then reverts to using the default ODBC Array Insert bulk loading mechanism.
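As an illustration of how such options are combined, the hypothetical Python helper below (not part of Adobe Campaign; the function name and values are invented for this example) renders a set of connector options into the semicolon-separated key=value shape that ODBC-style option strings generally take:

```python
def build_option_string(options):
    """Join connector options into a semicolon-separated key=value
    string, skipping options whose value is empty."""
    return ";".join(f"{key}={value}" for key, value in options.items() if value != "")

opts = {
    "ProxyType": "http_no_tunnel",
    "ProxyHost": "proxy.example.com",  # hypothetical proxy host
    "ProxyPort": "8080",
    "ProxyPwd": "",                    # empty values are omitted
    "GCloudConfigName": "accfda",      # default configuration name
}
print(build_option_string(opts))
# -> ProxyType=http_no_tunnel;ProxyHost=proxy.example.com;ProxyPort=8080;GCloudConfigName=accfda
```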