Query Service and Data Distiller frequently asked questions
This document answers frequently asked questions about Query Service and Data Distiller. It also covers error codes commonly seen while using the Queries product for data validation or for writing transformed data back to the data lake. For questions and troubleshooting related to other ۶Ƶ Experience Platform services, refer to the Experience Platform troubleshooting guide.
To clarify how Query Service and Data Distiller work together within ۶Ƶ Experience Platform, here are two foundational questions.
What is the relationship between Query Service and Data Distiller?
Query Service and Data Distiller are distinct, complementary components that provide specific data querying capabilities. Query Service is designed for ad hoc queries to explore, validate, and experiment with ingested data without altering the data lake. In contrast, Data Distiller focuses on batch queries that transform and enrich data, with results stored back into the data lake for future use. Batch queries in Data Distiller can be scheduled, monitored, and managed, supporting deeper data processing and manipulation that Query Service alone does not facilitate.
Together, Query Service facilitates rapid insights, while Data Distiller enables in-depth, persistent data transformations.
What is the difference between Query Service and Data Distiller?
Query Service: Used for SQL queries focused on data exploration, validation, and experimentation. Outputs are not stored in the data lake, and execution time is limited to 10 minutes. Ad hoc queries are suited for lightweight, interactive data checks and analyses.
Data Distiller: Enables batch queries that process, clean, and enrich data, with results stored back in the data lake. These queries support longer execution (up to 24 hours) and additional features like scheduling, monitoring, and accelerated reporting. Data Distiller is ideal for in-depth data manipulation and scheduled data processing tasks.
See the Query Service packaging document for more detailed information.
Question categories categories
The answers to frequently asked questions are divided into the following categories:
General Query Service questions general
This section includes information on performance, limits, and processes.
Can I turn off the auto-complete feature in the Query Service Editor?
Why does the Query Editor sometimes become slow when I type in a query?
Can I use Postman for the Query Service API?
Is there a limit to the maximum number of rows returned from a query through the UI?
Can I use queries to update rows?
Is there a data size limit for the resulting output from a query?
How do I bypass the limit on the output number of rows from a SELECT query?
To bypass the output row limit, apply LIMIT 0 in the query.
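For example, a minimal sketch (the dataset name is hypothetical):

```sql
-- LIMIT 0 bypasses the default output row limit (dataset name is hypothetical)
SELECT * FROM example_dataset LIMIT 0;
```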
How do I stop my queries from timing out in 10 minutes?
If your queries time out, one or more of the following solutions are recommended:
- Convert the query to a CTAS query and schedule the run. Scheduling a run can be done either through the UI or the API.
- Execute the query on a smaller data chunk by applying additional filters.
- Execute the EXPLAIN command to gather more details.
- Review the statistics of the data within the dataset.
- Convert the query into a simplified form and re-run using prepared statements.
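As an illustration of the first suggestion, a long-running SELECT can be wrapped in a CTAS query and scheduled instead of being run ad hoc; the table and dataset names below are hypothetical:

```sql
-- Persist the long-running SELECT as a new dataset rather than running it interactively
CREATE TABLE validated_output AS (
  SELECT *
  FROM example_dataset
  WHERE id IS NOT NULL
);
```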
Is there any issue or impact on Query Service performance if multiple queries run simultaneously?
Can I use reserved keywords as a column name?
Certain reserved keywords, such as ORDER, GROUP BY, WHERE, and DISTINCT, cannot be used directly as column names. If you want to use these keywords, then you must escape these columns.
How do I find a column name from a hierarchical dataset?
The following steps describe how to display a tabular view of a dataset through the UI, including all nested fields and columns in a flattened form.
- After logging in to Experience Platform, select Datasets in the left navigation of the UI to navigate to the Datasets dashboard.
- The datasets Browse tab opens. You can use the search bar to refine the available options. Select a dataset from the list displayed.
- The Datasets activity screen appears. Select Preview dataset to open a dialog showing the XDM schema and a tabular view of flattened data from the selected dataset. More details can be found in the preview a dataset documentation.
- Select any field from the schema to display its contents in a flattened column. The name of the column is displayed above its contents on the right side of the page. You should copy this name to use for querying this dataset.
See the documentation for full guidance on how to work with nested data structures using the Query Editor or a third-party client.
How do I speed up a query on a dataset that contains arrays?
Why is my CTAS query still processing after many hours for only a small number of rows?
There can be any number of reasons for a query to become stuck while processing. Determining the exact cause requires an in-depth analysis on a case-by-case basis. If your query has taken a long time on a very small dataset, contact ۶Ƶ customer support to begin this process.
How do I contact ۶Ƶ customer support? customer-support
Contact information for ۶Ƶ customer support is available on the ۶Ƶ help page. Alternatively, help can be found online by completing the following steps:
- Navigate to the ۶Ƶ Experience League website in your web browser.
- On the right side of the top navigation bar, select Sign In.
- Sign in with the ۶Ƶ ID and password that are registered with your ۶Ƶ license.
- Select Help & Support from the top navigation bar.
A dropdown banner appears containing a Help and support section. Select Contact us to open the ۶Ƶ Customer Care Virtual Assistant, or select Enterprise support for dedicated help for large organizations.
How do I implement a sequential series of jobs, without executing subsequent jobs if the previous job does not complete successfully?
The anonymous block feature allows you to chain one or more SQL statements that are executed in sequence, with optional exception handling.
See the anonymous block documentation for more details.
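As a rough sketch of an anonymous block that chains two statements with exception handling (table and dataset names are hypothetical; see the anonymous block documentation for the exact syntax):

```sql
$$
BEGIN
  -- Statements run in sequence; a failure stops the subsequent statements
  CREATE TABLE step_one AS SELECT * FROM example_dataset WHERE id IS NOT NULL;
  CREATE TABLE step_two AS SELECT id FROM step_one LIMIT 100;
EXCEPTION WHEN OTHER THEN
  -- Handle the failure of any statement in the chain
  SELECT 'ERROR';
END
$$;
```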
How do I implement custom attribution in Query Service?
There are two ways to implement custom attribution:
- Use a combination of existing ۶Ƶ-defined functions to check whether your use-case needs are met.
- If the previous suggestion does not satisfy your use case, use a combination of window functions. Window functions look at all the events in a sequence, allow you to review historical data, and can be used in any combination.
Can I templatize my queries so that I can easily re-use them?
How do I retrieve error logs for a query? error-logs
To retrieve error logs for a specific query, you must first use the Query Service API to fetch the query log details. The HTTP response contains the query IDs that are required to investigate a query error.
Use the GET command to retrieve multiple queries. Information on how to make a call to the API can be found in the sample API calls documentation.
From the response, identify the query you want to investigate and make another GET request using its id
value. Full instructions can be found in the retrieve a query by ID documentation.
A successful response returns HTTP status 200 and contains the errors array.
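The errors array in such a response is shaped roughly as follows (the field names and values here are illustrative, not an exact API contract):

```json
{
  "state": "FAILED",
  "errors": [
    {
      "code": "58000",
      "message": "An error message describing why the query failed"
    }
  ]
}
```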
The Query Service API reference provides more information on all available endpoints.
What does “Error validating schema” mean?
The “Error validating schema” message means that the system is unable to locate a field within the schema. You should read the best practice document for organizing data assets in Query Service followed by the Create Table As Select documentation.
A CTAS query that uses a struct datatype can resolve this kind of error.
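A sketch of CTAS combined with a struct datatype, using hypothetical dataset and field names:

```sql
-- Wrap related fields into a struct so the schema validates as a nested object
CREATE TABLE new_dataset AS (
  SELECT struct(src.first_name AS firstName, src.city AS city) AS person
  FROM example_dataset src
);
```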
How do I quickly process the new data coming into the system every day?
The SNAPSHOT clause can be used to incrementally read data on a table based on a snapshot ID. This is ideal for use with the incremental load design pattern, which only processes information in the dataset that has been created or modified since the last load execution. As a result, it increases processing efficiency and can be used with both streaming and batch data processing.
Why is there a difference between the numbers shown in the Profile UI and the numbers calculated from the profile export dataset?
The numbers displayed in the profile dashboard are accurate as of the last snapshot. The numbers generated in the profile export table are dependent entirely on the export query. As a result, querying the number of profiles that qualify for a particular audience is a common cause for this discrepancy.
NOTE: Querying includes historical data, whereas the UI only displays the current profile data.
Why did my query return an empty subset, and what should I do?
The most likely cause is that your query is too narrow in scope. You should systematically remove a section of the WHERE
clause until you begin seeing some data.
You can also confirm that your dataset contains data by running a small query against it.
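For example (the dataset name is hypothetical):

```sql
-- A quick row count confirms whether the dataset contains any data at all
SELECT count(1) FROM example_dataset;
```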
Can I sample my data?
What helper functions are supported by Query Service?
Are all native Spark SQL functions supported or are users restricted to only the wrapper Spark SQL functions provided by ۶Ƶ?
Can users define their own user defined functions (UDF) that can be used across other queries?
What should I do if my scheduled query fails?
First, check the logs to find out the details of the error. The FAQ section on finding errors within logs provides more information on how to do this.
You should also check the documentation for guidance on how to perform scheduled queries in the UI and through the API.
Be aware that when using the Query Editor, you can only add a schedule to a query that has already been created and saved. This does not apply to the Query Service API.
What does the “Session Limit Reached” error mean?
How does the query log handle queries relating to a deleted dataset?
How can I get only the metadata for a query?
You can run a query that returns zero rows to get only the metadata in response.
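For example, a sketch with a hypothetical dataset name:

```sql
-- The impossible predicate returns zero rows, so only column metadata comes back
SELECT * FROM example_dataset WHERE 1 = 0;
```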
How can I quickly iterate on a CTAS (Create Table As Select) query without materializing it?
You can create temporary tables to quickly iterate and experiment on a query before materializing it for use. You can also use temporary tables to validate if a query is functional.
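For example, a sketch using hypothetical dataset and column names — first create the temporary table, then query it to validate the results:

```sql
-- Create a temporary table from the query under development
CREATE TEMP TABLE temp_dataset AS (
  SELECT id, name FROM example_dataset WHERE name IS NOT NULL
);

-- Inspect the temporary table to validate the query logic before materializing it
SELECT * FROM temp_dataset;
```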
How do I change the time zone to and from a UTC Timestamp?
۶Ƶ Experience Platform persists data in UTC (Coordinated Universal Time) timestamp format. An example of the UTC format is 2021-12-22T19:52:05Z.
Query Service supports built-in SQL functions to convert a given timestamp to and from UTC format. Both the to_utc_timestamp()
and the from_utc_timestamp()
methods take two parameters: timestamp and timezone.
| Parameter | Description |
| --- | --- |
| Timestamp | The timestamp can be written in either UTC format or simple {year-month-day} format. If no time is provided, the default value is midnight on the morning of the given day. |
| Timezone | The timezone is written in a {continent/city} format. It must be one of the recognized timezone codes. |
Convert to the UTC timestamp
The to_utc_timestamp() method interprets the given timestamp as a local time in the supplied timezone and converts it to the equivalent UTC timestamp. For example, the time zone in Seoul, South Korea is UTC/GMT +9 hours. If you provide a date-only timestamp, the method uses the default value of midnight. That local midnight in Seoul converts to 3PM on the previous day in UTC, as Seoul is nine hours ahead.
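A sketch using a hypothetical date:

```sql
-- Midnight (the default) on 2021-08-31 in Seoul local time, converted to UTC
SELECT to_utc_timestamp('2021-08-31', 'Asia/Seoul');
-- Nine hours behind Seoul: 3PM on 2021-08-30 in UTC
```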
As another example, if the given timestamp was 2021-07-14 12:40:00.0
for the Asia/Seoul
timezone, the returned UTC timestamp would be 2021-07-14 03:40:00.0
The console output provided in the Query Service UI uses a more human-readable format.
Convert from the UTC timestamp
The from_utc_timestamp() method interprets the given timestamp as UTC and converts it to the equivalent local time in the timezone passed as a parameter. For example, for a UTC timestamp of 2:40PM and the Asia/Seoul timezone, the method returns 11:40PM, as Seoul is nine hours ahead of UTC.
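A sketch using a hypothetical timestamp:

```sql
-- A 2:40PM UTC timestamp converted to Seoul local time (UTC+9)
SELECT from_utc_timestamp('2021-08-31 14:40:00', 'Asia/Seoul');
-- Returns 11:40PM on the same day, nine hours ahead
```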
How should I filter my time-series data?
When querying time-series data, use the timestamp filter whenever possible for more accurate analysis.
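An example of a timestamp filter, with hypothetical dataset and date values:

```sql
-- Restrict the scan to a one-week window of time-series data
SELECT *
FROM example_events
WHERE timestamp >= to_timestamp('2022-07-01')
  AND timestamp < to_timestamp('2022-07-08');
```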
How do I correctly use the CAST
operator to convert my timestamps in SQL queries?
When using the CAST
operator to convert a timestamp, you need to include both the date and time.
For example, casting a date-only value can result in an error, whereas including both the date and the time succeeds.
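A sketch of both forms, with a hypothetical dataset name:

```sql
-- Missing the time component can result in an error:
SELECT * FROM example_dataset
WHERE timestamp = CAST('2021-07-29' AS timestamp);

-- Including both the date and the time succeeds:
SELECT * FROM example_dataset
WHERE timestamp = CAST('2021-07-29 00:00:00' AS timestamp);
```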
Should I use wildcards, such as * to get all the rows from my datasets?
Should I use NOT IN
in my SQL query?
The NOT IN
operator is often used to retrieve rows that are not found in another table or SQL statement. This operator can slow down performance and may return unexpected results if the columns being compared accept NULL values, or when you have large numbers of records.
Instead of using NOT IN
, you can use either NOT EXISTS
or LEFT OUTER JOIN
.
For example, suppose you have a customers table and an orders table, and you want the customers that have no matching orders. Instead of filtering with NOT IN, you can use NOT EXISTS with a correlated subquery, or a LEFT OUTER JOIN with an IS NULL filter on the join key.
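A sketch of both alternatives, using hypothetical customers and orders tables to find customers without orders:

```sql
-- NOT EXISTS with a correlated subquery
SELECT c.id
FROM customers c
WHERE NOT EXISTS (
  SELECT 1 FROM orders o WHERE o.customer_id = c.id
);

-- LEFT OUTER JOIN filtered to unmatched rows
SELECT c.id
FROM customers c
LEFT OUTER JOIN orders o ON o.customer_id = c.id
WHERE o.customer_id IS NULL;
```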
Can I create a dataset using a CTAS query with a double underscore name like those displayed in the UI? For example: test_table_001
.
How many concurrent queries can you run at a time?
Is there an activity dashboard where you can see query activities and status?
Is there any way to roll back updates? For example, if there is an error or some calculations need reconfiguring when writing data back to Platform, how should that scenario be handled?
How can you optimize queries in ۶Ƶ Experience Platform?
The system does not have indexes as it is not a database but it does have other optimizations in place tied to the data store. The following options are available to tune your queries:
- A time-based filter on time-series data.
- Optimized push down for the struct data type.
- Optimized cost and memory push-down for arrays and map data types.
- Incremental processing using snapshots.
- A persisted data format.
Can logins be restricted to certain aspects of Query Service or is it an “all or nothing” solution?
Can I restrict what data Query Service can use, or does it simply access the entire ۶Ƶ Experience Platform data lake?
What other options are there for restricting the data that Query Service can access?
There are three approaches to restricting access. They are as follows:
- Use SELECT-only statements and give datasets read-only access. Also, assign the Manage Queries permission.
- Use SELECT/INSERT/CREATE statements and give datasets write access. Also, assign the Manage Queries permission.
- Use an integration account with the previous suggestions and assign the Query Integration permission.
Once the data is returned by Query Service, are there any checks that can be run by Platform to ensure that it hasn’t returned any protected data?
- Query Service supports attribute-based access control. You can restrict access to data at the column/leaf level and/or the struct level. See the documentation to learn more about attribute-based access control.
Can I specify an SSL mode for the connection to a third-party client? For example, can I use ‘verify-full’ with Power BI?
Do we use TLS 1.2 for all connections from Power BI clients to Query Service?
Does a connection made on port 80 still use https?
Can I control access to specific datasets and columns for a particular connection? How is this configured?
Does Query Service support the “INSERT OVERWRITE INTO” command?
How frequently is the usage data on the license usage dashboard updated for Data Distiller Compute Hours?
Can I use the CREATE VIEW command without Data Distiller access?
Yes. You can use the CREATE VIEW command without Data Distiller access. This command provides a logical view of data but does not write it back to the data lake.
Can I use anonymous blocks in DbVisualizer?
Data Distiller data-distiller
How is Data Distiller’s license usage tracked and where can I see this information?
What is a Compute Hour?
How are Compute Hours measured?
Why do I sometimes notice a variation in Compute Hour consumption even when I run the same query consecutively?
Is it normal to notice a reduction in Compute Hours when I run the same query using the same data over a long period of time? Why might this be happening?
Queries UI
The “Create query” screen is stuck on “Initializing connection…” when trying to connect to Query Service. How do I fix this issue?
Dataset Samples
Can I create samples on a system dataset?
Exporting data exporting-data
This section provides information on exporting data and limits.
Is there a way to extract data from Query Service after query processing and save the results in a CSV file? export-csv
Yes. Data can be extracted from Query Service and there is also the option to store the results in CSV format via a SQL command.
There are two ways to save the results of a query when using a PSQL client. You can use the COPY TO
command or create a statement that writes the results out in CSV format.
Guidance on the use of the COPY TO
command can be found in the SQL syntax reference documentation.
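As a rough, hypothetical sketch of such a statement (the table name and output path are invented; check the SQL syntax reference for the exact supported form):

```sql
-- Write the first 100 rows of a hypothetical dataset out in CSV format
COPY (SELECT * FROM example_dataset LIMIT 100)
TO '/your/output/folder'
WITH (format 'csv');
```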
Can I extract the content of the final dataset that has been ingested through CTAS queries (assuming these are larger quantities of data such as Terabytes)?
Why is the Analytics data connector not returning data?
A common cause of this problem is querying time-series data without a time filter. Always include a timestamp-based WHERE clause when querying a time-series dataset.
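For example, a query without a time filter versus the same query with one (names and dates are hypothetical):

```sql
-- Without a time filter, the query scans the whole dataset and may return no data:
SELECT * FROM example_analytics_dataset;

-- With a time filter:
SELECT *
FROM example_analytics_dataset
WHERE timestamp >= to_timestamp('2022-07-01')
  AND timestamp < to_timestamp('2022-07-08');
```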
SQL Syntax
Is MERGE INTO supported by Data Distiller or Query Service?
ITAS Queries
What are ITAS queries?
Third-party tools third-party-tools
This section includes information on the use of third-party tools such as PSQL and Power BI.
Can I connect Query Service to a third-party tool?
Is there a way to connect Query Service once for continuous use with a third-party tool?
Why are my non-expiring credentials not working?
A likely cause is that the password value is formed incorrectly. The password combines the technicalAccountID and the credential taken from the configuration JSON file, and takes the form: {{technicalAccountId}:{credential}}. See the documentation for more information on how to connect to external clients with credentials.
What kind of third-party SQL editors can I connect to Query Service Editor?
Can I connect the Power BI tool to Query Service?
Why do the dashboards take a long time to load when connected to Query Service?
When the system is connected to Query Service, it is connected to an interactive or batch processing engine. This can result in longer loading times to reflect the processed data.
If you would like to improve the response times for your dashboards, you should implement a Business Intelligence (BI) server as a caching layer between Query Service and BI tools. Generally, most BI tools have an additional offering for a server.
The purpose of adding the caching layer is to store results from Query Service and serve them to dashboards to speed up responses. The results of executed queries are cached in the BI server each day, and the server then serves these cached results to any user who runs the same query, decreasing latency. Refer to the documentation of the utility or third-party tool that you are using for clarification on this setup.
Is it possible to access Query Service using the pgAdmin connection tool?
PostgreSQL API errors postgresql-api-errors
The following table provides PSQL error codes and their possible causes.
Commonly seen causes include: the client requesting AuthenticationCleartextPassword, an error in a CREATE TABLE statement, and an invalid dbName (please check the dbName value in your connection string).
Why did I receive a 58000 error code when using the history_meta() method on my table?
The history_meta() method is used to access a snapshot from a dataset. Previously, if you ran a query on an empty dataset in Azure Data Lake Storage (ADLS), you would receive a 58000 error code saying that the data set does not exist. This error occurred because there was no return value for the query. This behavior has now been fixed so that the query returns an informative message instead of an error.
REST API errors rest-api-errors
The following table provides HTTP error codes and their possible causes.