SQL preview

Use the SQL preview feature to quickly analyze structured datasets. SQL preview provides an SQL "scratchpad" where you can run read-only SQL queries, and offers the following features:

  • Autocompletion of dataset schema and column names
  • Search for other datasets within backticks (`) to perform efficient JOIN queries
  • Editor-friendly features, such as keyboard shortcuts to run a highlighted query
  • A preview table showing the results of the executed SQL query
  • Resizable columns and bottom panel to fit your preferences

The SQL preview feature in a dataset preview.
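For example, a simple read-only query might look like the following sketch (the `flights` dataset and its columns are hypothetical):

    -- Count departures per origin airport, most active first
    SELECT origin, COUNT(*) AS departures
    FROM `flights`
    WHERE year = 2023
    GROUP BY origin
    ORDER BY departures DESC;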

Analyze your SQL data

Follow the steps below to use the SQL preview feature.

  1. Navigate to any tabular dataset.
  2. Select SQL preview from the lower left corner of the screen to open the adjustable preview panel.
  3. In the Code tab, write any read-only query on the dataset (including a join).

The code editor tab in SQL preview.

  • You can search for a dataset to join by entering its name within backticks (`). A dropdown list of datasets matching that name will appear; see the example join below.

    Running a search for the "titanic" dataset.

  • Hovering over any dataset within the dropdown list will show you the full resource name and file path.

Hovering over the "titanic" dataset name to view the resource path of the dataset.
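For example, a join that references a second dataset by its backticked name might look like the following sketch (the `cabins` dataset and the column names are hypothetical):

    -- Join passenger records with cabin details on a shared key
    SELECT p.name, p.survived, c.deck
    FROM `titanic` AS p
    JOIN `cabins` AS c
      ON p.cabin_id = c.cabin_id;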

  4. Run a query either with the Run button above the editor or with the Cmd + Enter (macOS) or Ctrl + Enter (Windows) keyboard shortcut. Only one query can run at a time; if the editor contains multiple queries, highlight the one you want to run.

A highlighted query to run in SQL preview.
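For example, if the scratchpad contains two queries like the sketch below (the column names are hypothetical), highlight the one you want before running:

    -- Query 1: total row count
    SELECT COUNT(*) FROM `titanic`;

    -- Query 2: survival breakdown
    SELECT survived, COUNT(*) AS n
    FROM `titanic`
    GROUP BY survived;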

  5. All executed queries are automatically saved in the History tab.

The History tab in SQL preview, showing a previously run query.

Compatibility

The SQL engine supports the Spark SQL dialect. In Spark SQL, identifiers such as table names should be quoted with backticks (`) rather than single or double quotes.

For example:

    SELECT column_name FROM `table_name`;

For more information on the Spark SQL dialect and its syntax, refer to the official Spark SQL documentation.

Query execution details and limitations

  1. Each query runs on the entire dataset and uses the same compute backend as Contour.
  2. Each query returns a sample of at most 1,000 rows.
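Because each query runs on the entire dataset, an aggregation such as the sketch below reflects all rows; the 1,000-row cap applies only to the returned result set. (The `flights` dataset and its columns are hypothetical.)

    -- Aggregates over the full dataset; only the returned rows are capped
    SELECT carrier, AVG(delay_minutes) AS avg_delay
    FROM `flights`
    GROUP BY carrier;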