External pipelines are currently in Beta. Functionality may change during ongoing development.
If you're new to Pipeline Builder, review how to create a batch pipeline in Pipeline Builder before proceeding.
Pipeline Builder now offers external pipelines, which push down compute to external compute engines. This works similarly to compute pushdown in Python transforms and allows Foundry's pipeline management, data lineage, and security functionality to be layered on top of external data warehouse compute. As with compute pushdown in Python transforms, all inputs and outputs of external pipelines must be virtual tables.
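For reference, a compute pushdown Python transform follows the same shape as a standard Python transform, with virtual tables on both sides. The sketch below is a minimal illustration using the standard `transforms.api` decorators; the dataset paths are placeholders, and the source-specific pushdown configuration is not shown.

```python
from transforms.api import transform_df, Input, Output


@transform_df(
    # Placeholder paths: in a pushdown transform, both the input and the
    # output must be backed by virtual tables in the external warehouse.
    Output("/Project/warehouse/orders_summary_virtual_table"),
    orders=Input("/Project/warehouse/orders_virtual_table"),
)
def compute(orders):
    # Transformation logic is written as usual; with pushdown configured
    # (not shown here), execution runs on the external compute engine
    # instead of Foundry-native Spark.
    return orders.filter(orders.status == "shipped")
```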
Tables built with external compute can be composed with datasets and tables built with Foundry-native compute using Foundry's scheduling tools, allowing you to orchestrate complex, multi-technology pipelines with the right compute engine at every step.

Currently, Databricks is the only supported external compute engine in Pipeline Builder. To use other external compute engines, such as Snowflake or BigQuery, use transforms with compute pushdown.
| Source type | Status | Notes |
|---|---|---|
| BigQuery | Not available | |
| Databricks | Beta | Serverless (default) or classic compute available. |
| Snowflake | Not available | |
All input and output tables must be configured from the same source selected during pipeline setup.

You can edit your pipeline source and configure source-specific compute options in the build settings panel.
External pipelines do not yet support the full set of features and expressions available in standard batch pipelines.
Currently unsupported features and expressions include: