Yes; Control Panel provides Foundry administrators with the ability to view all images pushed to the platform, as well as to view and recall any vulnerabilities present in those images.
Yes, container models can be used within live and batch deployments.
No; there is no standard base image provided or required. However, all pushed images must adhere to the image requirements.
No; how Foundry interacts with the image is defined by the model adapter implementation. A common pattern, reflected in this example custom adapter, is to build the image to listen on a specific port for input and have the model adapter send POST requests to that port.
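As a minimal sketch of that pattern: the adapter side boils down to a POST request against the container's port. The endpoint path (`/predict`), default port, and payload shape below are assumptions for illustration, not a Foundry API; a real model adapter would implement the platform's adapter interface rather than a bare function.

```python
import json
import urllib.request

def predict(rows, port=8080):
    """Hypothetical adapter helper: send inference input to a container
    listening on `port` and return its decoded JSON response.
    The `/predict` path and `{"rows": ...}` payload are illustrative."""
    req = urllib.request.Request(
        f"http://localhost:{port}/predict",
        data=json.dumps({"rows": rows}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

The container image, in turn, only needs to run an HTTP server on the agreed port and answer POST requests on that path.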
Typically, images larger than 22 GB will time out during the Docker push step. If your use case requires a larger image, contact your Palantir representative.
No; all images pushed to the platform must be built for Linux, as the nodes in the Foundry Kubernetes cluster are Linux machines.
Yes; multiple images can be configured to back a model version, but there is no orchestration support: all containers are launched simultaneously at execution time, and container start order cannot be guaranteed.
No; all user-provided container workflows require the Rubix ↗ engine as the backing infrastructure. Container workflows must also be enabled in Control Panel.
To enable telemetry on your model, create a new model version and toggle on Enable telemetry in the third step of model version creation. The image must have a shell executable at `/bin/sh`, and the image must support the shell commands `set` and `tee`.
No; telemetry for containerized models works only in Python transforms and live deployments. Container logs will not be emitted for batch deployments.
You can test this by running `docker run --entrypoint /bin/sh <EXAMPLE_IMAGE>:<IMAGE_TAG_OR_DIGEST> -c 'set -a && tee -a' && echo "Telemetry compatible"`. An output of `Telemetry compatible` indicates that telemetry can be enabled for this container.
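To see what that check exercises without involving Docker, the same two prerequisites can be probed against any local `/bin/sh`. This is only an illustration of the requirements; the temp-file path is arbitrary:

```shell
# Probe the telemetry prerequisites against a local /bin/sh:
# `set -a` must succeed, and `tee -a` must accept appended input.
if /bin/sh -c 'set -a && echo probe | tee -a /tmp/telemetry_probe.log' >/dev/null; then
  echo "Telemetry compatible"
fi
rm -f /tmp/telemetry_probe.log
```

An image whose shell fails either command will fail the `docker run` check above in the same way.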