Modeling

How can I get pytesseract to leverage the language library for other languages in a Foundry Transform?

Run pytesseract in a containerized transform and add the language packs in the Dockerfile, so that pytesseract can find and use them in the Python script.
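
A minimal sketch of the Dockerfile step, assuming a Debian-based base image; the apt package names (`tesseract-ocr-deu`, `tesseract-ocr-fra`) are the Debian/Ubuntu conventions and should be checked against your base image:

```dockerfile
# Install the Tesseract engine plus the extra language packs
# (German and French here, as illustrative examples).
RUN apt-get update && apt-get install -y \
    tesseract-ocr \
    tesseract-ocr-deu \
    tesseract-ocr-fra \
 && rm -rf /var/lib/apt/lists/*
```

In the transform's Python script, the installed languages can then be selected with pytesseract's `lang` parameter, e.g. `pytesseract.image_to_string(image, lang="deu+fra")`.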

Timestamp: February 13, 2024

Does Foundry ML Live support autoscaling for containers, and how are these containers managed?

Foundry ML Live does not currently support autoscaling for containers. However, autoscaling support is coming soon with scaling based on request throughput/queueing. The containers are managed in Kubernetes, not just as Docker containers in a Docker daemon, allowing for high availability with a fixed number of replicas/pods.

Timestamp: February 14, 2024

What is the solution to the Tesseract library not found error when calling a deployed model?

The solution is to follow these steps:

  1. Remove the Tesseract libraries completely.
  2. Run a build to cause CI to fail.
  3. Add the libraries back, specifying the version rather than just adding and installing them.
  4. Run a build again.
  5. After CI successfully passes, submit the model.
  6. Release a new version; this will trigger a new deployment that should work.
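
Step 3 might look like the following in a code repository's conda recipe; the file path and the pinned versions are illustrative assumptions, not prescribed values:

```yaml
# conda_recipe/meta.yaml -- pin explicit versions instead of unpinned entries
requirements:
  run:
    - tesseract 4.1.*     # hypothetical pin; use the version CI resolves successfully
    - pytesseract 0.3.*
```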

Timestamp: February 14, 2024

What should a customer team do if they need to upgrade from Python 3.6/7 but foundry_ml only supports up to Python 3.8 and they are concerned about the upcoming Python 3.8 end-of-life?

The customer team should stay on Python 3.8 for now, since foundry_ml plans to add Python 3.9 support in the near future.

Timestamp: February 13, 2024

Is it possible to train language models like BERT from scratch on Foundry, and what are the GPU capabilities and limitations?

It is possible to train language models such as BERT from scratch on Foundry, as Foundry enrollments have basic GPUs and deployment infrastructure. Feasibility depends on project specifics such as data size, latency, and correctness requirements.

Timestamp: February 13, 2024

Are there any additional costs per query for customers using modeling objective live deployments, apart from the cost to keep the deployment running?

No, there is no cost per query for customers; the only cost is the compute cost while the deployment is running.

Timestamp: February 15, 2024

What are Foundry's AutoMLOps capabilities?

Foundry does not support AutoMLOps out of the box, since AutoMLOps requires a different architecture and many of its steps cannot be automated through code. However, many parts of an AutoMLOps workflow can be performed on Foundry, and the automated parts can be reproduced with purpose-written code.

Timestamp: February 13, 2024

What could cause the connection refused error when deploying a model, and how can it be resolved?

The connection refused error is usually caused by the model adapter script serving on the wrong port. It can be resolved by re-uploading the model with the correct port configured.
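
As an illustration of the failure mode, here is a stdlib-only sketch of a model server that binds the port the platform expects; this is not Foundry's adapter API, and the `MODEL_SERVER_PORT` environment variable is an assumed convention for this example:

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

# Assumed convention: the platform tells the container which port to serve on.
# Binding a different, hard-coded port is what produces "connection refused"
# when the platform probes the expected one.
PORT = int(os.environ.get("MODEL_SERVER_PORT", "8080"))

class InferenceHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = self.rfile.read(length)   # request body with model inputs
        self.send_response(200)
        self.end_headers()
        self.wfile.write(payload)           # echo back, standing in for real inference

def serve():
    # Bind the port the platform expects, not an arbitrary one.
    HTTPServer(("0.0.0.0", PORT), InferenceHandler).serve_forever()
```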

Timestamp: February 13, 2024

How can I set the Model API for a Model Version when the adapter is specified by a Python library, and the api() method implementation does not populate the Model Version's Model API?

Currently, there is no way to set the Model API via the frontend, similar to the Objective API in the Objectives UI. The solution is either to retrain the model and publish a new model version, or to copy the model adapter into a code repository to define the Model API.

Timestamp: February 13, 2024

What causes the model training builds to fail with a RemoteException: INTERNAL (Build2), and how can it be resolved?

This error is typically an input resolution failure caused by a path and resource identifier (RID) issue within the repository, specifically when a model output path is reused in a different file. The issue can be resolved by ensuring paths are not reused across files and, if needed, by re-creating all input and output RIDs.

Timestamp: February 13, 2024

What could cause the 'ValueError: Circular reference detected' error when trying to publish a model, and how can it be resolved?

The problem can be caused by a NumPy type in the output dictionary, which the Modeling Objective does not accept. Converting the NumPy value to a native Python float resolves the issue.
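
One way to apply the fix generically is to convert NumPy values to native Python types before returning the output dictionary. A minimal sketch (the helper name `to_native` is ours; it duck-types on the `.item()`/`.tolist()` methods NumPy scalars and arrays expose, so it needs no numpy import):

```python
def to_native(value):
    """Recursively convert NumPy scalars/arrays to plain Python types."""
    if hasattr(value, "tolist"):                # numpy array -> nested lists
        return to_native(value.tolist())
    if hasattr(value, "item") and not isinstance(value, (str, bytes)):
        return value.item()                     # numpy scalar -> float/int
    if isinstance(value, dict):
        return {k: to_native(v) for k, v in value.items()}
    if isinstance(value, (list, tuple)):
        return [to_native(v) for v in value]
    return value                                # already a native type
```

For example, `to_native({"score": np.float32(0.9)})` yields a dictionary containing a plain Python `float`, which serializes cleanly.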

Timestamp: February 13, 2024

Is there an equivalent of the SegmentedModel from the old foundry_ml in the new Model Assets?

No, there is no canonical built-in class for model segmentation in the new Model Assets. The new approach encourages users to write the adapter they need.
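
The segmentation pattern itself is straightforward to reproduce inside an adapter. A plain-Python sketch (the class name `SegmentedDispatcher` is ours, not a Foundry API; any object with a `predict(row)` method works as a segment model):

```python
class SegmentedDispatcher:
    """Route each input row to a per-segment model.

    `models` maps a segment key to a model object with a predict(row) method;
    `segment_key` extracts that key from an input row; `default` is an
    optional fallback model for unseen segments.
    """

    def __init__(self, models, segment_key, default=None):
        self.models = models
        self.segment_key = segment_key
        self.default = default

    def predict(self, rows):
        predictions = []
        for row in rows:
            key = self.segment_key(row)
            model = self.models.get(key, self.default)
            if model is None:
                raise KeyError(f"No model registered for segment {key!r}")
            predictions.append(model.predict(row))
        return predictions
```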

Timestamp: February 13, 2024

What causes the metrics build failure with exit code 137 in the Modeling Objective evaluation, and how can it be resolved?

The metrics build failure with exit code 137 is likely caused by an Out Of Memory (OOM) issue. The resolution is to increase the Spark profile resources, specifically by using the MEMORY_LARGE Spark profiles.

Timestamp: February 14, 2024

Does the 'live' feature support bidirectional streaming for real-time transcription use cases?

No, the current API only supports LLM-like token-stream output, as it is implemented with server-sent events (SSE), which only supports text-based data.
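
SSE frames are plain text: each event is one or more `data:` lines terminated by a blank line, which is why binary audio cannot be streamed back over this channel. A small illustration of the wire format (the helper `sse_event` is ours):

```python
def sse_event(data, event=None):
    """Format one server-sent event frame (text only, per the SSE wire format)."""
    lines = []
    if event:
        lines.append(f"event: {event}")
    # Multi-line payloads become multiple data: lines within the same frame.
    for part in data.splitlines() or [""]:
        lines.append(f"data: {part}")
    return "\n".join(lines) + "\n\n"    # blank line terminates the event
```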

Timestamp: February 14, 2024

Why do fields appear randomly in the output during a production release upgrade?

It is expected to see different outputs during an upgrade because the load balancer round-robins over all available replicas, which may include replicas of both the old and the new release. Once the upgrade is complete, only the new release should be answering inference requests.
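
The mixed responses can be reproduced with a toy round-robin over a replica pool drawn from both releases (the replica labels are illustrative):

```python
from itertools import cycle, islice

# During a rolling upgrade the pool briefly contains replicas of both
# releases; a round-robin load balancer alternates between them.
replicas = ["old-release", "old-release", "new-release"]
balancer = cycle(replicas)

# Six consecutive inference requests hit a mix of old and new replicas,
# so response fields differ "randomly" until the old replicas drain.
responses = list(islice(balancer, 6))
```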

Timestamp: February 13, 2024

Do users need view permissions on a Modeling Objective to use one of its deployments in an Ontology Function?

Yes, view permissions on the Objective are required to run inference.

Timestamp: March 12, 2024

What could be the possible reasons for inference datasets to not build upon submitting a new model to the modeling objective?

This can happen if the evaluation dataset, input dataset, or model is not imported into the same project as the modeling objective. Ensure that all of the inputs to the inference dataset are added as project references in the project where the modeling objective is located.

Timestamp: April 9, 2024

Are direct model deployments supported by Deployment Suite?

Direct deployments are not currently supported by Deployment Suite.

Timestamp: September 17, 2024