REMINDER: Sign up for the Foundry Newsletter to receive a summary of new products, features, and improvements across the platform directly in your inbox. For more information on how to subscribe, see the Foundry Newsletter and Product Feedback channels announcement.
Share your thoughts about these announcements in our Developer Community Forum ↗.
Date published: 2026-02-05
Claude Sonnet 4.5 and Claude Haiku 4.5 models are now available from AWS Bedrock for Japan-georestricted enrollments.
Claude Sonnet 4.5 ↗ is Anthropic's latest medium-weight model, offering strong performance in coding, math, reasoning, and tool calling at a reasonable cost and speed.
Claude Haiku 4.5 ↗ is Anthropic's most powerful small model, ideal for real-time, lightweight tasks where speed, cost, and performance are critical.
To use these models:
We want to hear about your experiences using language models in the Palantir platform and welcome your feedback. Share your thoughts in Palantir Support channels or on our Developer Community ↗ using the language-model-service ↗ tag.
Date published: 2026-02-05
Compute modules are now generally available in Foundry. With compute modules, you can run containers that scale dynamically based on load, bringing your existing code, in any language, into Foundry without rewriting it.
Compute modules enable several key workflows in Foundry:
Custom functions and APIs: Create functions that can be called from Workshop, Slate, Ontology SDK applications, and other Foundry environments. Host custom or open-source models from platforms like Hugging Face and query them directly from your applications.
Data pipelines: Connect to external data sources and ingest data into Foundry streams, datasets, or media sets in real time. Use your own transformation logic to process data before writing it to outputs.
Legacy code integration: Bring business-critical code written in any language into Foundry without translation. Use this code to back pipelines, Workshop modules, AIP Logic functions, or custom Ontology SDK applications.

An example of a compute module overview in Foundry, with information about the job status, functions, and container metadata.
Compute modules solve the challenge of integrating existing code into Foundry. Instead of rewriting your logic in a Foundry-supported language, containerize it and deploy it directly. The platform handles scaling, authentication, and connections to other Foundry resources automatically.
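As a concrete illustration of the custom functions workflow, the sketch below shows what a minimal compute module function can look like in Python. It assumes Palantir's open-source Python helper library for compute modules (published as foundry-compute-modules); treat the import path and payload shape as assumptions and confirm them against the compute modules documentation.

```python
# Minimal compute module function, assuming Palantir's Python helper library
# for compute modules (pip install foundry-compute-modules). The import path,
# decorator, and payload shape below are assumptions drawn from the library's
# public examples; confirm against the compute modules documentation.
from compute_modules.annotations import function


@function
def to_fahrenheit(context, event) -> float:
    """Convert a Celsius reading to Fahrenheit.

    `event` carries the caller-supplied payload (the `celsius` field is
    illustrative); `context` carries job and auth metadata from Foundry.
    """
    return float(event["celsius"]) * 9 / 5 + 32
```

Packaged into a container image alongside its dependencies, a function like this can be called from Workshop, Slate, or an Ontology SDK application, with Foundry scaling replicas up and down as load changes.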
Key features include:
Review the compute modules documentation to build your first function or pipeline.
Date published: 2026-02-03
AIP Document Intelligence will be generally available on February 4, 2026, and will be enabled by default for all AIP enrollments. AIP Document Intelligence is a low-code application for configuring and deploying document extraction workflows. Users can upload sample documents, experiment with different extraction strategies, and evaluate results based on quality, speed, and cost, all before deploying at scale. AIP Document Intelligence then generates Python transforms that can process entire document collections using the selected strategy, converting PDFs and images into structured Markdown with preserved tables and formatting.
Learn more about AIP Document Intelligence.

Result of Layout-aware OCR + Vision LLM extraction with metrics on cost, speed, and token usage.
AIP Document Intelligence provides multiple extraction approaches, from traditional OCR to vision-language models. You can test each method on your specific documents and view side-by-side comparisons of extraction quality, processing time, and compute costs. This experimentation phase helps teams select the right approach for their use case without writing custom code.

Comparison of Vision LLM Extraction vs. Layout-aware OCR + Vision LLM Extraction, showing a drastic improvement in complex table extraction quality.
Once a strategy is configured, AIP Document Intelligence generates production-ready Python transforms that process documents at scale. The latest deployment uses lightweight transforms rather than Spark, significantly improving processing speed: extraction workflows that previously took days over large document collections can now complete in hours. Refer to the documentation for more detailed instructions on deploying and customizing your Python transforms.
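For orientation, the generated code might look roughly like the sketch below. It assumes Foundry's Python transforms API with the lightweight decorator; the dataset paths and the extract_markdown helper are hypothetical placeholders rather than the code AIP Document Intelligence actually emits, so check the documentation for what a real deployment produces.

```python
# Sketch of a generated lightweight extraction transform. Assumes Foundry's
# Python transforms API with the @lightweight decorator; dataset paths and
# the extract_markdown helper are hypothetical stand-ins.
import pandas as pd
from transforms.api import transform, Input, Output, lightweight


def extract_markdown(pdf_bytes: bytes) -> str:
    """Hypothetical stand-in for the configured extraction strategy
    (for example, layout-aware OCR followed by a vision LLM)."""
    raise NotImplementedError


@lightweight  # container-backed, non-Spark execution for faster per-document work
@transform(
    documents=Input("/Project/raw/contract_pdfs"),          # hypothetical input path
    extracted=Output("/Project/clean/contract_markdown"),   # hypothetical output path
)
def extract_documents(documents, extracted):
    rows = []
    for status in documents.filesystem().ls():  # one entry per source document
        with documents.filesystem().open(status.path, "rb") as f:
            rows.append({"path": status.path, "markdown": extract_markdown(f.read())})
    extracted.write_pandas(pd.DataFrame(rows))
```

Because the output is an ordinary Python transform, you can add pre- or post-processing around the extraction call before scheduling it across a full document collection.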

Choose a validated extraction strategy and deploy it as a Python transform to batch-process documents.
Enterprise documents vary widely in structure, formatting, and content density. AIP Document Intelligence handles this diversity through configurable extraction strategies that can adapt to multi-column layouts, embedded tables, and mixed-language content. Users working with maintenance manuals, regulatory filings, and invoices have successfully extracted structured data while preserving critical formatting and relationships.
AIP Document Intelligence is designed for workflows where document content needs to be extracted and structured for downstream AI applications. This includes:
For workflows that require extracting specific entities (like part numbers, dates, or named entities) rather than full document content, upcoming entity extraction capabilities will provide more targeted functionality.
We want to hear about your experiences using AIP Document Intelligence and welcome your feedback. Share your thoughts in Palantir Support channels or on our Developer Community ↗ using the aip-document-intelligence tag ↗.
Date published: 2026-02-03
GPT-5.2 Codex is now available directly from OpenAI for non-georestricted enrollments.
GPT-5.2 Codex ↗ is a coding-optimized version of OpenAI's GPT-5.2 model, with improved agentic coding capabilities, better context compaction, and stronger performance on large code changes such as refactors and migrations.
To use this model:
We want to hear about your experiences using language models in the Palantir platform and welcome your feedback. Share your thoughts in Palantir Support channels or on our Developer Community ↗ using the language-model-service ↗ tag.
Date published: 2026-02-03
Workflow Lineage just became a lot more robust: you can now visualize resources across multiple ontologies in one unified graph. Instantly identify cross-ontology relationships, spot external resources at a glance, and switch between ontologies without leaving your workflow view.

The Workflow Lineage graph now displays resources across multiple ontologies, with visual indicators highlighting nodes from outside the selected ontology.

Easily switch between ontologies using the ontology (blue cube) icon to view and navigate all ontologies present in your graph.
For action-type nodes from outside the selected ontology, functionality is limited. For example, bulk updates are only possible for function-backed actions within your currently selected ontology.
We welcome your feedback about Workflow Lineage in our Palantir Support channels, and on our Developer Community ↗ using the workflow-lineage tag ↗.