New updates and product improvements
You can now export data as Iceberg tables with AWS Glue, AWS S3, and Iceberg REST catalog support.
You can now authenticate to Databricks with OAuth.
In-flight transfers now publish per-model completion status to both the Transfer endpoint and the transfer details page in the Admin UI.
Cleanup of temporary artifacts created in sources and destinations during transfers is now more resilient.
You can now write Delta tables to any major object storage destination.
You can now easily rotate keys on Snowflake and other key-pair-enabled connections.
Advanced users on Enterprise plans can now tune specific worker sizes to enable higher performance on extra-large workloads.
Transfer artifact cleanup performance in destinations has been improved.
Databricks destinations now support Automatic Liquid Clustering as the default clustering strategy when it is enabled on a customer’s workspace.
You can now specify applicable models as append-only for improved transfer efficiency. This makes destination loading more performant across every destination type by reducing the data scanned on merges.
You can now configure partitions on specific model types. This enables more efficient queries for both write jobs and downstream workloads where query patterns are predictable.
You can now attach arbitrary tags in the form of key-value pairs to transfers. For more information, see the API reference.
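For illustration, a minimal Python sketch of attaching tags when triggering a transfer. The base URL, endpoint path, auth header, and field names here are assumptions, not the documented API; consult the API reference for the exact request shape.

```python
import requests

# Illustrative only: the base URL, auth scheme, endpoint path, and field
# names below are assumptions -- see the API reference for exact shapes.
API_BASE = "https://api.prequel.co"
API_KEY = "your-api-key"

resp = requests.post(
    f"{API_BASE}/transfers",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "destination_id": "dest_123",  # hypothetical destination ID
        "tags": {                      # arbitrary key-value pairs
            "team": "data-platform",
            "environment": "production",
        },
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```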
You can now configure native Datadog webhooks for Prequel events from the Admin UI.
Enterprise customers now have the option to disable the Prequel scheduler for custom transfer orchestration workflows.
You can now send data to MotherDuck databases.
You can now send data to MongoDB databases.
Prequel now redacts sensitive traces from driver outputs, surfacing clean error messages and a blame attribute to guide resolution. For more information, see Error Codes.
You can now programmatically generate SSH keys scoped to a given recipient.
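A hedged sketch of what programmatic key generation might look like; the endpoint path and field names are assumptions, so check the API reference for the documented request and response shapes.

```python
import requests

# Illustrative only: endpoint path and field names are assumptions;
# see the API reference for the documented shapes.
API_BASE = "https://api.prequel.co"
API_KEY = "your-api-key"

resp = requests.post(
    f"{API_BASE}/ssh-keys",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"recipient_id": "recipient_456"},  # hypothetical recipient ID
    timeout=30,
)
resp.raise_for_status()
# The returned public key is scoped to this recipient and can be shared
# with them when configuring their connection.
print(resp.json().get("public_key"))
```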
You can now trigger a transfer in debug mode to capture more detailed logging for that single transfer when debugging an error.
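As a sketch, this might look like a single flag on the transfer request; the `debug` field name is an assumption, so check the API reference for how debug-mode transfers are actually requested.

```python
import requests

# Illustrative only: the "debug" flag name is an assumption.
API_BASE = "https://api.prequel.co"
API_KEY = "your-api-key"

resp = requests.post(
    f"{API_BASE}/transfers",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"destination_id": "dest_123", "debug": True},  # hypothetical fields
    timeout=30,
)
resp.raise_for_status()
```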
You can now validate data integrity between your source and destination. This feature probabilistically verifies data integrity by comparing source and destination rows.
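To illustrate the general idea (not necessarily Prequel's exact implementation): sample rows by primary key, fingerprint them on both sides, and compare. A clean sample gives statistical confidence that the two sides match, while any mismatch is proof of divergence.

```python
import hashlib
import random

def row_fingerprint(row: tuple) -> str:
    """Stable hash of a row's values for cheap equality comparison."""
    return hashlib.sha256(repr(row).encode()).hexdigest()

def probabilistic_integrity_check(source_rows, dest_rows, sample_size=1000, seed=42):
    """Compare a random sample of rows (keyed by primary key) between
    source and destination, returning the keys that do not match."""
    rng = random.Random(seed)
    keys = rng.sample(sorted(source_rows), min(sample_size, len(source_rows)))
    return [
        k for k in keys
        if k not in dest_rows
        or row_fingerprint(source_rows[k]) != row_fingerprint(dest_rows[k])
    ]

# Usage: dicts mapping primary key -> row tuple.
source = {i: (i, f"name-{i}") for i in range(10_000)}
dest = dict(source)
dest[1234] = (1234, "corrupted")
print(probabilistic_integrity_check(source, dest))  # surfaces key 1234 if sampled
```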
You can now use windowed transfers to "checkpoint" through larger transfers. This feature is particularly useful for high-volume destinations, where working through the transfer in "chunks" can unlock value earlier for the data recipient or offer an indication of progress at transfer time.
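A rough illustration of the windowing idea (Prequel's actual mechanics may differ), assuming rows can be bounded by an `updated_at` column:

```python
from datetime import datetime, timedelta

def windows(start: datetime, end: datetime, step: timedelta):
    """Yield consecutive (window_start, window_end) pairs covering [start, end)."""
    cursor = start
    while cursor < end:
        upper = min(cursor + step, end)
        yield cursor, upper
        cursor = upper

# Each window is transferred and committed independently: the recipient sees
# data land incrementally, and a failure rewinds only to the last completed
# window instead of restarting the whole transfer.
for lo, hi in windows(datetime(2024, 1, 1), datetime(2024, 1, 8), timedelta(days=1)):
    print(f"transfer rows where updated_at >= {lo} and updated_at < {hi}")
```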
Webhook events are now available for successful and cancelled transfers.
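A minimal receiver sketch for these events, assuming JSON payloads and event names like `transfer.successful` and `transfer.cancelled` (both assumptions; check the webhook documentation for the real schema, and verify request signatures if provided).

```python
from flask import Flask, request

app = Flask(__name__)

# Illustrative only: the payload fields and event names below are
# assumptions about the webhook schema.
@app.post("/prequel-webhook")
def handle_event():
    event = request.get_json(force=True)
    kind = event.get("event_type")
    if kind == "transfer.successful":    # assumed event name
        print("transfer succeeded:", event.get("transfer_id"))
    elif kind == "transfer.cancelled":   # assumed event name
        print("transfer cancelled:", event.get("transfer_id"))
    return "", 204

if __name__ == "__main__":
    app.run(port=8080)
```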
You can now receive exports of Prequel usage data directly to any database or warehouse supported by Prequel.
You can now export data to Azure Blob Storage.
You can now add and manage webhooks directly in the Prequel Admin UI.
You can now resolve transfer errors faster with contextual error codes.
You can now use Databricks’ Unity Catalog functionality when you send data to Databricks destinations.
You can now send data to Azure Blob Storage, Cloudflare R2, SQL Server, and Google Sheets.
You can now sync and refresh on a per-destination basis.
You can now split large transfers into smaller chunks.
You can now write optimized source queries to reduce costs.
You can now connect to a source or destination using role-based access control (RBAC).
You can now send data from multiple sources.
You can now assign multiple products to a single destination.
You can now use webhooks to send notifications to Slack, PagerDuty, and more.
You can now use Prequel with your schema-tenanted database.