We'll give you the bottom line up front: we believe that the current ETL paradigm is broken. Our core belief is that the capability to export data to a warehouse should be powered by the software vendors who generate and maintain that data in the first place.
In the current world, there is no good way for users of B2B software to easily get access to their data. They can spend months writing code that scrapes APIs, or they can purchase third-party ETL tools like Fivetran that will do the scraping on their behalf. Neither is a satisfactory user experience. As data continues to grow both more important and more ubiquitous, we think the way data is shared between organizations has to evolve.
We like to imagine a world where software vendors make it seamless for their customers to export data to their own data warehouse. In terms of user experience, this can be as simple as asking customers to connect their warehouse, then natively syncing all the data to that destination.
Vendors benefit because they get to own the full cycle of customer interaction and experience. They also get to capture the revenue derived from the data they generate. Customers benefit because they get to easily access their data without procuring a new tool or writing new code (i.e., without asking skilled data engineers to write boilerplate API scrapers instead of doing more value-add work).
In this new world, the data is prepared by the vendor for analysis. The vendor defines the data model, and is responsible for upholding the corresponding data contract. Unlike data scraped from an API, it lands in convenient, analysis-ready tables that can be plugged straight into a data viz tool or joined with other datasets.
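To make the idea of a vendor-owned data contract concrete, here is a minimal sketch of what one could look like: the vendor commits to a stable, analysis-ready schema and validates rows against it before export. All names here (the contract dict, `conforms`) are hypothetical illustrations, not Prequel's actual schema format.

```python
# Illustrative data contract: the vendor, not the customer, defines the shape
# of the exported table and is responsible for upholding it over time.
INVOICES_CONTRACT = {
    "table": "invoices",
    "columns": {
        "id": str,
        "amount_cents": int,
        "currency": str,
    },
}

def conforms(row: dict, contract: dict) -> bool:
    """Check that a row has exactly the contracted columns with the right types."""
    cols = contract["columns"]
    return set(row) == set(cols) and all(
        isinstance(row[c], t) for c, t in cols.items()
    )

print(conforms({"id": "inv_1", "amount_cents": 1200, "currency": "USD"}, INVOICES_CONTRACT))
print(conforms({"id": "inv_2", "amount": "12.00"}, INVOICES_CONTRACT))
```

Because the contract lives with the vendor, a breaking schema change becomes a deliberate, versioned decision rather than a surprise for every downstream customer.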
Another benefit of this state of the world is that vendors and customers are not at the mercy of third-party gatekeepers. They don't have to wait until an API-scraping ETL tool such as Fivetran decides to integrate with them. They can take matters into their own hands and proactively choose to offer data exports.
If this is so clearly more elegant, then the question becomes: why doesn't every vendor offer data exports? In other words, why aren't we there yet?
The answer is technical complexity. As it turns out, it's far from trivial for a software provider to offer native data warehouse integration and exports. A vendor has two options today. The first is to leverage warehouse-native data shares (e.g., Snowflake's Data Sharing), as illustrated by Salesforce's recent announcement. The downside of this approach is that it only makes data available to customers who use the same data warehouse in the same region.
The other option is to write software that actually exports the data to the customer's warehouse. Writing this software is harder than it looks: all of the major data warehouses speak slightly different SQL dialects and offer different ways to connect. Worse yet, they each have a different type system. Then there is the matter of data integrity and reliability. Since this data is used to drive decisions (including financial reporting, sometimes by public companies), its accuracy is critical. Building and maintaining such a system requires a dedicated team of skilled engineers working on it full-time.
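As a concrete illustration of the type-system problem: even a single logical column type maps to a different native type in each warehouse, so export code must translate the schema per target. The mapping table and `build_ddl` helper below are a simplified sketch, not Prequel's implementation; the native type names are the warehouses' documented types.

```python
# One logical schema, three different DDL dialects. Real export systems must
# handle many more types (decimals, nested structs, time zones, etc.).
LOGICAL_TO_NATIVE = {
    "snowflake": {"timestamp": "TIMESTAMP_TZ", "json": "VARIANT", "int": "NUMBER(38,0)"},
    "bigquery":  {"timestamp": "TIMESTAMP",    "json": "JSON",    "int": "INT64"},
    "redshift":  {"timestamp": "TIMESTAMPTZ",  "json": "SUPER",   "int": "BIGINT"},
}

def build_ddl(warehouse: str, table: str, columns: dict) -> str:
    """Render a CREATE TABLE statement using the target warehouse's native types."""
    native = LOGICAL_TO_NATIVE[warehouse]
    cols = ", ".join(f"{name} {native[ltype]}" for name, ltype in columns.items())
    return f"CREATE TABLE {table} ({cols})"

schema = {"id": "int", "payload": "json", "created_at": "timestamp"}
for wh in LOGICAL_TO_NATIVE:
    print(build_ddl(wh, "events", schema))
```

And this only covers table creation; loading, upserting, and verifying data each have their own per-warehouse quirks on top.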
Teams that have invested in this capability, such as Segment and Heap, have been handsomely rewarded (just look at the revenue generated by Heap Connect). But most others have decided that it's simply too difficult to prioritize.
This is why we started Prequel. We believe so strongly in a world where data sharing is seamless and ubiquitous that we decided to build the solution ourselves, so that vendors wouldn't have to: a product every vendor can use to share data with their customers.
Prequel gives software vendors the ability to provide native data warehouse integration, without having to write a single line of code or do a single prod push. It takes as little as 30 minutes to set up Prequel and start sending data in production (the current record is 27 minutes, we timed it). Once Prequel is set up, the vendor’s customer simply connects their data warehouse, and the data is replicated and kept in sync.
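"Kept in sync" usually means incremental replication rather than re-copying everything on each run. Below is a minimal sketch of one common approach, cursor-based incremental sync, where only rows changed since the last run are shipped. The function, table, and field names are hypothetical and do not describe Prequel's internals.

```python
# Cursor-based incremental sync: track the highest `updated_at` seen so far,
# and on each run export only rows modified after that cursor.
def sync_increment(rows, cursor):
    """Return (rows changed since `cursor`, new cursor position)."""
    changed = [r for r in rows if cursor is None or r["updated_at"] > cursor]
    new_cursor = max((r["updated_at"] for r in changed), default=cursor)
    return changed, new_cursor

source = [
    {"id": 1, "updated_at": "2023-01-01"},
    {"id": 2, "updated_at": "2023-02-01"},
]
batch, cursor = sync_increment(source, None)    # first run: full copy
source.append({"id": 3, "updated_at": "2023-03-01"})
delta, cursor = sync_increment(source, cursor)  # later runs: only the new row
print(len(batch), len(delta))  # 2 1
```

Production systems layer deletion handling, retries, and integrity checks on top of this basic loop, which is where much of the real engineering effort goes.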
We've tackled the gritty engineering challenges involved with syncing massive amounts of data so that software vendors wouldn't have to. We knew what it would take to get this right, and we wanted to make it available to everyone.
We've built a system that has security, data integrity, and scalability as its core primitives. Our data transfers are fast, reliable, and accurate. We currently power data transfers to companies as large as Fortune 500s and have processed billions of rows and terabytes of data. We know just how precious data is, and we don't take that lightly. We're SOC 2 Type II compliant and certified, we support all enterprise-grade security features (up to and including on-prem deployments), and we've undergone a thorough white-box pen test.
Today, we're excited to announce that we're launching Prequel. We've been extremely fortunate to hone our product and work with outstanding hypergrowth teams, like Modern Treasury, Postscript, and SafeBase. Now, we're thrilled to start showing it off to the world and giving new teams access to the platform.
We're incredibly excited to help build a future of B2B software where data flows freely and seamlessly between vendors and their customers. If you'd like to learn more about Prequel, or chat with us about the future of ETL and data sharing, please do get in touch.