Case studies

Trailblazing the next era of dependable data

Data Observability Platform

Monte Carlo

Results

130+

Queries on single page load

<100ms

Latency

About

Reliable data is more crucial than ever.

Monte Carlo — a major player in the data observability space since 2020 — plays a vital role in helping companies like JetBlue, Fox, and HubSpot run smoothly by identifying data issues, helping data teams resolve them as they occur, and proactively averting future data-related problems.

Challenge

  • To build a performant, reliable, and fresh analytics dashboard designed for end customers, featuring a native Monte Carlo user experience.

Stack

  • Snowflake, dbt, Python backend, React frontend, Victory charts

Additional Benefits

  • Expanded from one initial dashboard use case to enriching several screens across their website with data
  • Minimal setup and maintenance cost; no ETL or databases to manage

Our customers rely upon us to ensure their mission critical data is accurate and reliable whenever they need it. We knew we had all of the data they wanted — it was really just a matter of how we could deliver it with the performance and reliability required of a production application without taking on undue overhead.


Our initial question was: what’s the fastest way to deliver a rich data experience in our product to help our customers measure the reliability of their data and demonstrate our own value?

Lior Gavish, CTO

As Monte Carlo’s customer base matured, their buyers required a clear picture of the value they were getting out of their investment. They needed to know how much better their teams were getting at preventing, detecting, and resolving issues over time.

As a first step, Monte Carlo account teams showed customers important metrics calculated in Snowflake using their internal BI tool over screen shares. These dashboards showed incident rates and response cycles over time, aligning Monte Carlo with their customers on the value of their relationship as well as opportunities for improvement. Their customers were thrilled, and began asking for access to this data and more — all the time.

Other options

Before choosing Patch, the Monte Carlo team considered three options for the job: embedded analytics, an in-house build, and a semantic layer.

They first explored embedding their BI tool in the product. However, they quickly ran into the tool's limitations. Priced by seat, it was expensive given the rapidly growing number of users on the Monte Carlo platform. It also wouldn't be performant enough for production and, crucially, it wasn't flexible enough to meet their native UX needs.

Next, they considered an in-house build. After assessing their architectural options — piping the data out of Snowflake into a specialized OLAP engine, tuning the database, and developing a bespoke API — they estimated the initial build would take over a quarter, and the ongoing maintenance overhead was far too high. (For more detail about why building on the data warehouse is hard, take a look at Whelan Boyd's post here.)

The third and final alternative the Monte Carlo team ruled out was a semantic layer. The biggest drawback was the amount of time and effort needed to define metrics, pre-aggregate data, and configure caches. Anytime a new feature request came in, each piece would have to be modified to support the new filter, metric, or time grain. The next issue was that business logic would sprawl across the data warehouse, dbt, the semantic layer, and their application code. Any way to reduce that complexity and make inevitable issues easier for future engineers to debug was welcome.

Implementing Patch

When Monte Carlo first started exploring Patch, they noticed how quickly they could get their application engineers started. Since there’s no need to set up ETL pipelines, scale databases, or configure metrics or caches, the focus could remain on delivering the ideal user experience. 

In Snowflake, Monte Carlo joins data from a variety of sources and models it with dbt. In just a few minutes, they connected Snowflake to Patch and generated the GraphQL APIs that their product engineers would integrate into the Python backend of their application. Since Patch’s APIs support a wide variety of analytics aggregations, string searches, and point reads at ultra low latency, there was no low level data engineering or API development necessary to achieve their performance goals with rich interactivity. End to end, they were able to deliver the first version of the Data Reliability Dashboard to their customers in a matter of weeks.
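To make the integration pattern concrete, here is a minimal sketch of what querying a generated GraphQL API from a Python backend can look like. The query shape, field names (`incidents`, `TIME_TO_RESOLUTION_MINUTES`), and filter syntax below are hypothetical illustrations, not Patch's actual schema or Monte Carlo's code:

```python
import json

# Hypothetical helper: build a GraphQL aggregation request of the kind a
# generated analytics API might accept. All field and filter names here
# are invented for illustration.
def build_incident_rate_query(warehouse: str, days: int) -> dict:
    query = """
    query IncidentRates($warehouse: String!, $since: Int!) {
      incidents(filter: {warehouse: {eq: $warehouse}, ageDays: {lte: $since}}) {
        aggregate {
          count
          avgTimeToResolution: avg(field: TIME_TO_RESOLUTION_MINUTES)
        }
      }
    }
    """
    return {
        "query": query.strip(),
        "variables": {"warehouse": warehouse, "since": days},
    }

# The resulting payload can be POSTed as JSON to the GraphQL endpoint
# by whatever HTTP client the backend already uses.
payload = build_incident_rate_query("snowflake_prod", 30)
print(json.dumps(payload["variables"]))
```

Because the aggregation is expressed in the query itself, the application code owns the request end to end; there is no separate API layer to build or maintain for each new chart.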

Over the past year, we’ve added dozens of new charts, metrics, and tables to the Data Reliability Dashboard. We’ve also enriched our homepage and several other screens in the app with insights and figures our customers need in context.


In each case, adding the new metrics or filters is extremely simple. The application engineers can simply adjust the query. There’s no dependency on the data team and no pre-aggregations or configurations.

Nicolás Castagnet, Senior Software Engineer at Monte Carlo
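The change described in the quote above can be sketched as a string-level edit to the query in application code; nothing upstream (pipelines, pre-aggregations, caches) has to change. The filter fields below are hypothetical, not a real schema:

```python
# Hypothetical sketch: adding a filter to an existing GraphQL query is a
# one-line change in the application. Field names are invented.
BASE_FILTER = "{warehouse: {eq: $warehouse}}"
WITH_SEVERITY = "{warehouse: {eq: $warehouse}, severity: {eq: $severity}}"

def add_severity_filter(query: str) -> str:
    """Swap the base filter for one that also matches on severity."""
    return query.replace(BASE_FILTER, WITH_SEVERITY)

query = "query { incidents(filter: " + BASE_FILTER + ") { aggregate { count } } }"
print(add_severity_filter(query))
```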

Since then, the data reliability dashboard offering has taken center stage — quite literally — becoming a key selling point for their enterprise customer base and prospects. In fact, it’s now featured on their homepage.

So what comes next? 

Monte Carlo is always listening to its customers and has enriched several more screens in their app with data using Patch. Looking ahead, they're keen to leverage some of Patch's latest features:

  • Replacing lots of API boilerplate with data packages to reduce code complexity and further streamline schema evolution
  • Client-side caching to further improve performance and reliability
  • Joining with sources beyond Snowflake like Postgres

Monte Carlo and Patch share the belief that data will drive the best user experience. The Patch team is proud to work closely with Lior, Andy, Nico, Sam, and the rest of the Monte Carlo team. If you’d like to try Patch, let’s talk.