This page provides you with instructions on how to extract data from Sage Intacct and load it into Delta Lake on Databricks. (If this manual process sounds onerous, check out Stitch, which can do all the heavy lifting for you in just a few clicks.)
What is Sage Intacct?
Sage Intacct provides accounting and financial management software with automation and controls around billing, accounting, and reporting. Components include accounts payable, accounts receivable, cash management, general ledger, order management, and purchasing.
What is Delta Lake?
Delta Lake is an open source storage layer that sits on top of existing data lake file storage, such as AWS S3, Azure Data Lake Storage, or HDFS. It uses versioned Apache Parquet files to store data, and a transaction log to keep track of commits, to provide capabilities like ACID transactions, data versioning, and audit history.
Getting data out of Sage Intacct
Sage Intacct provides an API that lets developers retrieve data stored in the platform. Intacct also has a Data Delivery Service (DDS) that enables companies to extract data from the platform and send it to a cloud storage location.
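To give a sense of what an extraction looks like, here's a minimal Python sketch that posts a readByQuery request to Intacct's XML gateway. The credentials, object name, and page size are placeholders; consult Sage Intacct's Web Services documentation for the exact request schema and permissions your account requires.

```python
import requests

# Sage Intacct's XML Web Services gateway endpoint
INTACCT_URL = "https://api.intacct.com/ia/xml/xmlgw.phtml"

# Placeholder credentials; replace with values from your Intacct account
REQUEST_XML = """<?xml version="1.0" encoding="UTF-8"?>
<request>
  <control>
    <senderid>YOUR_SENDER_ID</senderid>
    <password>YOUR_SENDER_PASSWORD</password>
    <controlid>extract-customers</controlid>
    <uniqueid>false</uniqueid>
    <dtdversion>3.0</dtdversion>
  </control>
  <operation>
    <authentication>
      <login>
        <userid>YOUR_USER_ID</userid>
        <companyid>YOUR_COMPANY_ID</companyid>
        <password>YOUR_USER_PASSWORD</password>
      </login>
    </authentication>
    <content>
      <function controlid="query-customers">
        <!-- readByQuery pages through records for a given object -->
        <readByQuery>
          <object>CUSTOMER</object>
          <fields>*</fields>
          <query></query>
          <pagesize>100</pagesize>
        </readByQuery>
      </function>
    </content>
  </operation>
</request>"""

response = requests.post(
    INTACCT_URL,
    data=REQUEST_XML,
    headers={"Content-Type": "application/xml"},
)
response.raise_for_status()
# The response body is XML containing the first page of CUSTOMER records
print(response.text)
```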
Loading data into Delta Lake on Databricks
To create a Delta table, you can use existing Apache Spark SQL code and change the format from parquet, csv, or json to delta. Once you have a Delta table, you can write data into it using Apache Spark's Structured Streaming API. The Delta Lake transaction log guarantees exactly-once processing, even when there are other streams or batch queries running concurrently against the table. By default, streams run in append mode, which adds new records to the table. Databricks provides quickstart documentation that explains the whole process.
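As a concrete sketch, the PySpark snippet below writes a batch of extracted records to a Delta table and then appends new arrivals with Structured Streaming. The paths, table name, and JSON source format are placeholders; in a Databricks notebook the spark session is already defined for you.

```python
from pyspark.sql import SparkSession

# On Databricks `spark` is predefined; creating a session here keeps the
# sketch runnable in a plain PySpark environment as well
spark = SparkSession.builder.appName("intacct-to-delta").getOrCreate()

# Batch write: the same Spark code you'd use for Parquet, with the
# format switched to "delta" (placeholder paths throughout)
customers = spark.read.json("/mnt/raw/intacct/customers/")
(customers.write
    .format("delta")
    .mode("overwrite")
    .save("/mnt/delta/intacct/customers"))

# Register the table so it can be queried with SQL
spark.sql("""
    CREATE TABLE IF NOT EXISTS intacct_customers
    USING DELTA
    LOCATION '/mnt/delta/intacct/customers'
""")

# Streaming append: files landing in the raw path are added to the table
# exactly once; the checkpoint lets the stream resume safely after restarts
(spark.readStream
    .schema(customers.schema)
    .json("/mnt/raw/intacct/customers/")
    .writeStream
    .format("delta")
    .outputMode("append")
    .option("checkpointLocation", "/mnt/delta/intacct/_checkpoints/customers")
    .start("/mnt/delta/intacct/customers"))
```

The checkpointLocation option is what lets the transaction log track the stream's progress, which is how the exactly-once guarantee survives restarts.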
Keeping Sage Intacct data up to date
You could write a script or program to get the data you want and move it into your data warehouse. But how will you load new or updated data? It's not a good idea to replicate all of your data each time you have updated records; that process would be painfully slow and resource-intensive.
The key is to build your script so that it can identify incremental updates to your data. Once you've accounted for new and changed records, you can set your script up as a cron job or continuous loop to keep pulling down new data as it appears.
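As a sketch of what that might look like, the snippet below persists a high-water mark between runs and builds an incremental filter for readByQuery. It assumes the objects you replicate expose Intacct's WHENMODIFIED audit field; adjust the field name and date format to match your data.

```python
import json
import os

BOOKMARK_FILE = "intacct_bookmark.json"  # high-water mark persisted between runs

def load_bookmark(default="01/01/2020 00:00:00"):
    """Return the last WHENMODIFIED value already replicated, or a default."""
    if os.path.exists(BOOKMARK_FILE):
        with open(BOOKMARK_FILE) as f:
            return json.load(f)["whenmodified"]
    return default

def save_bookmark(value):
    """Record the newest WHENMODIFIED value seen in this run."""
    with open(BOOKMARK_FILE, "w") as f:
        json.dump({"whenmodified": value}, f)

def incremental_filter(bookmark):
    """Filter clause for readByQuery's <query> element (field name assumed)."""
    return f"WHENMODIFIED > '{bookmark}'"

# Each run: request only rows matching incremental_filter(load_bookmark()),
# load them into Delta Lake, then call save_bookmark() with the max
# WHENMODIFIED seen in the batch. Schedule the script with cron so it keeps
# picking up new and changed records.
```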
Other data warehouse options
Delta Lake on Databricks is great, but sometimes you need to optimize for different things when you're choosing a data warehouse. Some folks choose to go with Amazon Redshift, Google BigQuery, PostgreSQL, or Snowflake, which are RDBMSes that use similar SQL syntax, or Panoply, which works with Redshift instances. Others choose a data lake, like Amazon S3. If you're interested in seeing the relevant steps for loading data into one of these platforms, check out To Redshift, To BigQuery, To Postgres, To Snowflake, To Panoply, and To S3.
Easier and faster alternatives
If all this sounds a bit overwhelming, don't be alarmed. If you have all the skills necessary to go through this process, chances are building and maintaining a script like this isn't a very high-leverage use of your time.
Thankfully, products like Stitch were built to move data from Sage Intacct to Delta Lake on Databricks automatically. With just a few clicks, Stitch starts extracting your Sage Intacct data, structuring it in a way that's optimized for analysis, and inserting that data into your Delta Lake on Databricks data warehouse.