Snowflake: 10 Things Every Snowflake Admin Should be Doing to Optimize Credits

David A Spezia
5 min read · Aug 29, 2020

Snowflake is architecturally different from almost every traditional database system and cloud data warehouse. Snowflake has completely separated compute from storage, and both tiers of the platform are elastic in real time. The need for advanced resource planning, combing over workloads with a fine-toothed comb, and denying new workloads onto the system out of fear of disk and CPU limitations just goes away with Snowflake. Being a cloud-native data warehouse, Snowflake can instantly scale to meet planned, ad-hoc, or surprise growth. This means that instead of paying for a fixed, limited amount of storage and compute, both grow and shrink in line with your needs as your use of Snowflake grows.

By taking advantage of a core tenet of the cloud, elasticity, compute can be dynamically scaled to workloads throughout the day as concurrency or raw compute demand fluctuates. Storage will grow and shrink over time for databases, tables, and metadata. There are a few optimizations every Snowflake account administrator should be doing, along with more advanced methods to consider as your Snowflake footprint and sophistication grow. With compute and storage separated and completely elastic, these resources should be monitored for usage, surprise growth, and resource efficiency.

With the immense power of a completely elastic data platform comes great responsibility for the Snowflake account administrator. Snowflake is unlimited by default, and some minor account-level and resource-level restrictions can be put in place to defend against rogue users or suboptimal use of resources and credits. You can proactively control compute at the account, warehouse, and ultimately user level through resource monitors. Users, databases, tables, queries, and workloads can be monitored through the ACCOUNT_USAGE schema shared with all Snowflake accounts.

Alright, either the prose above worked or you skipped straight here (TL;DR); either way, you are at the start of your journey. What should we be doing as responsible Snowflake admins?

1 — Auto Resume
Make sure all warehouses are set to auto-resume. If you are going to implement auto-suspend and proper timeout limits, this is a must; otherwise users will not be able to query the system and your pager will go off.
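A minimal sketch: SHOW WAREHOUSES exposes an "auto_resume" column you can filter through RESULT_SCAN; the warehouse name in the ALTER is a placeholder.

```sql
-- List warehouses with auto-resume disabled; SHOW output columns
-- are lowercase and must be double-quoted in the RESULT_SCAN select.
SHOW WAREHOUSES;
SELECT "name", "auto_resume"
FROM TABLE(RESULT_SCAN(LAST_QUERY_ID()))
WHERE "auto_resume" = 'false';

-- Then fix any offenders (my_wh is a placeholder name).
ALTER WAREHOUSE my_wh SET AUTO_RESUME = TRUE;
```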

2 — Auto Suspend
Make sure all warehouses are set to auto-suspend. That way, when they are not processing queries, your compute footprint shrinks, and with it your credit burn.
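A quick sketch along the same lines; the 60-second value and warehouse name are illustrative.

```sql
-- Find warehouses that will never suspend on their own.
SHOW WAREHOUSES;
SELECT "name", "auto_suspend"
FROM TABLE(RESULT_SCAN(LAST_QUERY_ID()))
WHERE "auto_suspend" IS NULL OR "auto_suspend" = 0;

-- Suspend after 60 seconds of inactivity (my_wh is a placeholder).
ALTER WAREHOUSE my_wh SET AUTO_SUSPEND = 60;
```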

3 — Long Timeouts
All warehouses should have a timeout appropriate for their workload; a sketch of the corresponding ALTER statements follows this list.
— For task, loading, and ETL/ELT warehouses, set the timeout for immediate suspension.
— For BI and SELECT query warehouses, set the timeout to 10 minutes of suspension to keep data caches warm for end users.
— For DevOps, DataOps, and data science warehouses, set the timeout to 5 minutes of suspension, since a warm cache is not as important for ad-hoc and highly unique queries.
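For example, a minimal sketch (warehouse names are placeholders; AUTO_SUSPEND is in seconds):

```sql
-- Tasks/loading/ETL: suspend as soon as the platform allows.
ALTER WAREHOUSE etl_wh SET AUTO_SUSPEND = 60;
-- BI/SELECT: 10 minutes keeps data caches warm for end users.
ALTER WAREHOUSE bi_wh SET AUTO_SUSPEND = 600;
-- DevOps/DataOps/Data Science: 5 minutes is plenty for ad-hoc work.
ALTER WAREHOUSE ds_wh SET AUTO_SUSPEND = 300;
```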

4 — Account Statement Timeouts
Set the warehouse-, account-, session-, and user-level statement timeouts to match your data strategy's tolerance for long-running queries:
STATEMENT_QUEUED_TIMEOUT_IN_SECONDS
STATEMENT_TIMEOUT_IN_SECONDS
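For example (the values shown here are illustrative, not recommendations):

```sql
-- Account-wide defaults: cancel anything running past 2 hours,
-- and drop queries that sit queued for more than 10 minutes.
ALTER ACCOUNT SET STATEMENT_TIMEOUT_IN_SECONDS = 7200;
ALTER ACCOUNT SET STATEMENT_QUEUED_TIMEOUT_IN_SECONDS = 600;

-- Tighter limit for an interactive BI warehouse (bi_wh is a placeholder).
ALTER WAREHOUSE bi_wh SET STATEMENT_TIMEOUT_IN_SECONDS = 1800;
```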

5 — Warehouse Credit Usage Greater Than 7 Day Average
Now this is a handy one that came from a direct interaction with a customer of mine, who set a warehouse to a larger size to do a task but did not put it back the way he found it. We made this query for him to run every morning to check that nothing is out of whack versus the 7-day average for a warehouse.
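A sketch of that morning check, comparing yesterday's credits per warehouse to its trailing 7-day daily average from ACCOUNT_USAGE (the window and threshold are easy to tune):

```sql
-- Warehouses whose credit burn yesterday exceeded their 7-day daily average.
WITH daily AS (
    SELECT warehouse_name,
           TO_DATE(start_time) AS usage_date,
           SUM(credits_used)   AS daily_credits
    FROM snowflake.account_usage.warehouse_metering_history
    WHERE start_time >= DATEADD(day, -8, CURRENT_DATE())
    GROUP BY 1, 2
)
SELECT d.warehouse_name,
       d.daily_credits      AS yesterday_credits,
       AVG(h.daily_credits) AS avg_7_day_credits
FROM daily d
JOIN daily h
  ON h.warehouse_name = d.warehouse_name
 AND h.usage_date < d.usage_date
WHERE d.usage_date = DATEADD(day, -1, CURRENT_DATE())
GROUP BY 1, 2
HAVING d.daily_credits > AVG(h.daily_credits);
```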

6 — Warehouses Approaching Cloud Service Billing Threshold
I am in love with CTEs; learn to commonly express tables and it will change your life and improve your SQL. This one I am particularly fond of, since writing it was both fun and useful. This query looks at warehouses where cloud services spend is a high percentage of the workload. Overall for an account, Snowflake charges credits for blended daily cloud services usage above a 10% allowance. These cloud services tasks cover metadata operations such as BI tool discovery queries, heartbeat queries, SHOW commands, cache usage, and several other service-optimizing features. So if you use 100 compute credits in a day but also use 15 credits in cloud services (unlikely), you will be charged for the 5 credits over the 10% allowance. That means 105 credits billed in total for the day; just look at those 10 free credits you were able to use!
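Here is a CTE-based sketch over the last 30 days; warehouses above the 10% line are worth a look (WAREHOUSE_METERING_HISTORY reports compute and cloud services credits separately):

```sql
-- Warehouses where cloud services credits run high relative to compute.
WITH wh AS (
    SELECT warehouse_name,
           SUM(credits_used_cloud_services) AS cloud_services_credits,
           SUM(credits_used_compute)        AS compute_credits
    FROM snowflake.account_usage.warehouse_metering_history
    WHERE start_time >= DATEADD(day, -30, CURRENT_TIMESTAMP())
    GROUP BY 1
)
SELECT warehouse_name,
       cloud_services_credits,
       compute_credits,
       cloud_services_credits / NULLIF(compute_credits, 0) AS cloud_services_pct
FROM wh
WHERE cloud_services_credits / NULLIF(compute_credits, 0) > 0.10
ORDER BY cloud_services_pct DESC;
```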

7 — Unused Tables
Gasp, you might have unused tables that are candidates to be dropped. Just make sure no one is querying them. I am going to include three ways to look at this, and you should treat checking all three as a mandatory step before deletion. Good thing you have Time Travel set up and can UNDROP just in case you make an error. This check is database-context specific, and rather than write it out for every database, I will give you the opportunity to write a UDF that creates a view with a UNION across all of your databases. The ACCOUNT_USAGE tables view alone is not sufficient, because LAST_ALTERED there reflects DDL changes, not DML. Also, be mindful of tables used only in view DDL.
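As one of those ways, here is a sketch using the ACCOUNT_USAGE ACCESS_HISTORY view (assuming your account is on Enterprise Edition or above, where that view is available) to surface tables with no recorded read access in the last 90 days; the window is illustrative:

```sql
-- Tables with no read access in the last 90 days per ACCESS_HISTORY.
WITH access AS (
    SELECT f.value:"objectName"::STRING AS fq_table,
           MAX(ah.query_start_time)     AS last_accessed
    FROM snowflake.account_usage.access_history ah,
         LATERAL FLATTEN(input => ah.base_objects_accessed) f
    WHERE f.value:"objectDomain"::STRING = 'Table'
    GROUP BY 1
)
SELECT t.table_catalog || '.' || t.table_schema || '.' || t.table_name AS fq_table,
       a.last_accessed
FROM snowflake.account_usage.tables t
LEFT JOIN access a
  ON a.fq_table = t.table_catalog || '.' || t.table_schema || '.' || t.table_name
WHERE t.deleted IS NULL
  AND (a.last_accessed IS NULL
       OR a.last_accessed < DATEADD(day, -90, CURRENT_TIMESTAMP()))
ORDER BY a.last_accessed NULLS FIRST;
```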

8 — Dormant Users
It’s just a good idea to purge dormant users, or users who never logged into Snowflake, from the system.
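A sketch against the ACCOUNT_USAGE.USERS view; the 30-day threshold is illustrative:

```sql
-- Active user records that never logged in, or have not logged in
-- for 30+ days; review before dropping anything.
SELECT name, created_on, last_success_login
FROM snowflake.account_usage.users
WHERE deleted_on IS NULL
  AND (last_success_login IS NULL
       OR last_success_login < DATEADD(day, -30, CURRENT_TIMESTAMP()))
ORDER BY last_success_login NULLS FIRST;
```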

9 — Find Warehouses Without Resource Monitors
Resource Monitors are a great way to proactively control workload budgets and unexpected resource spikes. They help with both user and service-account usage in Snowflake. First, you should have separated loading, ELT, BI, reporting, data science, and other workloads by warehouse in Snowflake. Accounts and warehouses can have total, yearly, monthly, weekly, and daily credit quotas.

The following query will identify all warehouses without a resource monitor:
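A sketch using SHOW plus RESULT_SCAN; an unset monitor shows up as the string 'null' in the SHOW output:

```sql
SHOW WAREHOUSES;
SELECT "name" AS warehouse_name, "size" AS warehouse_size
FROM TABLE(RESULT_SCAN(LAST_QUERY_ID()))
WHERE "resource_monitor" = 'null';
```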

10 — Apply Resource Monitors
You can use the UI or SQL to apply your resource monitor policy. When monitors are in use, they can first notify and then suspend the warehouse or account. It is a good idea to have a monitor at least notify, if not suspend, on every warehouse. When thresholds are hit, all account admins in Snowflake will get an email or on-screen notification of the threshold breach, based on each admin's account notification preferences.
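A minimal sketch: a monthly 100-credit monitor that notifies at 80% and suspends at 100% (the monitor name, quota, and warehouse name are placeholders):

```sql
-- Notify account admins at 80% of quota, suspend the warehouse at 100%.
CREATE OR REPLACE RESOURCE MONITOR wh_monthly_monitor
  WITH CREDIT_QUOTA = 100
       FREQUENCY = MONTHLY
       START_TIMESTAMP = IMMEDIATELY
  TRIGGERS ON 80 PERCENT DO NOTIFY
           ON 100 PERCENT DO SUSPEND;

-- Attach the monitor to a warehouse (bi_wh is a placeholder).
ALTER WAREHOUSE bi_wh SET RESOURCE_MONITOR = wh_monthly_monitor;
```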

Conclusion
Snowflake’s architecture is different and powerful, but with that power comes the temptation to use too much compute, or the wrong compute, for the task at hand. Apply these basic monitoring and resource optimizations to avoid the common pitfall of runaway credit burn. Happy (efficient) querying!

Originally published at http://bigdatadave.com on August 29, 2020.


David A Spezia

David spends his time working with Web-Scale Data Applications built on Snowflake with the largest tenants on the Snowflake Data Cloud.