Snowflake Account Usage
by Tableau
Description
The Accelerator for Snowflake Cost Monitoring enables Snowflake administrators to analyse overall compute costs, query performance, user adoption and errors across their Snowflake data stores. The Compute Cost Overview dashboard helps you understand credit burn, plan budget allocation and identify peak outliers so you can reduce unnecessary spend. Use the User Adoption dashboard to track adoption of your new cloud platform and see where time is being allocated. The Performance Monitoring and Error Tracking dashboards allow you to drill down into specific queries that may be running long or timing out.
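To make the underlying data concrete, the Compute Cost Overview reads from the WAREHOUSE_METERING_HISTORY view. A minimal sketch of the kind of query involved, assuming a 30-day window (the dashboard's own logic may differ):

-- Daily credit burn per warehouse over the last 30 days (illustrative only)
SELECT
    WAREHOUSE_NAME,
    DATE_TRUNC('day', START_TIME) AS usage_day,
    SUM(CREDITS_USED) AS total_credits,
    SUM(CREDITS_USED_CLOUD_SERVICES) AS cloud_services_credits
FROM SNOWFLAKE.ACCOUNT_USAGE.WAREHOUSE_METERING_HISTORY
WHERE START_TIME >= DATEADD('day', -30, CURRENT_TIMESTAMP())
GROUP BY WAREHOUSE_NAME, usage_day
ORDER BY usage_day, total_credits DESC;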
The Slow Running Queries dashboard is available in two versions. The basic version uses persistent information from the ACCOUNT_USAGE schema. The advanced version uses custom SQL to generate Query Profiler analysis of common performance issues, such as exploding joins and inefficient pruning. Analysis in the advanced version is limited to queries run in the last 14 days (the retention window of GET_QUERY_OPERATOR_STATS), and performance may be slower because of the custom SQL: Query Profiler information is queried when you select each mark. If you do not wish to use the advanced version, close the CUSTOM_SQL data source and delete the dashboard to avoid errors.
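For the basic version, the longest-running queries come straight from the persistent QUERY_HISTORY view. A minimal sketch, assuming a 7-day window and TOTAL_ELAPSED_TIME in milliseconds:

-- Top 25 slowest queries in the last 7 days (illustrative only)
SELECT
    QUERY_ID,
    USER_NAME,
    WAREHOUSE_NAME,
    TOTAL_ELAPSED_TIME / 1000 AS elapsed_seconds,
    EXECUTION_STATUS
FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
WHERE START_TIME >= DATEADD('day', -7, CURRENT_TIMESTAMP())
ORDER BY TOTAL_ELAPSED_TIME DESC
LIMIT 25;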
Answer Key Business Questions
- What is the overall performance of our Snowflake infrastructure? Where is the highest consumption?
- Which warehouses are incurring the highest costs?
- How does the compute expense of our database compare with our plan?
- Where are there performance issues with specific users or queries?
- What does user adoption look like? Who uses the database the most, and who is provisioned but doesn’t use it as much as we expected? (See the sample query after this list.)
- Are there recurring errors to address and optimise?
- Based on current usage, what is the expected contract length?
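The adoption question above maps directly onto the ACCOUNT_USAGE views. A minimal sketch that lists users who are provisioned but have run no queries in the last 30 days (the window is an assumption):

-- Provisioned but inactive users (illustrative only)
SELECT
    u.NAME,
    u.CREATED_ON,
    u.LAST_SUCCESS_LOGIN
FROM SNOWFLAKE.ACCOUNT_USAGE.USERS u
LEFT JOIN SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY q
    ON q.USER_NAME = u.NAME
    AND q.START_TIME >= DATEADD('day', -30, CURRENT_TIMESTAMP())
WHERE u.DELETED_ON IS NULL
GROUP BY u.NAME, u.CREATED_ON, u.LAST_SUCCESS_LOGIN
HAVING COUNT(q.QUERY_ID) = 0;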
Monitor and Improve KPIs
- Number of Queries
- Queries per User
- Cloud Server Utilisation Rate
- Time per Query
- Error Rate
- Minutes Wasted on Errors
- Number of Users
- Avg. spend (current day, current month and previous month)
- Avg. daily credit usage
- Avg. monthly storage cost
- Cloud Services Layer Utilisation Rate
- Avg. Query Performance
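Several of these KPIs can be derived from QUERY_HISTORY alone. A minimal sketch combining a few of them, assuming a 30-day window and that failed queries carry EXECUTION_STATUS = 'FAIL':

-- Query count, user count, error rate and minutes wasted on errors (illustrative only)
SELECT
    COUNT(*) AS number_of_queries,
    COUNT(DISTINCT USER_NAME) AS number_of_users,
    AVG(TOTAL_ELAPSED_TIME) / 1000 AS avg_seconds_per_query,
    SUM(IFF(EXECUTION_STATUS = 'FAIL', 1, 0)) / COUNT(*) AS error_rate,
    SUM(IFF(EXECUTION_STATUS = 'FAIL', TOTAL_ELAPSED_TIME, 0)) / 60000 AS minutes_wasted_on_errors
FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
WHERE START_TIME >= DATEADD('day', -30, CURRENT_TIMESTAMP());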
Snowflake + Tableau Partners who can help you succeed today
- Slalom’s expertise with Tableau, Snowflake and Salesforce integrates platforms and delivers business value.
- InterWorks helps you integrate Snowflake into your data ecosystem – connect with them directly to find out how they can help you launch cloud-based visual analytics.
- CoEnterprise provides Snowflake + Tableau Solutions including Snowtrail, which helps you ensure the right level of security for your Snowflake infrastructure, as well as Runaway, which integrates Slack into performance monitoring to help optimise your Snowflake investment.
Connect to Your Snowflake Data to Enable This Accelerator
- Create a new data source in this workbook
- Select “Snowflake” as the data source, and if you have not previously installed the driver, do so now. Once the driver is installed, proceed to the next step.
Create one new data source for each of these tables:
- Database: SNOWFLAKE, Schema: ACCOUNT_USAGE, Table: QUERY_HISTORY
- Database: SNOWFLAKE, Schema: ACCOUNT_USAGE, Table: WAREHOUSE_METERING_HISTORY
- Database: SNOWFLAKE, Schema: ACCOUNT_USAGE, Table: STORAGE_USAGE
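Note that the ACCOUNT_USAGE schema is only visible to roles that have been granted access to the shared SNOWFLAKE database. A sketch of the usual grant and a quick sanity check (REPORTING_ROLE is a hypothetical role name; run the grant as ACCOUNTADMIN):

-- Grant access to the shared SNOWFLAKE database (hypothetical role name)
GRANT IMPORTED PRIVILEGES ON DATABASE SNOWFLAKE TO ROLE REPORTING_ROLE;
-- Confirm the three tables are reachable
SELECT COUNT(*) FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY;
SELECT COUNT(*) FROM SNOWFLAKE.ACCOUNT_USAGE.WAREHOUSE_METERING_HISTORY;
SELECT COUNT(*) FROM SNOWFLAKE.ACCOUNT_USAGE.STORAGE_USAGE;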
If using the Slow Running Queries Advanced Dashboard:
Create another data source by connecting to Snowflake and choosing the ACCOUNT_USAGE schema. Then, drag a new Custom SQL Table into the canvas. Input the following Custom SQL text (also found in the field “Custom SQL Text” in the data source).
SELECT
    QUERY_ID,
    STEP_ID,
    OPERATOR_ID,
    PARENT_OPERATOR_ID,
    OPERATOR_TYPE,
    OPERATOR_STATISTICS,
    EXECUTION_TIME_BREAKDOWN,
    OPERATOR_ATTRIBUTES,
    EXECUTION_TIME_BREAKDOWN:overall_percentage::float AS OPERATOR_EXECUTION_TIME,
    OPERATOR_STATISTICS:output_rows output_rows,
    OPERATOR_STATISTICS:input_rows input_rows,
    CASE WHEN OPERATOR_STATISTICS:input_rows > 0
         THEN OPERATOR_STATISTICS:output_rows / OPERATOR_STATISTICS:input_rows
         ELSE 0 END AS row_multiple,
    CAST(OPERATOR_STATISTICS:spilling:bytes_spilled_local_storage AS INT) bytes_spilled_local,
    CAST(OPERATOR_STATISTICS:spilling:bytes_spilled_remote_storage AS INT) bytes_spilled_remote,
    OPERATOR_STATISTICS:io:percentage_scanned_from_cache::float percentage_scanned_from_cache,
    OPERATOR_STATISTICS:table_name::string tablename,
    CAST(OPERATOR_STATISTICS:pruning:partitions_scanned AS INT) partitions_scanned,
    CAST(OPERATOR_STATISTICS:pruning:partitions_total AS INT) partitions_total,
    OPERATOR_STATISTICS:pruning:partitions_scanned / OPERATOR_STATISTICS:pruning:partitions_total::float AS partition_scan_ratio,
    CLUSTERING_KEY,
    LAG(OPERATOR_TYPE) OVER (ORDER BY OPERATOR_ID) LAG_OPERATOR_TYPE
FROM TABLE(get_query_operator_stats(<Parameters.QueryID>))
LEFT JOIN SNOWFLAKE_SAMPLE_DATA.INFORMATION_SCHEMA.TABLES t
    ON TABLENAME = t.TABLE_CATALOG || '.' || t.TABLE_SCHEMA || '.' || t.TABLE_NAME
ORDER BY STEP_ID, OPERATOR_ID
You may receive an error message at this point because the parameter that selects a valid Query ID has not yet been set. You can ignore these errors until you have replaced the data sources.
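To confirm the function behind the custom SQL works in your account before wiring up the parameter, you can run it directly in a Snowflake worksheet against the last query in your session:

-- Operator statistics for the most recent query in this session
SELECT * FROM TABLE(GET_QUERY_OPERATOR_STATS(LAST_QUERY_ID()));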
Please note: As of May 2019 these tables do not contain any cost information pertaining to Materialised Views, Automatic Clustering or Snowpipe. That information is contained in separate tables.
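If you need those costs as well, each lives in its own ACCOUNT_USAGE view with a CREDITS_USED column. A minimal sketch (which views return rows depends on the features your account actually uses):

-- Serverless credit consumption by feature (illustrative only)
SELECT 'materialised_views' AS feature, SUM(CREDITS_USED) AS credits
FROM SNOWFLAKE.ACCOUNT_USAGE.MATERIALIZED_VIEW_REFRESH_HISTORY
UNION ALL
SELECT 'automatic_clustering', SUM(CREDITS_USED)
FROM SNOWFLAKE.ACCOUNT_USAGE.AUTOMATIC_CLUSTERING_HISTORY
UNION ALL
SELECT 'snowpipe', SUM(CREDITS_USED)
FROM SNOWFLAKE.ACCOUNT_USAGE.PIPE_USAGE_HISTORY;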
- Once you have connected to all four tables, right-click on the respective sample tables, and replace each with the appropriate table from your own Snowflake schema. For example, right-click on QUERY_HISTORY_SAMPLE and replace it with the QUERY_HISTORY table from your own database.
- When your own data is populating the dashboards, right-click on each of the sample data sources individually, and close them.
Resources
Download and start your free trial of Tableau today.
Try Tableau Now