Which of the following describes the Snowflake Cloud Services layer?
Coordinates activities in the Snowflake account
Executes queries submitted by the Snowflake account users
Manages quotas on the Snowflake account storage
Manages the virtual warehouse cache to speed up queries
The Snowflake Cloud Services layer is a collection of services that coordinate activities across Snowflake, tying together all the different components to process user requests, from login to query dispatch.
References = [COF-C02] SnowPro Core Certification Exam Study Guide, Snowflake Documentation
Which services does the Snowflake Cloud Services layer manage? (Choose two.)
Compute resources
Query execution
Authentication
Data storage
Metadata
The Snowflake Cloud Services layer manages various services, including authentication and metadata management. This layer ties together all the different components of Snowflake to process user requests, manage sessions, and control access.
Which Snowflake SQL statement would be used to determine which users and roles have access to a role called MY_ROLE?
SHOW GRANTS OF ROLE MY_ROLE
SHOW GRANTS TO ROLE MY_ROLE
SHOW GRANTS FOR ROLE MY_ROLE
SHOW GRANTS ON ROLE MY_ROLE
The SQL statement SHOW GRANTS OF ROLE MY_ROLE is used to determine which users and roles have access to a role called MY_ROLE: it lists every user and role to which MY_ROLE has been granted. By contrast, SHOW GRANTS TO ROLE MY_ROLE lists the privileges and roles granted to MY_ROLE itself, not who can assume it. References: [COF-C02] SnowPro Core Certification Exam Study Guide
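A minimal illustration of the difference (using the role name from the question):
SHOW GRANTS OF ROLE MY_ROLE; -- lists the users and roles that have been granted MY_ROLE
SHOW GRANTS TO ROLE MY_ROLE; -- lists the privileges and roles granted to MY_ROLE itself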
What is the minimum Fail-safe retention time period for transient tables?
1 day
7 days
12 hours
0 days
Transient tables in Snowflake have a minimum Fail-safe retention time period of 0 days. This means that once the Time Travel retention period ends, there is no additional Fail-safe period for transient tables.
What are the responsibilities of Snowflake's Cloud Service layer? (Choose three.)
Authentication
Resource management
Virtual warehouse caching
Query parsing and optimization
Query execution
Physical storage of micro-partitions
The responsibilities of Snowflake’s Cloud Service layer include authentication (A), which ensures secure access to the platform; resource management (B), which involves allocating and managing compute resources; and query parsing and optimization (D), which improves the efficiency and performance of SQL query execution.
How can a row access policy be applied to a table or a view? (Choose two.)
Within the policy DDL
Within the create table or create view DDL
By future APPLY for all objects in a schema
Within a control table
Using the command ALTER
A row access policy can be applied to a table or a view within the CREATE TABLE or CREATE VIEW DDL when the object is created. Additionally, an existing row access policy can be added to an existing table or view using the ALTER TABLE or ALTER VIEW command.
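A short sketch of both approaches, assuming a hypothetical SALES table and an existing policy named REGION_POLICY:
CREATE TABLE sales (region STRING, amount NUMBER)
WITH ROW ACCESS POLICY region_policy ON (region); -- attach at creation time
ALTER TABLE sales ADD ROW ACCESS POLICY region_policy ON (region); -- attach to an existing table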
What is the default file size when unloading data from Snowflake using the COPY command?
5 MB
8 GB
16 MB
32 MB
The default file size when unloading data from Snowflake with the COPY INTO <location> command is 16 MB: the MAX_FILE_SIZE copy option, which sets the upper size limit of each generated file, defaults to 16777216 bytes. A different limit can be set by specifying MAX_FILE_SIZE explicitly.
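For example, the limit can be stated explicitly when unloading (the stage and table names are hypothetical):
COPY INTO @my_stage/unload/ FROM my_table
FILE_FORMAT = (TYPE = CSV)
MAX_FILE_SIZE = 16777216; -- 16 MB, which is also the default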
Which data types are supported by Snowflake when using semi-structured data? (Choose two.)
VARIANT
VARRAY
STRUCT
ARRAY
QUEUE
Snowflake supports the VARIANT and ARRAY data types for semi-structured data. VARIANT can store values of any other type, including OBJECT and ARRAY, making it suitable for semi-structured data formats like JSON. ARRAY is used to store an ordered list of elements.
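A minimal sketch of both types in use (the table and values are illustrative):
CREATE TABLE semi_demo (v VARIANT, a ARRAY);
INSERT INTO semi_demo
SELECT PARSE_JSON('{"name": "snow"}'), ARRAY_CONSTRUCT(1, 2, 3);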
Which of the following is a data tokenization integration partner?
Protegrity
Tableau
DBeaver
SAP
Protegrity is listed as a data tokenization integration partner for Snowflake. This partnership allows Snowflake users to utilize Protegrity’s tokenization solutions within the Snowflake environment.
References = [COF-C02] SnowPro Core Certification Exam Study Guide, Snowflake Documentation
Which Snowflake architectural layer is responsible for a query execution plan?
Compute
Data storage
Cloud services
Cloud provider
In Snowflake’s architecture, the Cloud Services layer is responsible for generating the query execution plan. This layer handles all the coordination, optimization, and management tasks, including query parsing, optimization, and compilation into an execution plan that can be processed by the Compute layer.
What features that are part of the Continuous Data Protection (CDP) feature set in Snowflake do not require additional configuration? (Choose two.)
Row level access policies
Data masking policies
Data encryption
Time Travel
External tokenization
Data encryption and Time Travel are part of Snowflake’s Continuous Data Protection (CDP) feature set that do not require additional configuration. Data encryption is automatically applied to all files stored on internal stages, and Time Travel allows for querying and restoring data without any extra setup.
What are common issues found by using the Query Profile? (Choose two.)
Identifying queries that will likely run very slowly before executing them
Locating queries that consume a high amount of credits
Identifying logical issues with the queries
Identifying inefficient micro-partition pruning
Data spilling to a local or remote disk
The Query Profile in Snowflake is used to identify performance issues with queries. Common issues that can be found using the Query Profile include identifying inefficient micro-partition pruning (D) and data spilling to a local or remote disk (E). Micro-partition pruning is related to the efficiency of query execution, and data spilling occurs when the memory is insufficient, causing the query to write data to disk, which can slow down the query performance.
Which tasks are performed in the Snowflake Cloud Services layer? (Choose two.)
Management of metadata
Computing the data
Maintaining Availability Zones
Infrastructure security
Parsing and optimizing queries
The Snowflake Cloud Services layer performs a variety of tasks, including the management of metadata and the parsing and optimization of queries. This layer is responsible for coordinating activities across Snowflake, including user session management, security, and query compilation.
What affects whether the query results cache can be used?
If the query contains a deterministic function
If the virtual warehouse has been suspended
If the referenced data in the table has changed
If multiple users are using the same virtual warehouse
The query results cache can be used as long as the data in the table has not changed since the last time the query was run. If the underlying data has changed, Snowflake will not use the cached results and will re-execute the query.
What type of query benefits the MOST from search optimization?
A query that uses only disjunction (i.e., OR) predicates
A query that includes analytical expressions
A query that uses equality predicates or predicates that use IN
A query that filters on semi-structured data types
Search optimization in Snowflake is designed to improve the performance of queries that are selective and involve point lookup operations using equality and IN predicates. It is particularly beneficial for queries that access columns with a high number of distinct values.
References = [COF-C02] SnowPro Core Certification Exam Study Guide, Snowflake Documentation
What is the minimum Snowflake edition that has column-level security enabled?
Standard
Enterprise
Business Critical
Virtual Private Snowflake
Column-level security, which allows for the application of masking policies to columns in tables or views, is available starting from the Enterprise edition of Snowflake.
References = [COF-C02] SnowPro Core Certification Exam Study Guide, Snowflake Documentation
Which of the following describes a Snowflake stored procedure?
They can be created as secure and hide the underlying metadata from the user.
They can only access tables from a single database.
They can contain only a single SQL statement.
They can be created to run with a caller's rights or an owner's rights.
Snowflake stored procedures can be created to execute with the privileges of the role that owns the procedure (owner’s rights) or with the privileges of the role that calls the procedure (caller’s rights). This allows for flexibility in managing security and access control within Snowflake.
Which of the following significantly improves the performance of selective point lookup queries on a table?
Clustering
Materialized Views
Zero-copy Cloning
Search Optimization Service
The Search Optimization Service significantly improves the performance of selective point lookup queries on tables by creating and maintaining a persistent data structure called a search access path, which allows some micro-partitions to be skipped when scanning the table.
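For example, search optimization can be enabled for a whole table, or scoped to specific columns (the table and column names are hypothetical):
ALTER TABLE my_table ADD SEARCH OPTIMIZATION;
ALTER TABLE my_table ADD SEARCH OPTIMIZATION ON EQUALITY(id);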
When should a multi-cluster warehouse be used in auto-scaling mode?
When it is unknown how much compute power is needed
If the select statement contains a large number of temporary tables or Common Table Expressions (CTEs)
If the runtime of the executed query is very slow
When a large number of concurrent queries are run on the same warehouse
A multi-cluster warehouse should be used in auto-scaling mode when there is a need to handle a large number of concurrent queries. Auto-scaling allows Snowflake to automatically add or remove compute clusters to balance the load, ensuring that performance remains consistent during varying levels of demand.
A virtual warehouse is created using the following command:
Create warehouse my_WH with
warehouse_size = MEDIUM
min_cluster_count = 1
max_cluster_count = 1
auto_suspend = 60
auto_resume = true;
The image below is a graphical representation of the warehouse utilization across two days.
What action should be taken to address this situation?
Increase the warehouse size from Medium to 2XL.
Increase the value for the parameter MAX_CONCURRENCY_LEVEL.
Configure the warehouse to a multi-cluster warehouse.
Lower the value of the parameter STATEMENT_QUEUED_TIMEOUT_IN_SECONDS.
The graphical representation of warehouse utilization indicates periods of significant queuing, suggesting that the current single cluster cannot efficiently handle all incoming queries. Configuring the warehouse to a multi-cluster warehouse will distribute the load among multiple clusters, reducing queuing times and improving overall performance.
References = Snowflake Documentation on Multi-cluster Warehouses
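A minimal sketch of the fix, reusing the warehouse from the question (the cluster counts are illustrative):
ALTER WAREHOUSE my_WH SET
MIN_CLUSTER_COUNT = 1
MAX_CLUSTER_COUNT = 3;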
What are supported file formats for unloading data from Snowflake? (Choose three.)
XML
JSON
Parquet
ORC
AVRO
CSV
The supported file formats for unloading data from Snowflake include JSON, Parquet, and CSV. These formats are commonly used for their flexibility and compatibility with various data processing tools.
What is the minimum Snowflake edition required for row level security?
Standard
Enterprise
Business Critical
Virtual Private Snowflake
Row level security in Snowflake is available starting with the Enterprise edition. This feature allows for the creation of row access policies that can control access to data at the row level within tables and views.
Which of the following statements apply to Snowflake in terms of security? (Choose two.)
Snowflake leverages a Role-Based Access Control (RBAC) model.
Snowflake requires a user to configure an IAM user to connect to the database.
All data in Snowflake is encrypted.
Snowflake can run within a user's own Virtual Private Cloud (VPC).
All data in Snowflake is compressed.
Snowflake uses a Role-Based Access Control (RBAC) model to manage access to data and resources. Additionally, Snowflake ensures that all data is encrypted, both at rest and in transit, to provide a high level of security for data stored within the platform. References: [COF-C02] SnowPro Core Certification Exam Study Guide
If a size Small virtual warehouse is made up of two servers, how many servers make up a Large warehouse?
4
8
16
32
In Snowflake, each size increase in virtual warehouses doubles the number of servers. Therefore, if a size Small virtual warehouse is made up of two servers, a Large warehouse, which is two sizes larger, would be made up of eight servers (2 servers for Small, 4 for Medium, and 8 for Large).
Size specifies the amount of compute resources available per cluster in a warehouse. Snowflake supports the following warehouse sizes:
https://docs.snowflake.com/en/user-guide/warehouses-overview.html
Network policies can be set at which Snowflake levels? (Choose two.)
Role
Schema
User
Database
Account
Tables
Network policies in Snowflake can be set at the user level and at the account level.
What is true about sharing data in Snowflake? (Choose two.)
The Data Consumer pays for data storage as well as for data computing.
The shared data is copied into the Data Consumer account, so the Consumer can modify it without impacting the base data of the Provider.
A Snowflake account can both provide and consume shared data.
The Provider is charged for compute resources used by the Data Consumer to query the shared data.
The Data Consumer pays only for compute resources to query the shared data.
In Snowflake’s data sharing model, any full Snowflake account can both provide and consume shared data. Additionally, the data consumer pays only for the compute resources used to query the shared data. No actual data is copied or transferred between accounts, and shared data does not take up any storage in a consumer account, so the consumer does not pay for data storage.
References = Introduction to Secure Data Sharing | Snowflake Documentation
What is the following SQL command used for?
Select * from table(validate(t1, job_id => '_last'));
To validate external table files in table t1 across all sessions
To validate task SQL statements against table t1 in the last 14 days
To validate a file for errors before it gets executed using a COPY command
To return errors from the last executed COPY command into table t1 in the current session
The SQL command Select * from table(validate(t1, job_id => '_last')); is used to return errors from the last executed COPY command into table t1 in the current session. It checks the results of the most recent data load operation and provides details on any errors that occurred during that process.
Which command should be used to download files from a Snowflake stage to a local folder on a client's machine?
PUT
GET
COPY
SELECT
The GET command is used to download files from a Snowflake stage to a local folder on a client’s machine.
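For example (the stage and local path are hypothetical):
GET @my_stage/data/ file:///tmp/downloads/;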
In an auto-scaling multi-cluster virtual warehouse with the setting SCALING_POLICY = ECONOMY enabled, when is another cluster started?
When the system has enough load for 2 minutes
When the system has enough load for 6 minutes
When the system has enough load for 8 minutes
When the system has enough load for 10 minutes
In an auto-scaling multi-cluster virtual warehouse with SCALING_POLICY set to ECONOMY, another cluster is started only when the system estimates there is enough query load to keep the additional cluster busy for at least 6 minutes (B). This policy conserves credits by keeping running clusters fully loaded rather than starting additional clusters at the first sign of queuing.
How should a Snowflake user configure a virtual warehouse to be in Maximized mode?
Set the WAREHOUSE_SIZE to 6XL.
Set the STATEMENT_TIMEOUT_IN_SECONDS to 0.
Set the MAX_CONCURRENCY_LEVEL to a value of 12 or larger.
Set the same value for both MIN_CLUSTER_COUNT and MAX_CLUSTER_COUNT.
In Snowflake, configuring a virtual warehouse to be in a "Maximized" mode implies maximizing the resources allocated to the warehouse for its duration. This is done to ensure that the warehouse has a consistent amount of compute resources available, enhancing performance for workloads that require a high level of parallel processing or for handling high query volumes.
To configure a virtual warehouse in maximized mode, you should set the same value for both MIN_CLUSTER_COUNT and MAX_CLUSTER_COUNT. This configuration ensures that the warehouse operates with a fixed number of clusters, thereby providing a stable and maximized level of compute resources.
Reference to Snowflake documentation on warehouse sizing and scaling:
Warehouse Sizing and Scaling
Understanding Warehouses
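A minimal sketch of a maximized (non-scaling) multi-cluster warehouse, with illustrative values:
CREATE WAREHOUSE max_wh WITH
WAREHOUSE_SIZE = 'MEDIUM'
MIN_CLUSTER_COUNT = 4
MAX_CLUSTER_COUNT = 4; -- equal counts mean all clusters run whenever the warehouse runs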
Granting a user which privileges on all virtual warehouses is equivalent to granting the user the global MANAGE WAREHOUSES privilege?
MODIFY, MONITOR and OPERATE privileges
OWNERSHIP and USAGE privileges
APPLYBUDGET and AUDIT privileges
MANAGE LISTING AUTO FULFILLMENT and RESOLVE ALL privileges
Granting a user the MODIFY, MONITOR, and OPERATE privileges on all virtual warehouses in Snowflake is equivalent to granting the global MANAGE WAREHOUSES privilege. These privileges collectively provide comprehensive control over virtual warehouses.
MODIFY Privilege:
Allows users to change the configuration of the virtual warehouse.
Includes resizing, suspending, and resuming the warehouse.
MONITOR Privilege:
Allows users to view the status and usage metrics of the virtual warehouse.
Enables monitoring of performance and workload.
OPERATE Privilege:
Grants the ability to start and stop the virtual warehouse.
Includes pausing and resuming operations as needed.
References:
Snowflake Documentation: Warehouse Privileges
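For example (the warehouse and role names are hypothetical):
GRANT MODIFY, MONITOR, OPERATE ON WAREHOUSE my_wh TO ROLE wh_admin;
GRANT MANAGE WAREHOUSES ON ACCOUNT TO ROLE wh_admin; -- the equivalent global privilege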
Which type of workload traditionally benefits from the use of the query acceleration service?
Workloads with a predictable data volume for each query
Workloads that include on-demand data analyses
Queries with small scans and non-selective filters
Queries that do not have filters or aggregation
The query acceleration service in Snowflake is beneficial for workloads that include on-demand data analyses. This service optimizes query performance by dynamically allocating additional resources to execute queries faster, particularly useful for ad-hoc analysis where data volume and complexity can vary.
References:
Snowflake Documentation: Query Acceleration Service
What virtual warehouse configuration should be used when processing a large number of complex queries?
Use the auto-resume feature.
Run the warehouse in auto-scale mode.
Increase the size of the warehouse.
Increase the number of warehouse clusters.
To handle a large number of complex queries, configuring the warehouse in auto-scale mode by increasing the number of warehouse clusters is recommended. This setup allows Snowflake to dynamically add clusters as demand increases, ensuring better performance and concurrency. Increasing the number of clusters provides scalability for concurrent users and heavy workloads, improving response times without impacting individual query performance.
How does the search optimization service help Snowflake users improve query performance?
It scans the micro-partitions based on the joins used in the queries and scans only join columns.
It maintains a persistent data structure that keeps track of the values of the table's columns in each of its micro-partitions.
It scans the local disk cache to avoid scans on the tables used in the Query.
It keeps track of running queries and their results and saves those extra scans on the table.
The search optimization service in Snowflake enhances query performance by maintaining a persistent data structure. This structure indexes the values of table columns across micro-partitions, allowing Snowflake to quickly identify which micro-partitions contain relevant data for a query. By efficiently narrowing down the search space, this service reduces the amount of data scanned during query execution, leading to faster response times and more efficient use of resources. References: Snowflake Documentation on Search Optimization Service
In Snowflake, what allows users to perform recursive queries?
QUALIFY
LATERAL
PIVOT
CONNECT BY
In Snowflake, the CONNECT BY clause allows users to perform recursive queries. Recursive queries are used to process hierarchical or tree-structured data, such as organizational charts or file systems. The CONNECT BY clause is used in conjunction with the START WITH clause to specify the starting point of the hierarchy and the relationship between parent and child rows.
References:
Snowflake Documentation: Hierarchical Queries
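A short example, assuming a hypothetical EMPLOYEES table with EMPLOYEE_ID, MANAGER_ID, and TITLE columns:
SELECT employee_id, manager_id, title
FROM employees
START WITH title = 'President'
CONNECT BY manager_id = PRIOR employee_id;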
When working with table MY_TABLE that contains 10 rows, which sampling query will always return exactly 5 rows?
SELECT * FROM MY_TABLE SAMPLE SYSTEM (5);
SELECT * FROM MY_TABLE SAMPLE BERNOULLI (5);
SELECT * FROM MY_TABLE SAMPLE (5 ROWS);
SELECT * FROM MY_TABLE SAMPLE SYSTEM (1) SEED (5);
In Snowflake, SAMPLE (5 ROWS) ensures an exact count of 5 rows is returned from MY_TABLE, regardless of table size. This is different from SAMPLE SYSTEM or SAMPLE BERNOULLI, which use percentage-based sampling, potentially returning varying row counts based on probabilistic methods.
The ROWS option is deterministic and does not depend on percentage, making it ideal when an exact row count is required.
What should be considered when deciding to use a secure view? (Select TWO).
No details of the query execution plan will be available in the query profiler.
Once created there is no way to determine if a view is secure or not.
Secure views do not take advantage of the same internal optimizations as standard views.
It is not possible to create secure materialized views.
The view definition of a secure view is still visible to users by way of the information schema.
When deciding to use a secure view, several considerations come into play, especially concerning security and performance:
A. No details of the query execution plan will be available in the query profiler: Secure views are designed to prevent the exposure of the underlying data and the view definition to unauthorized users. Because of this, the detailed execution plans for queries against secure views are not available in the query profiler. This is intended to protect sensitive data from being inferred through the execution plan.
C. Secure views do not take advantage of the same internal optimizations as standard views: Secure views, by their nature, limit some of the optimizations that can be applied compared to standard views. This is because they enforce row-level security and mask data, which can introduce additional processing overhead and limit the optimizer's ability to apply certain efficiencies that are available to standard views.
B. Once created, there is no way to determine if a view is secure or not is incorrect because metadata about whether a view is secure can be retrieved from the INFORMATION_SCHEMA views or by using the SHOW VIEWS command.
D. It is not possible to create secure materialized views is incorrect because the limitation is not on the security of the view but on the fact that Snowflake currently does not support materialized views with the same dynamic data masking and row-level security features as secure views.
E. The view definition of a secure view is still visible to users by way of the information schema is incorrect because secure views specifically hide the view definition from users who do not have the privilege to view it, ensuring that sensitive information in the definition is not exposed.
When an object is created in Snowflake, who owns the object?
The public role
The user's default role
The current active primary role
The owner of the parent schema
In Snowflake, when an object is created, it is owned by the role that is currently active. This active role is the one that is being used to execute the creation command. Ownership implies full control over the object, including the ability to grant and revoke access privileges. This is specified in Snowflake's documentation under the topic of Access Control, which states that "the role in use at the time of object creation becomes the owner of the object."
References:
Snowflake Documentation: Object Ownership
Which Snowflake governance feature allows users to assign metadata labels to improve data governance and database access control?
Secure functions
Secure views
Object tagging
Row-level security
Object tagging in Snowflake allows users to assign metadata labels to various database objects to improve data governance and access control. Tags can be used to categorize and manage data based on business needs, helping to enforce governance policies and streamline database administration.
References:
Snowflake Documentation: Object Tagging
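A minimal sketch (the tag, its values, and the table are illustrative):
CREATE TAG cost_center ALLOWED_VALUES 'finance', 'engineering';
ALTER TABLE my_table SET TAG cost_center = 'finance';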
How many credits does a size 3X-Large virtual warehouse consume if it runs continuously for 2 hours?
32
64
128
256
In Snowflake, the consumption of credits by a virtual warehouse is determined by its size and the duration for which it runs. A size 3X-Large virtual warehouse consumes 64 credits per hour, so running continuously for 2 hours consumes 64 × 2 = 128 credits. Each step up in warehouse size doubles the per-hour credit consumption, as defined by Snowflake's pricing model. References: Snowflake Pricing Documentation
How does the search optimization service improve query performance?
By clustering the tables
By creating a persistent data structure
By using caching
By optimizing the use of micro-partitions
The Search Optimization Service in Snowflake enhances query performance by creating a persistent data structure that enables faster access to specific data, particularly for queries with selective filters on columns not often used in clustering. This persistent structure accelerates data retrieval without depending on clustering or caching, thereby improving response times for targeted queries.
Snowflake's micro-partitioning automatically manages table structure, but search optimization allows further enhancement for certain high-frequency, specific access patterns.
Which Snowflake table is an implicit object layered on a stage, where the stage can be either internal or external?
Directory table
Temporary table
Transient table
A table with a materialized view
A directory table in Snowflake is an implicit object layered on a stage, whether internal or external. It allows users to query the contents of a stage as if it were a table, providing metadata about the files stored in the stage, such as filenames, file sizes, and last modified timestamps.
References:
Snowflake Documentation: Directory Tables
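For example, a directory table is enabled on a stage and then queried like a table (the stage name is hypothetical):
CREATE STAGE my_stage DIRECTORY = (ENABLE = TRUE);
SELECT * FROM DIRECTORY(@my_stage);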
When unloading data, which combination of parameters should be used to differentiate between empty strings and NULL values? (Select TWO).
ESCAPE_UNENCLOSED_FIELD
REPLACE_INVALID_CHARACTERS
FIELD_OPTIONALLY_ENCLOSED_BY
EMPTY_FIELD_AS_NULL
SKIP_BLANK_LINES
When unloading data in Snowflake, it is essential to differentiate between empty strings and NULL values to preserve data integrity. The parameters FIELD_OPTIONALLY_ENCLOSED_BY and EMPTY_FIELD_AS_NULL are used together to address this:
FIELD_OPTIONALLY_ENCLOSED_BY: This parameter specifies the character used to enclose fields, which can differentiate between empty strings (as enclosed fields) and NULLs.
EMPTY_FIELD_AS_NULL: By setting this parameter, Snowflake interprets empty fields as NULL values when unloading data, ensuring accurate representation of NULLs versus empty strings.
These parameters are crucial when exporting data for systems that need explicit differentiation between NULL and empty string values.
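A minimal unload sketch using both parameters (the stage and table names are hypothetical):
COPY INTO @my_stage/out/ FROM my_table
FILE_FORMAT = (TYPE = CSV
FIELD_OPTIONALLY_ENCLOSED_BY = '"'
EMPTY_FIELD_AS_NULL = FALSE);
-- Empty strings unload as "" while NULLs unload as empty, unenclosed fields.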
Which security models are used in Snowflake to manage access control? (Select TWO).
Discretionary Access Control (DAC)
Identity Access Management (IAM)
Mandatory Access Control (MAC)
Role-Based Access Control (RBAC)
Security Assertion Markup Language (SAML)
Snowflake uses both Discretionary Access Control (DAC) and Role-Based Access Control (RBAC) to manage access control. DAC allows object owners to grant access privileges to other users. RBAC assigns permissions to roles, and roles are then granted to users, making it easier to manage permissions based on user roles within the organization.
References:
Snowflake Documentation: Access Control in Snowflake
In which hierarchy is tag inheritance possible?
Organization » Account » Role
Account » User » Schema
Database » View » Column
Schema » Table » Column
In Snowflake, tag inheritance is a feature that allows tags, which are key-value pairs assigned to objects for the purpose of data governance and metadata management, to be inherited within a hierarchy. The hierarchy in which tag inheritance is possible is from Schema to Table to Column. This means that a tag applied to a schema can be inherited by the tables within that schema, and a tag applied to a table can be inherited by the columns within that table. References: Snowflake Documentation on Tagging and Object Hierarchy
Secured Data Sharing is allowed for which Snowflake database objects? (Select TWO).
Tables
User-Defined Table Functions (UDTFs)
Secure views
Stored procedures
Worksheets
Snowflake allows secure data sharing for specific database objects to ensure data is shared securely and efficiently. The primary objects that can be shared securely are tables and secure views.
Tables: Share actual data stored in tables.
Secure Views: Share derived data while protecting the underlying table structures and any sensitive information.
References:
Snowflake Documentation: Introduction to Secure Data Sharing
Snowflake Documentation: Creating Secure Views
Which command is used to upload data files from a local directory or folder on a client machine to an internal stage, for a specified table?
GET
PUT
CREATE STREAM
COPY INTO
To upload data files from a local directory or folder on a client machine to an internal stage in Snowflake, the PUT command is used. The PUT command takes files from the local file system and uploads them to an internal Snowflake stage (or a specified stage) for the purpose of preparing the data to be loaded into Snowflake tables.
Syntax example (the local path and named internal stage are hypothetical):
PUT file:///tmp/data/mydata.csv @my_stage;
This command is crucial for data ingestion workflows in Snowflake, especially when preparing to load data using the COPY INTO command.
Which type of charts are supported by Snowsight? (Select TWO)
Flowcharts
Gantt charts
Line charts
Pie charts
Scatterplots
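Of the options listed, Snowsight supports line charts (C) and scatterplots (E). Snowsight's built-in chart types include bar charts, line charts, scatterplots, heat grids, and scorecards; flowcharts, Gantt charts, and pie charts are not among them.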
Which function returns the URL of a stage using the stage name as the input?
BUILD_STAGE_FILE_URL
BUILD_SCOPED_FILE_URL
GET_PRESIGNED_URL
GET_STAGE_LOCATION
The function that returns the URL of a stage using the stage name as the input is GET_STAGE_LOCATION. It retrieves the URL for an external or internal named stage, taking only the stage name as its argument. The other options operate on individual files rather than on the stage itself: BUILD_STAGE_FILE_URL and BUILD_SCOPED_FILE_URL generate permanent and scoped URLs for a specific staged file, and GET_PRESIGNED_URL generates a presigned URL that grants temporary, credential-free access to a specific file in a stage.
References:
Snowflake Documentation: GET_STAGE_LOCATION and File Functions
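For example (the stage name is hypothetical):
SELECT GET_STAGE_LOCATION(@my_stage);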
Which table function is used to perform additional processing on the results of a previously-run query?
QUERY_HISTORY
RESULT_SCAN
DESCRIBE_RESULTS
QUERY_HISTORY_BY_SESSION
The RESULT_SCAN table function is used in Snowflake to perform additional processing on the results of a previously-run query. It allows users to reference the result set of a previous query by its query ID, enabling further analysis or transformations without re-executing the original query.
References:
Snowflake Documentation: RESULT_SCAN
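For example, to post-process the result of the most recent query in the session:
SELECT * FROM TABLE(RESULT_SCAN(LAST_QUERY_ID()));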
Which statement describes Snowflake tables?
Snowflake tables are logical representations of underlying physical data.
Snowflake tables are the physical instantiation of data loaded into Snowflake.
Snowflake tables require that clustering keys be defined to perform optimally.
Snowflake tables are owned by a user.
In Snowflake, tables represent a logical structure through which users interact with the stored data. The actual physical data is stored in micro-partitions managed by Snowflake, and the logical table structure provides the means by which SQL operations are mapped to this data. This architecture allows Snowflake to optimize storage and querying across its distributed, cloud-based data storage system.References: Snowflake Documentation on Tables
Which views are included in the data_sharing_usage schema? (Select TWO).
ACCESS_HISTORY
DATA_TRANSFER_HISTORY
WAREHOUSE_METERING_HISTORY
MONETIZED_USAGE_DAILY
LISTING_TELEMETRY_DAILY
The DATA_SHARING_USAGE schema contains views that track listing activity, including MONETIZED_USAGE_DAILY and LISTING_TELEMETRY_DAILY. References: https://docs.snowflake.com/en/sql-reference/data-sharing-usage
Which Snowflake table type persists until it is explicitly dropped, is available for all users with relevant privileges (across sessions), and has no Fail-safe period?
External
Permanent
Temporary
Transient
The type of Snowflake table that persists until it is explicitly dropped, is available for all users with relevant privileges across sessions, and does not have a Fail-safe period, is a Transient table. Transient tables are designed to provide temporary storage similar to permanent tables but with some reduced storage costs and without the Fail-safe feature, which provides additional data protection for a period beyond the retention time. Transient tables are useful in scenarios where data needs to be temporarily stored for longer than a session but does not require the robust durability guarantees of permanent tables.
Which function determines the kind of value stored in a VARIANT column?
CHECK_JSON
IS_ARRAY
IS_JSON
TYPEOF
The function used to determine the kind of value stored in a VARIANT column in Snowflake is TYPEOF.
Understanding VARIANT Data Type:
VARIANT is a flexible data type in Snowflake that can store semi-structured data, such as JSON, Avro, and XML.
This data type can hold values of different types, including strings, numbers, objects, arrays, and more.
Using TYPEOF Function:
The TYPEOF function returns the data type of the value stored in a VARIANT column.
It helps in identifying the type of data, which is crucial for processing and transforming semi-structured data accurately.
Example Usage:
SELECT TYPEOF(variant_column)
FROM my_table;
This query retrieves the type of data stored in variant_column for each row in my_table.
Possible return values include 'OBJECT', 'ARRAY', 'STRING', 'NUMBER', etc.
Benefits:
Simplifies data processing: Knowing the data type helps in applying appropriate transformations and validations.
Enhances query accuracy: Ensures that operations on VARIANT columns are performed correctly based on the data type.
References:
Snowflake Documentation: TYPEOF
Snowflake Documentation: VARIANT Data Type
Which task is supported by the use of Access History in Snowflake?
Data backups
Cost monitoring
Compliance auditing
Performance optimization
Access History in Snowflake is primarily utilized for compliance auditing. The Access History feature provides detailed logs that track data access and modifications, including queries that read from or write to database objects. This information is crucial for organizations to meet regulatory requirements and to perform audits related to data access and usage.
Role of Access History: Access History logs are designed to help organizations understand who accessed what data and when. This is particularly important for compliance with various regulations that require detailed auditing capabilities.
How Access History Supports Compliance Auditing:
By providing a detailed log of access events, organizations can trace data access patterns, identify unauthorized access, and ensure that data handling complies with relevant data protection laws and regulations.
Access History can be queried to extract specific events, users, time frames, and accessed objects, making it an invaluable tool for compliance officers and auditors.
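A minimal audit-query sketch against the Account Usage view (the seven-day filter window is illustrative):
SELECT query_id, user_name, query_start_time, direct_objects_accessed
FROM SNOWFLAKE.ACCOUNT_USAGE.ACCESS_HISTORY
WHERE query_start_time > DATEADD('day', -7, CURRENT_TIMESTAMP());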
Which virtual warehouse consideration can help lower compute resource credit consumption?
Setting up a multi-cluster virtual warehouse
Resizing the virtual warehouse to a larger size
Automating the virtual warehouse suspension and resumption settings
Increasing the maximum cluster count parameter for a multi-cluster virtual warehouse
One key strategy to lower compute resource credit consumption in Snowflake is by automating the suspension and resumption of virtual warehouses. Virtual warehouses consume credits when they are running, and managing their operational times effectively can lead to significant cost savings.
A. Setting up a multi-cluster virtual warehouse increases parallelism and throughput but does not directly lower credit consumption. It is more about performance scaling than cost efficiency.
B. Resizing the virtual warehouse to a larger size increases the compute resources available for processing queries, which increases the credit consumption rate. This option does not help in lowering costs.
C. Automating the virtual warehouse suspension and resumption settings: This is a direct method to manage credit consumption efficiently. By automatically suspending a warehouse when it is not in use and resuming it when needed, you can avoid consuming credits during periods of inactivity. Snowflake allows warehouses to be configured to automatically suspend after a specified period of inactivity and to automatically resume when a query is submitted that requires the warehouse.
D. Increasing the maximum cluster count parameter for a multi-cluster virtual warehouse would potentially increase credit consumption by allowing more clusters to run simultaneously. It is used to scale up resources for performance, not to reduce costs.
Automating the operational times of virtual warehouses ensures that you only consume compute credits when the warehouse is actively being used for queries, thereby optimizing your Snowflake credit usage.
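A minimal sketch (the warehouse name and timeout are illustrative):
ALTER WAREHOUSE my_wh SET AUTO_SUSPEND = 60 AUTO_RESUME = TRUE; -- suspend after 60 seconds of inactivity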
Given the statement template below, which database objects can be added to a share? (Select TWO).
GRANT
Secure functions
Stored procedures
Streams
Tables
Tasks
In Snowflake, shares are used to share data across different Snowflake accounts securely. According to Snowflake's documentation, the object types that can be granted to a share include tables, external tables, secure views, secure materialized views, and secure UDFs. Streams, tasks, and stored procedures cannot be added to a share. Therefore, the correct answers are secure functions (A) and tables (D).
To share a table or a secure function, you use the GRANT statement to grant privileges on the object to a share. The syntax involves specifying the type of object, the object name, and the share to which you are granting access. For example:
GRANT SELECT ON TABLE my_table TO SHARE my_share; GRANT USAGE ON FUNCTION my_db.my_schema.my_secure_udf(NUMBER) TO SHARE my_share;
These commands grant the SELECT privilege on a table named my_table and the USAGE privilege on a secure function named my_secure_udf to a share named my_share. This enables the consumer of the share to access these objects according to the granted privileges.
Which role must be used to create resource monitors?
SECURITYADMIN
ACCOUNTADMIN
SYSADMIN
ORGADMIN
In Snowflake, the ACCOUNTADMIN role is required to create resource monitors. Resource monitors are used to manage and monitor the consumption of compute resources. The ACCOUNTADMIN role has the necessary privileges to create, configure, and manage resource monitors across the account.
References:
Snowflake Documentation: Resource Monitors
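A minimal sketch (the monitor name, quota, and thresholds are illustrative):
USE ROLE ACCOUNTADMIN;
CREATE RESOURCE MONITOR monthly_limit WITH
CREDIT_QUOTA = 100
TRIGGERS ON 75 PERCENT DO NOTIFY
ON 100 PERCENT DO SUSPEND;
ALTER WAREHOUSE my_wh SET RESOURCE_MONITOR = monthly_limit;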
Which user preferences can be set for a user profile in Snowsight? (Select TWO).
Multi-Factor Authentication (MFA)
Default database
Default schema
Notification
Username
In Snowsight, Snowflake's web interface, user preferences can be customized to enhance the user experience. Among these preferences, users can set a default database and default schema. These settings streamline the user experience by automatically selecting the specified database and schema when the user initiates a new session or query, reducing the need to manually specify these parameters for each operation. This feature is particularly useful for users who frequently work within a specific database or schema context. References: Snowflake Documentation on Snowsight User Preferences
Which statistics on a Query Profile reflect the efficiency of the query pruning? (Select TWO).
Partitions scanned
Partitions total
Bytes spilled
Bytes scanned
Bytes written
In a Snowflake Query Profile, the statistics "Partitions scanned" and "Bytes scanned" reflect the efficiency of query pruning. Query pruning refers to the ability of the query engine to skip unnecessary data, thereby reducing the amount of data that needs to be processed. Efficient pruning results in fewer partitions and bytes being scanned, improving query performance.
References:
Snowflake Documentation: Understanding Query Profiles
What is the benefit of using the STRIP_OUTER_ARRAY parameter with the COPY INTO