What optional properties can a Snowflake user set when creating a virtual warehouse? (Select TWO).
Auto-suspend
Cache size
Default role
Resource monitor
Storage size
When creating a virtual warehouse in Snowflake, users have the option to set several properties to manage its behavior and resource usage. Two of these optional properties are Auto-suspend and Resource monitor.
Auto-suspend: This property defines the period of inactivity after which the warehouse will automatically suspend. This helps manage costs by stopping the warehouse when it is not in use.
CREATE WAREHOUSE my_warehouse
WITH WAREHOUSE_SIZE = 'XSMALL'
AUTO_SUSPEND = 300; -- Auto-suspend after 5 minutes of inactivity
Resource monitor: Users can assign a resource monitor to a warehouse to control and limit credit usage. Resource monitors help set quotas and alerts for warehouse usage.
CREATE WAREHOUSE my_warehouse
WITH WAREHOUSE_SIZE = 'XSMALL'
RESOURCE_MONITOR = my_resource_monitor; -- the monitor is referenced by identifier, not a quoted string
Which Query Profile metrics will provide information that can be used to improve query performance? (Select TWO).
Synchronization
Remote disk IO
Local disk IO
Pruning
Spillage
Two key metrics in Snowflake’s Query Profile that provide insights for performance improvement are:
Remote Disk IO: This measures the time the query spends waiting on remote disk access, indicating potential performance issues related to I/O bottlenecks.
Pruning: This metric reflects how effectively Snowflake’s micro-partition pruning is reducing the data scanned. Better pruning (more partitions excluded) leads to faster query performance, as fewer micro-partitions need to be processed.
These metrics are essential for identifying and addressing inefficiencies in data retrieval and storage access, optimizing overall query performance.
What are characteristics of reader accounts in Snowflake? (Select TWO).
Reader account users cannot add new data to the account.
Reader account users can share data to other reader accounts.
A single reader account can consume data from multiple provider accounts.
Data consumers are responsible for reader account setup and data usage costs.
Reader accounts enable data consumers to access and query data shared by the provider.
Characteristics of reader accounts in Snowflake include:
A. Reader account users cannot add new data to the account: Reader accounts are intended for data consumption only. Users of these accounts can query and analyze the data shared with them but cannot upload or add new data to the account.
E. Reader accounts enable data consumers to access and query data shared by the provider: One of the primary purposes of reader accounts is to allow data consumers to access and perform queries on the data shared by another Snowflake account, facilitating secure and controlled data sharing.
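As an illustration, a reader account is created and managed by the provider on the consumer's behalf; a minimal sketch (the account name and credentials are placeholders):
CREATE MANAGED ACCOUNT reader_acct
  ADMIN_NAME = 'reader_admin',      -- placeholder administrator login
  ADMIN_PASSWORD = 'Sup3rSecret!',  -- placeholder password
  TYPE = READER;
Note that the provider, not the consumer, is responsible for setting up the reader account and pays for the compute it uses.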
This query was executed:
Assuming the weather_events table has not changed, which query would benefit from cached results?
SELECT DISTINCT(severity) FROM weather_events we; if run within 24 hours after the initial query execution
select distinct (severity) from weather_events; if run within 24 hours after the initial query execution
SELECT DISTINCT(severity) FROM weather_events ; if run within 24 hours after the initial query execution
select distinct(severity) from weather_events; if run within 48 hours after the initial query execution
How can a row access policy be applied to a table or a view? (Choose two.)
Within the policy DDL
Within the create table or create view DDL
By future APPLY for all objects in a schema
Within a control table
Using the command ALTER
A row access policy can be applied to a table or a view within the CREATE TABLE or CREATE VIEW DDL (using the WITH ROW ACCESS POLICY clause). Additionally, an existing row access policy can be applied to a table or a view using the ALTER TABLE ... ADD ROW ACCESS POLICY or ALTER VIEW ... ADD ROW ACCESS POLICY command.
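A sketch of both approaches (the policy, table, and column names are hypothetical):
CREATE ROW ACCESS POLICY region_policy AS (region VARCHAR)
  RETURNS BOOLEAN -> CURRENT_ROLE() = 'ANALYST_US' AND region = 'US';
-- Apply within the CREATE TABLE DDL:
CREATE TABLE sales (id INT, region VARCHAR)
  WITH ROW ACCESS POLICY region_policy ON (region);
-- Or apply to an existing table using ALTER:
ALTER TABLE sales ADD ROW ACCESS POLICY region_policy ON (region);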
Which Snowflake data governance feature supports resource usage monitoring?
Data classification
Column lineage
Access history
Object tagging
Which command will create an ARRAY output from inputs 'a' and 'b'?
ARRAY_CONSTRUCT('a', 'b');
TO_ARRAY('a', 'b');
AS_ARRAY('a', 'b');
LISTAGG('a', 'b');
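ARRAY_CONSTRUCT builds an ARRAY directly from its arguments; a quick check:
SELECT ARRAY_CONSTRUCT('a', 'b');  -- returns ["a","b"]
By contrast, TO_ARRAY accepts a single argument and LISTAGG produces a string, not an ARRAY.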
What is the least-privileged database role needed to view the definition of a secure materialized view?
USAGE_VIEWER
OBJECT_VIEWER
SECURITY_VIEWER
GOVERNANCE_VIEWER
The GOVERNANCE_VIEWER database role is the least-privileged SNOWFLAKE database role that can view the definitions of secure objects, such as secure materialized views, through the Account Usage views; the other roles listed do not expose the definition text of secure objects.
What is the default size for a newly-provisioned Snowpark-optimized virtual warehouse?
X-Large
Large
Medium
Small
By default, Snowpark-optimized warehouses are provisioned with Medium size, unless specified otherwise. This provides balanced compute for handling Snowpark operations.
What is the default authentication method while using the JDBC driver connection in Snowflake?
externalbrowser
snowflake
username_password_mfa
snowflake_jwt
The default authentication mechanism for Snowflake clients (including JDBC) is the snowflake method, which uses username and password authentication.
A directory table that references an external stage is being refreshed manually.
What will happen to any scheduled automatic-refresh operations occurring at the same time as the manual-refresh operations?
The manual-refresh operations and the automatic-refresh operations will both take place.
The automatic-refresh operations will be aborted.
The automatic-refresh operations will be performed more often.
The automatic-refresh operations will continue on the established schedule.
How can a user MINIMIZE Continuous Data Protection costs when using large, high-churn, dimension tables?
Create transient tables and periodically copy them to permanent tables.
Create temporary tables and periodically copy them to permanent tables
Create regular tables with extended Time Travel and Fail-safe settings.
Create regular tables with default Time Travel and Fail-safe settings
To minimize Continuous Data Protection (CDP) costs when dealing with large, high-churn dimension tables in Snowflake, using transient tables is a recommended approach.
Transient Tables: These are designed for data that does not require fail-safe protection. They provide the benefit of reducing costs associated with continuous data protection since they do not have the seven-day Fail-safe period that is mandatory for permanent tables.
Periodic Copying to Permanent Tables: By periodically copying data from transient tables to permanent tables, you can achieve a balance between data protection and cost-efficiency. Permanent tables offer the extended data protection features, including Time Travel and Fail-safe, but these features can be applied selectively rather than continuously, reducing the overall CDP costs.
Snowflake Documentation on Transient Tables
Snowflake Documentation on Time Travel & Fail-safe
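A minimal sketch of this pattern, with hypothetical table names:
-- High-churn staging table: no Fail-safe, Time Travel reduced to 0 days
CREATE TRANSIENT TABLE dim_customer_t (customer_id INT, name VARCHAR)
  DATA_RETENTION_TIME_IN_DAYS = 0;
-- Periodically persist a snapshot to a permanent table for full CDP protection
INSERT INTO dim_customer_perm SELECT * FROM dim_customer_t;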
What does the Remote Disk I/O statistic in the Query Profile indicate?
Time spent reading from the result cache.
Time spent reading from the virtual warehouse cache.
Time when the query processing was blocked by remote disk access.
The level of network activity between the Cloud Services layer and the virtual warehouse.
The Remote Disk I/O statistic in the Query Profile reflects time spent waiting on remote disk access, which can occur when data needs to be retrieved from external storage (remote). This metric is crucial for identifying bottlenecks related to I/O delays, often suggesting a need for performance optimization in data retrieval paths.
The other options relate to caching and network activity, but Remote Disk I/O specifically measures the wait time for data access from remote storage locations.
How should a Snowflake user configure a virtual warehouse to be in Maximized mode?
Set the WAREHOUSE_SIZE to 6XL.
Set the STATEMENT_TIMEOUT_IN_SECONDS to 0.
Set the MAX_CONCURRENCY_LEVEL to a value of 12 or larger.
Set the same value for both MIN_CLUSTER_COUNT and MAX_CLUSTER_COUNT.
In Snowflake, configuring a virtual warehouse to be in a "Maximized" mode implies maximizing the resources allocated to the warehouse for its duration. This is done to ensure that the warehouse has a consistent amount of compute resources available, enhancing performance for workloads that require a high level of parallel processing or for handling high query volumes.
To configure a virtual warehouse in maximized mode, you should set the same value for both MIN_CLUSTER_COUNT and MAX_CLUSTER_COUNT. This configuration ensures that the warehouse operates with a fixed number of clusters, thereby providing a stable and maximized level of compute resources.
Reference to Snowflake documentation on warehouse sizing and scaling:
Warehouse Sizing and Scaling
Understanding Warehouses
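A sketch of a maximized configuration (the warehouse name, size, and cluster count are illustrative):
CREATE WAREHOUSE maxed_wh WITH
  WAREHOUSE_SIZE = 'MEDIUM'
  MIN_CLUSTER_COUNT = 3
  MAX_CLUSTER_COUNT = 3;  -- equal counts: all 3 clusters always run (Maximized mode)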
Which URL type permits temporary access to a staged file without granting privileges to the stage?
Pre-signed URL
Scoped URL
File URL
BUILD_STAGE_FILE_URL
When an ACCOUNTADMIN gives a user a custom role, what privilege or privileges is the user granted by default?
All privileges that have been granted to the ACCOUNTADMIN
All privileges on objects allowed by the custom role
Access to the PUBLIC role
Access to all the objects owned by the USERADMIN role
Which type of role can be granted to a share?
Account role
Custom role
Database role
Secondary role
In Snowflake, shares are used to share data between Snowflake accounts. Account-level roles, including custom roles, cannot be granted to a share; the type of role that can be granted to a share is a Database role. Database roles are created within the database being shared (CREATE DATABASE ROLE), granted privileges on a subset of the objects in that database, and then granted to the share with GRANT DATABASE ROLE <database_name>.<role_name> TO SHARE <share_name>.
Granting database roles to a share lets a provider expose different subsets of a database's objects to consumers through a single share, giving more granular control over which shared objects each consumer role can access.
What MINIMUM permissions are required to create a pipe in Snowflake? (Select TWO).
CREATE PIPE at the schema level
SELECT and INSERT on the table in the pipe definition
CREATE SCHEMA at the database level
USAGE on the virtual warehouse
OWNERSHIP on the table in the pipe definition
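A sketch of the minimal grants followed by the pipe creation (the role, database, schema, stage, and table names are hypothetical; in practice the role also needs USAGE on the database, schema, and stage):
GRANT CREATE PIPE ON SCHEMA my_db.raw TO ROLE loader;
GRANT SELECT, INSERT ON TABLE my_db.raw.events TO ROLE loader;
CREATE PIPE my_db.raw.events_pipe AS
  COPY INTO my_db.raw.events FROM @my_db.raw.events_stage;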
Which command will unload data from a table into an external stage?
PUT
INSERT
COPY INTO
GET
In Snowflake, the COPY INTO <location> command unloads (exports) data from a table into files in an internal or external stage.
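For example (the stage and table names are placeholders):
COPY INTO @my_ext_stage/unload/
  FROM my_table
  FILE_FORMAT = (TYPE = 'CSV' COMPRESSION = 'GZIP');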
A Snowflake account holds data that will be consumed by a BI tool. Most of the BI users are located in the United States, but some users will require access from abroad with different levels of concurrency.
What virtual warehouse would be MOST cost-efficient?
A maximized multi-cluster warehouse with the minimum number of clusters that are big enough to handle the expected maximum concurrency.
A large enough standard warehouse able to handle the expected maximum concurrency.
A large enough Snowpark-optimized warehouse that will have sufficient memory to handle the expected maximum concurrency.
An auto-scale, multi-cluster warehouse with a maximum number of clusters that are large enough to handle the expected maximum concurrency.
A table, STUDENTS, is created using this command:
An example value for the marks is:
Which query will return all of the students' names and history marks?
SELECT NAME, MARKS [0] FROM STUDENTS;
SELECT NAME, MARKS:history FROM STUDENTS;
SELECT NAME, GET (MARKS, 'history') FROM STUDENTS;
SELECT NAME, MARKS: HISTORY FROM STUDENTS;
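Assuming MARKS is a VARIANT column holding JSON such as {"history": 92, "math": 88}, the colon notation extracts a value by key. Note that the key lookup is case-sensitive, so MARKS:HISTORY would return NULL for a lowercase "history" key:
SELECT name, marks:history FROM students;
-- GET(marks, 'history') is an equivalent function-style lookup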
Based on a review of a Query Profile, which scenarios will benefit the MOST from the use of a data clustering key? (Select TWO.)
A column that appears most frequently in order by operations
A column that appears most frequently in where operations
A column that appears most frequently in group by operations
A column that appears most frequently in aggregate operations
A column that appears most frequently in join operations
Which schema-level objects allow the user to periodically perform an action under specific conditions, based on data within Snowflake?
Alerts
External tables
Secure views
Materialized views
A user wants to create objects within a schema but wants to restrict other users' ability to grant privileges on these objects. What configuration should be used to create the schema?
Use a regular (non-managed) schema
Use a managed access schema
Use a transient schema.
Set the DEFAULT_DDL_COLLATION parameter.
How does the authorization associated with a pre-signed URL work for an unstructured file?
Anyone who has the URL can access the referenced file for the life of the token.
Only the user who generates the URL can use the URL to access the referenced file.
Only the users who have roles with sufficient privileges on the URL can access the referenced file.
The role specified in the GET REST API call must have sufficient privileges on the stage to access the referenced file using the URL.
What does Snowflake recommend as a best practice for using secure views?
Use sequence-generated values.
Programmatically reveal the identifiers.
Use secure views solely for query convenience.
Do not expose the sequence-generated column(s)
Snowflake recommends not exposing sequence-generated columns in secure views. Secure views are used to protect sensitive data by ensuring that users can only access data for which they have permissions. Exposing sequence-generated columns can potentially reveal information about the underlying data structure or the number of rows, which might be sensitive.
Create Secure Views: Define secure views using the SECURE keyword to ensure they comply with Snowflake's security policies.
Exclude Sensitive Columns: When creating secure views, exclude columns that might expose sensitive information, such as sequence-generated columns.
CREATE SECURE VIEW secure_view AS
SELECT col1, col2
FROM sensitive_table
WHERE sensitive_column IS NOT NULL;
What does a table with a clustering depth of 1 mean in Snowflake?
The table has only 1 micro-partition.
The table has 1 overlapping micro-partition.
The table has no overlapping micro-partitions.
The table has no micro-partitions.
In Snowflake, a table's clustering depth indicates the degree of micro-partition overlap based on the clustering keys defined for the table. A clustering depth of 1 implies that the table has no overlapping micro-partitions. This is an optimal scenario, indicating that the table's data is well-clustered according to the specified clustering keys. Well-clustered data can lead to more efficient query performance, as it reduces the amount of data scanned during query execution and improves the effectiveness of data pruning.
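Clustering depth can be inspected directly; for example, on a hypothetical table:
SELECT SYSTEM$CLUSTERING_DEPTH('my_table');        -- average overlap depth
SELECT SYSTEM$CLUSTERING_INFORMATION('my_table');  -- detailed JSON, including a depth histogram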
What kind of authentication do Snowpipe REST endpoints use?
OAuth
Key-based
Username and password
Single Sign-On (SSO)
Snowpipe uses key-based authentication for its REST endpoints. This involves generating and using a key pair (public and private keys) to securely authenticate API requests.
Generate Key Pair: Generate a public and private key pair.
Register Public Key: Register the public key with the Snowflake user that will be making the API requests.
Authenticate Requests: Use the private key to sign API requests sent to Snowpipe REST endpoints.
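After generating an RSA key pair (for example, with openssl), the public key is registered on the Snowflake user; the user name is hypothetical and the key value below is truncated and illustrative:
ALTER USER pipe_loader SET RSA_PUBLIC_KEY = 'MIIBIjANBgkqh...';
API requests to the Snowpipe REST endpoints are then signed with a JWT generated from the corresponding private key.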
A Snowflake user wants to design a series of transformations that need to be executed in a specific order on a given schedule.
Which of the Snowflake objects should be used?
Pipes
Tasks
Streams
Sequences
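A sketch of such a pipeline: a root task runs on a schedule, and a dependent task runs after it completes (the warehouse, table, and task names are hypothetical):
CREATE TASK load_task
  WAREHOUSE = etl_wh
  SCHEDULE = 'USING CRON 0 2 * * * UTC'  -- daily at 02:00 UTC
AS
  INSERT INTO staging SELECT * FROM raw_source;

CREATE TASK transform_task
  WAREHOUSE = etl_wh
  AFTER load_task                        -- enforces the execution order
AS
  INSERT INTO final SELECT * FROM staging;

-- Tasks are created suspended; resume child tasks before the root
ALTER TASK transform_task RESUME;
ALTER TASK load_task RESUME;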
What are the recommended alternative data types in Snowflake for unsupported large object data types such as BLOB and CLOB? (Select TWO).
VARIANT
ARRAY
BINARY
OBJECT
VARCHAR
What happens when a value in a single row of a large table is updated?
The affected micro-partition will be modified.
All existing micro-partitions will be modified.
The affected micro-partition will be deleted and recreated.
All micro-partitions will be deleted and recreated.
When is the VARIANT data type suitable when using semi-structured data? (Select TWO).
When there is a need for a predefined schema which cannot be changed after it is created.
When there is a need to add or modify elements within the data structure without interrupting the session.
When the data set is huge and the order of the data elements in the data set does not change.
When the data contains nested data structures like JSON or XML.
When the data consists of images and videos.
Which activities are managed by Snowflake's Cloud Services layer? (Select TWO).
Authorisation
Access delegation
Data pruning
Data compression
Query parsing and optimization
Snowflake's Cloud Services layer is responsible for managing various aspects of the platform that are not directly related to computing or storage. Specifically, it handles authorisation, ensuring that users have appropriate access rights to perform actions or access data. Additionally, it takes care of query parsing and optimization, interpreting SQL queries and optimizing their execution plans for better performance. This layer abstracts much of the platform's complexity, allowing users to focus on their data and queries without managing the underlying infrastructure.
By default, which role can create resource monitors?
ACCOUNTADMIN
SECURITYADMIN
SYSADMIN
USERADMIN
The role that can by default create resource monitors in Snowflake is the ACCOUNTADMIN role. Resource monitors are a crucial feature in Snowflake that allows administrators to track and control the consumption of compute resources, ensuring that usage stays within specified limits. The creation and management of resource monitors involve defining thresholds for credit usage, setting up notifications, and specifying actions to be taken when certain thresholds are exceeded.
Given the significant impact that resource monitors can have on the operational aspects and billing of a Snowflake account, the capability to create and manage them is restricted to the ACCOUNTADMIN role. This role has the broadest set of privileges in Snowflake, including the ability to manage all aspects of the account, such as users, roles, warehouses, databases, and resource monitors, among others.
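For example, an ACCOUNTADMIN might create a monitor and attach it to a warehouse (the names and quota are illustrative):
CREATE RESOURCE MONITOR monthly_limit WITH
  CREDIT_QUOTA = 100
  TRIGGERS ON 80 PERCENT DO NOTIFY
           ON 100 PERCENT DO SUSPEND;
ALTER WAREHOUSE my_wh SET RESOURCE_MONITOR = monthly_limit;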
Where is Snowflake metadata stored?
Within the data files
In the virtual warehouse layer
In the cloud services layer
In the remote storage layer
Snowflake’s architecture is divided into three layers: database storage, query processing, and cloud services. The metadata, which includes information about the structure of the data, the SQL operations performed, and the service-level policies, is stored in the cloud services layer. This layer acts as the brain of the Snowflake environment, managing metadata, query optimization, and transaction coordination.
Which task is supported by the use of Access History in Snowflake?
Data backups
Cost monitoring
Compliance auditing
Performance optimization
Access History in Snowflake is primarily utilized for compliance auditing. The Access History feature provides detailed logs that track data access and modifications, including queries that read from or write to database objects. This information is crucial for organizations to meet regulatory requirements and to perform audits related to data access and usage.
Role of Access History: Access History logs are designed to help organizations understand who accessed what data and when. This is particularly important for compliance with various regulations that require detailed auditing capabilities.
How Access History Supports Compliance Auditing:
By providing a detailed log of access events, organizations can trace data access patterns, identify unauthorized access, and ensure that data handling complies with relevant data protection laws and regulations.
Access History can be queried to extract specific events, users, time frames, and accessed objects, making it an invaluable tool for compliance officers and auditors.
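For example, an auditor might list who accessed which objects in the last week (this assumes the querying role has access to the SNOWFLAKE.ACCOUNT_USAGE schema):
SELECT user_name, query_start_time, direct_objects_accessed
FROM SNOWFLAKE.ACCOUNT_USAGE.ACCESS_HISTORY
WHERE query_start_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
ORDER BY query_start_time DESC;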
Which function unloads data from a relational table to JSON?
TRUNC
TRUNC(ID_NUMBER, 5)
ID_NUMBER*100
TO_CHAR
To unload data from a relational table to JSON format, you can use the TO_CHAR function. This function converts a number to a character string, which can then be serialized into JSON format. While there isn't a direct function specifically named for unloading to JSON, converting the necessary fields to a string representation is a common step in preparing data for JSON serialization.
Snowflake Documentation: TO_CHAR Function
Which user preferences can be set for a user profile in Snowsight? (Select TWO).
Multi-Factor Authentication (MFA)
Default database
Default schema
Notification
Username
In Snowsight, Snowflake's web interface, user preferences can be customized to enhance the user experience. Among these preferences, users can set a default database and default schema. These settings streamline the user experience by automatically selecting the specified database and schema when the user initiates a new session or query, reducing the need to manually specify these parameters for each operation. This feature is particularly useful for users who frequently work within a specific database or schema context.
A clustering key was defined on a table, but it is no longer needed. How can the key be removed?
ALTER TABLE ... DROP CLUSTERING KEY
An existing clustering key is removed with the ALTER TABLE command, for example: ALTER TABLE my_table DROP CLUSTERING KEY;