A company has a Snowflake account named ACCOUNTA in the AWS us-east-1 region. The company stores its marketing data in a Snowflake database named MARKET_DB. One of the company's business partners has an account named PARTNERB in the Azure East US 2 region. For marketing purposes, the company has agreed to share the MARKET_DB database with the partner account.
Which of the following steps MUST be performed for the account PARTNERB to consume data from the MARKET_DB database?
Create a new account (called AZABC123) in Azure East US 2 region. From account ACCOUNTA create a share of database MARKET_DB, create a new database out of this share locally in AWS us-east-1 region, and replicate this new database to AZABC123 account. Then set up data sharing to the PARTNERB account.
From account ACCOUNTA create a share of database MARKET_DB, and create a new database out of this share locally in AWS us-east-1 region. Then make this database the provider and share it with the PARTNERB account.
Create a new account (called AZABC123) in Azure East US 2 region. From account ACCOUNTA replicate the database MARKET_DB to AZABC123 and from this account set up the data sharing to the PARTNERB account.
Create a share of database MARKET_DB, and create a new database out of this share locally in AWS us-east-1 region. Then replicate this database to the partner’s account PARTNERB.
Snowflake supports data sharing across regions and cloud platforms using its replication features. Database replication enables the replication of databases from a source account to one or more target accounts in the same organization, and share replication does the same for shares.
To share data from the MARKET_DB database in the ACCOUNTA account in AWS us-east-1 region with the PARTNERB account in Azure East US 2 region, the following steps must be performed:
Create a new account (called AZABC123) in the Azure East US 2 region. This account acts as a bridge between the source and target accounts, and it must belong to the same Snowflake organization as ACCOUNTA.
From the ACCOUNTA account, replicate the MARKET_DB database to the AZABC123 account using database replication. This creates a secondary database in the AZABC123 account that is a read-only replica of the primary database in ACCOUNTA.
From the AZABC123 account, set up data sharing to the PARTNERB account using share replication. This makes a share of the replicated database available in the AZABC123 account and grants access to the PARTNERB account, which can then create a database from the share and query the data.
Therefore, option C is the correct answer.
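A minimal sketch of these steps using a replication group, assuming an organization named MYORG, that the AZABC123 account already exists with replication enabled, and that a share named market_share exists on MARKET_DB in ACCOUNTA (all names are illustrative):

    -- In ACCOUNTA (AWS us-east-1): allow the database and its share to replicate
    CREATE REPLICATION GROUP marketing_rg
      OBJECT_TYPES = DATABASES, SHARES
      ALLOWED_DATABASES = market_db
      ALLOWED_SHARES = market_share
      ALLOWED_ACCOUNTS = myorg.azabc123
      REPLICATION_SCHEDULE = '10 MINUTE';

    -- In AZABC123 (Azure East US 2): create the secondary group and refresh it
    CREATE REPLICATION GROUP marketing_rg
      AS REPLICA OF myorg.accounta.marketing_rg;
    ALTER REPLICATION GROUP marketing_rg REFRESH;

    -- In AZABC123: add the local consumer to the replicated share
    ALTER SHARE market_share ADD ACCOUNTS = partnerb;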
Replicating Shares Across Regions and Cloud Platforms
Working with Organizations and Accounts
Replicating Databases Across Multiple Accounts
Replicating Shares Across Multiple Accounts
What built-in Snowflake features make use of the change tracking metadata for a table? (Choose two.)
The MERGE command
The UPSERT command
The CHANGES clause
A STREAM object
The CHANGE_DATA_CAPTURE command
In Snowflake, the change tracking metadata for a table is used by the CHANGES clause and by STREAM objects. A STREAM object records DML changes made to a table and lets downstream statements consume them incrementally; consuming the stream's change data in a DML statement advances the stream's offset. The CHANGES clause lets a SELECT statement read the same change tracking metadata directly over a specified time interval, without creating a stream. The MERGE command applies inserts, updates, and deletes but does not read change tracking metadata, UPSERT is not a Snowflake command, and there is no CHANGE_DATA_CAPTURE command.
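A brief sketch of both features (table and stream names are illustrative):

    -- Enable change tracking (creating a stream also enables it implicitly)
    ALTER TABLE orders SET CHANGE_TRACKING = TRUE;

    -- A STREAM records DML changes since its last consumed offset
    CREATE STREAM orders_stream ON TABLE orders;
    SELECT * FROM orders_stream;          -- rows changed since the stream offset

    -- The CHANGES clause queries the same metadata over an interval, no stream needed
    SELECT * FROM orders
      CHANGES (INFORMATION => DEFAULT)
      AT (OFFSET => -3600);               -- changes during the last hour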
A company needs to share its product catalog data with one of its partners. The product catalog data is stored in two database tables: product_category and product_details. The tables can be joined on the product_id column. Data access must be governed, and only the partner should have access to the records.
The partner is not a Snowflake customer. The partner uses Amazon S3 for cloud storage.
Which design will be the MOST cost-effective and secure, while using the required Snowflake features?
Use Secure Data Sharing with an S3 bucket as a destination.
Publish product_category and product_details data sets on the Snowflake Marketplace.
Create a database user for the partner and give them access to the required data sets.
Create a reader account for the partner and share the data sets as secure views.
A reader account is a type of Snowflake account that allows external users to access data shared by a provider account without being Snowflake customers. A reader account is created and managed by the provider account, can use the Snowflake web interface or JDBC/ODBC drivers to query the shared data, and is billed to the provider based on the credits its queries consume. A secure view applies row-level security filters to the underlying tables and masks data that is not accessible to the user; it can be shared with a reader account to provide granular, governed access to the data.

In this scenario, creating a reader account for the partner and sharing the data sets as secure views is the most cost-effective and secure design using the required Snowflake features, because:
It would avoid the data transfer and storage costs of using an S3 bucket as a destination, and the potential security risks of exposing the data to unauthorized access or modification.
It would avoid the complexity and overhead of publishing the data sets on the Snowflake Marketplace, and the potential loss of control over the data ownership and pricing.
It would avoid the need to create a database user for the partner and grant them access to the required data sets, which would require the partner to have a Snowflake account and consume the provider’s resources.
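A minimal sketch of this design, assuming hypothetical database, schema, view, and account names (the reader account locator placeholder must be replaced with the actual value):

    -- Provider: create a managed reader account for the partner
    CREATE MANAGED ACCOUNT partner_reader
      ADMIN_NAME = 'partner_admin',
      ADMIN_PASSWORD = 'Str0ngPassw0rd!',
      TYPE = READER;

    -- Provider: join the two catalog tables behind a secure view
    CREATE SECURE VIEW catalog_db.shared.product_catalog_v AS
      SELECT c.product_id, c.category_name, d.product_name, d.price
      FROM catalog_db.core.product_category c
      JOIN catalog_db.core.product_details d USING (product_id);

    -- Provider: share only the secure view with the reader account
    CREATE SHARE product_share;
    GRANT USAGE ON DATABASE catalog_db TO SHARE product_share;
    GRANT USAGE ON SCHEMA catalog_db.shared TO SHARE product_share;
    GRANT SELECT ON VIEW catalog_db.shared.product_catalog_v TO SHARE product_share;
    ALTER SHARE product_share ADD ACCOUNTS = <reader_account_locator>;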
Reader Accounts
Secure Views
How can the Snowflake context functions be used to help determine whether a user is authorized to see data that has column-level security enforced? (Select TWO).
Set masking policy conditions using current_role targeting the role in use for the current session.
Set masking policy conditions using is_role_in_session targeting the role in use for the current account.
Set masking policy conditions using invoker_role targeting the executing role in a SQL statement.
Determine if there are ownership privileges on the masking policy that would allow the use of any function.
Assign the accountadmin role to the user who is executing the object.
Snowflake context functions are functions that return information about the current session, user, role, warehouse, database, schema, or object. They can be used to help determine whether a user is authorized to see data that has column-level security enforced by setting masking policy conditions based on the context functions. The following context functions are relevant for column-level security:
current_role: This function returns the name of the role in use for the current session. It can be used to set masking policy conditions that target the current session and are not affected by the execution context of the SQL statement. For example, a masking policy condition using current_role can allow or deny access to a column based on the role that the user activated in the session.
invoker_role: This function returns the name of the executing role in a SQL statement, so it is affected by the execution context of the statement. For example, when a masking policy is invoked through a view, invoker_role returns the role that owns the view rather than the session's current role, so a condition using invoker_role can allow or deny access to a column based on the owner of the view or stored procedure through which the column is queried.
is_role_in_session: This function returns TRUE if the user's session role hierarchy includes the specified role (i.e., the role returned by current_role inherits the privileges of that role). It can be used to set masking policy conditions that involve role hierarchy and privilege inheritance. For example, a masking policy condition using is_role_in_session can allow or deny access to a column based on whether a designated privileged role appears anywhere in the user's active role hierarchy.
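A minimal masking policy sketch combining these context functions (role, table, and column names are illustrative):

    CREATE MASKING POLICY email_mask AS (val STRING) RETURNS STRING ->
      CASE
        -- current_role: unmask when the session's active role is PAYROLL_ADMIN
        WHEN CURRENT_ROLE() = 'PAYROLL_ADMIN' THEN val
        -- is_role_in_session: unmask when PII_READER is in the session's role hierarchy
        WHEN IS_ROLE_IN_SESSION('PII_READER') THEN val
        -- invoker_role: unmask when invoked through an object owned by AUDIT_ROLE
        WHEN INVOKER_ROLE() = 'AUDIT_ROLE' THEN val
        ELSE '***MASKED***'
      END;

    ALTER TABLE employees MODIFY COLUMN email SET MASKING POLICY email_mask;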
The other options are not valid ways to use the Snowflake context functions for column-level security:
Set masking policy conditions using is_role_in_session targeting the role in use for the current account. This option is incorrect because is_role_in_session does not target the role in use for the current account, but rather the role in use for the current session. Also, the current account is not a role, but rather a logical entity that contains users, roles, warehouses, databases, and other objects.
Determine if there are ownership privileges on the masking policy that would allow the use of any function. This option is incorrect because ownership privileges on the masking policy do not affect the use of any function, but rather the ability to create, alter, or drop the masking policy. Also, this is not a way to use the Snowflake context functions, but rather a way to check the privileges on the masking policy object.
Assign the accountadmin role to the user who is executing the object. This option is incorrect because assigning the accountadmin role to the user who is executing the object does not involve using the Snowflake context functions, but rather granting the highest-level role to the user. Also, this is not a recommended practice for column-level security, as it would give the user full access to all objects and data in the account, which could compromise data security and governance.
Context Functions
Advanced Column-level Security topics
Snowflake Data Governance: Column Level Security Overview
Data Security Snowflake Part 2 - Column Level Security
A healthcare company wants to share data with a medical institute. The institute is running a Standard edition of Snowflake; the healthcare company is running a Business Critical edition.
How can this data be shared?
The healthcare company will need to change the institute’s Snowflake edition in the accounts panel.
By default, sharing is supported from a Business Critical Snowflake edition to a Standard edition.
Contact Snowflake and they will execute the share request for the healthcare company.
Set the share_restriction parameter on the shared object to false.
By default, Snowflake does not allow sharing data from a Business Critical edition to a non-Business Critical edition, because Business Critical provides enhanced security and data protection features that are not available in lower editions. However, the data provider can override this restriction by setting SHARE_RESTRICTIONS to false when adding the consumer account to the share (via ALTER SHARE ... ADD ACCOUNTS). This explicitly allows sharing data with lower-edition accounts. Note that only the data provider, not the data consumer, can set this parameter, and setting it to false may reduce the level of security and data protection for the shared data.
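A sketch of the override, executed by an ACCOUNTADMIN in the provider account (the share and consumer account names are illustrative):

    -- Allow the Standard edition consumer to be added to the Business Critical share
    ALTER SHARE patient_share
      ADD ACCOUNTS = medical_institute_acct
      SHARE_RESTRICTIONS = false;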
Enable Data Share: Business Critical Account to Lower Edition
Sharing Is Not Allowed From An Account on BUSINESS CRITICAL Edition to an Account On A Lower Edition
Snowflake Editions | Snowflake Documentation
The Data Engineering team at a large manufacturing company needs to engineer data coming from many sources to support a wide variety of use cases and data consumer requirements which include:
1) Finance and Vendor Management team members who require reporting and visualization
2) Data Science team members who require access to raw data for ML model development
3) Sales team members who require engineered and protected data for data monetization
What Snowflake data modeling approaches will meet these requirements? (Choose two.)
Consolidate data in the company’s data lake and use EXTERNAL TABLES.
Create a raw database for landing and persisting raw data entering the data pipelines.
Create a set of profile-specific databases that aligns data with usage patterns.
Create a single star schema in a single database to support all consumers’ requirements.
Create a Data Vault as the sole data pipeline endpoint and have all consumers directly access the Vault.
To accommodate the diverse needs of different teams and use cases within a company, a flexible and multi-faceted approach to data modeling is required.
Option B: Creating a raw database for landing and persisting raw data ensures that the Data Science team has access to unprocessed data for machine learning model development. This follows the best practice of having a staging area or raw data zone in a modern data architecture, where raw data is ingested before being transformed for different use cases.
Option C: Profile-specific databases are targeted databases designed around the requirements of each user profile or team. For the Finance and Vendor Management teams, data can be structured and optimized for reporting and visualization; for the Sales team, the database can hold engineered and protected data suitable for data monetization. This aligns data with usage patterns and makes access and security policies easier to manage, as shown in the sketch below.
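A sketch of this layered layout (database and role names are illustrative):

    -- Landing zone that persists raw data for the Data Science team (Option B)
    CREATE DATABASE raw_db;

    -- Profile-specific databases aligned with usage patterns (Option C)
    CREATE DATABASE finance_db;   -- conformed, reporting-ready models
    CREATE DATABASE sales_db;     -- engineered, protected data for monetization

    GRANT USAGE ON DATABASE raw_db     TO ROLE data_science_role;
    GRANT USAGE ON DATABASE finance_db TO ROLE finance_role;
    GRANT USAGE ON DATABASE sales_db   TO ROLE sales_role;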
A company has an inbound share set up with eight tables and five secure views. The company plans to make the share part of its production data pipelines.
Which actions can the company take with the inbound share? (Choose two.)
Clone a table from a share.
Grant modify permissions on the share.
Create a table from the shared database.
Create additional views inside the shared database.
Create a table stream on the shared table.
These two actions are possible with an inbound share, according to the Snowflake documentation. An inbound share is a share created by another Snowflake account (the provider) and imported into your account (the consumer). An inbound share allows you to access the data shared by the provider, but not to modify or delete it. However, you can perform some actions with the inbound share, such as:
Clone a table from a share. You can create a copy of a table from an inbound share using the CREATE TABLE ... CLONE statement. The clone contains the same data as the original table, but it is independent of the share: you can modify or delete the clone as you wish, although it will not reflect any changes the provider later makes to the original table.
Create additional views inside the shared database. You can create views on the tables or views from an inbound share using the CREATE VIEW statement. The views are stored in the shared database but owned by your account. You can query them as you would any other view in your account, but you cannot modify or delete the underlying shared objects.
The other actions listed are not possible with an inbound share, because they would require modifying the share or the shared objects, which are read-only for the consumer. You cannot grant modify permissions on the share, create a table from the shared database, or create a table stream on the shared table.
Cloning Objects from a Share | Snowflake Documentation
Creating Views on Shared Data | Snowflake Documentation
Importing Data from a Share | Snowflake Documentation
Streams on Shared Tables | Snowflake Documentation
Files arrive in an external stage every 10 seconds from a proprietary system. The files range in size from 500 KB to 3 MB. The data must be accessible by dashboards as soon as it arrives.
How can a Snowflake Architect meet this requirement with the LEAST amount of coding? (Choose two.)
Use Snowpipe with auto-ingest.
Use a COPY command with a task.
Use a materialized view on an external table.
Use the COPY INTO command.
Use a combination of a task and a stream.
The requirement is for the data to be accessible as quickly as possible after it arrives in the external stage with minimal coding effort.
Option A: Snowpipe with auto-ingest is a service that continuously loads data as it arrives in the stage. With auto-ingest, Snowpipe automatically detects new files in a cloud stage via event notifications and loads the data into the specified Snowflake table with minimal delay and no manual intervention. This is an ideal low-maintenance solution for files arriving every few seconds; see the sketch below.
Option E: A combination of a task and a stream allows near-real-time change data capture in Snowflake. A stream records changes (inserts, updates, and deletes) made to a table, and a task can be scheduled on a short interval to process those changes into the dashboard tables soon after they occur.
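A minimal Snowpipe sketch for this scenario (stage, table, and pipe names are illustrative; the stage must have cloud event notifications configured for auto-ingest):

    -- Each new file is loaded shortly after it lands in the stage;
    -- event notifications trigger the pipe, so no scheduling code is needed
    CREATE PIPE sales_pipe AUTO_INGEST = TRUE AS
      COPY INTO sales_raw
      FROM @sales_ext_stage
      FILE_FORMAT = (TYPE = 'JSON');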
A Snowflake Architect is designing a multi-tenant application strategy for an organization in the Snowflake Data Cloud and is considering using an Account Per Tenant strategy.
Which requirements will be addressed with this approach? (Choose two.)
There needs to be fewer objects per tenant.
Security and Role-Based Access Control (RBAC) policies must be simple to configure.
Compute costs must be optimized.
Tenant data shape may be unique per tenant.
Storage costs must be optimized.
The Account Per Tenant strategy involves creating separate Snowflake accounts for each tenant within the multi-tenant application. This approach offers a number of advantages.
Option B: With separate accounts, each tenant's environment is isolated, making security and RBAC policies simpler to configure and maintain. Each account can have its own set of roles and privileges without the risk of cross-tenant access or the complexity of a highly granular permission model in a shared environment.
Option D: This approach also allows each tenant to have a unique data shape, meaning the database schema can be tailored to the specific needs of each tenant without affecting the others. This is essential when tenants have different data models, usage patterns, or application customizations.
What are characteristics of the use of transactions in Snowflake? (Select TWO).
Explicit transactions can contain DDL, DML, and query statements.
The autocommit setting can be changed inside a stored procedure.
A transaction can be started explicitly by executing a BEGIN WORK statement and ended explicitly by executing a COMMIT WORK statement.
A transaction can be started explicitly by executing a BEGIN TRANSACTION statement and ended explicitly by executing an END TRANSACTION statement.
Explicit transactions should contain only DML statements and query statements. All DDL statements implicitly commit active transactions.
Comprehensive and Detailed Explanation From Exact Extract:
Snowflake supports both implicit and explicit transactions. However, only specific statement types are allowed within transactions.
Option C:
This is correct. In Snowflake, transactions can be started with any of the following: BEGIN, BEGIN WORK, or START TRANSACTION. Transactions can be ended using COMMIT, COMMIT WORK, or ROLLBACK.
Official Extract:
"You can explicitly start a transaction using the BEGIN, BEGIN WORK, or START TRANSACTION statements and end it using the COMMIT, COMMIT WORK, or ROLLBACK statements."
Source: Snowflake SQL Transactions
Option E:
This is correct. Transactions should only include DML statements (INSERT, UPDATE, DELETE, MERGE) and queries. DDL statements (CREATE, ALTER, DROP) automatically commit and cannot be part of an explicit transaction block.
Official Extract:
"A transaction can contain only DML statements and queries. Any DDL statement implicitly commits the current transaction."
Source: Snowflake SQL Transactions
Option A:
Incorrect. DDL statements are not allowed inside explicit transactions. If used, they trigger an implicit commit.
Option B:
Incorrect. The autocommit setting cannot be modified within a stored procedure. Autocommit is session-level and not dynamically changeable within procedural logic.
Option D:
Incorrect. Snowflake does not support END TRANSACTION as a valid SQL command. The correct ending statement for a transaction is COMMIT or ROLLBACK.
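A short sketch of a valid explicit transaction (table name is illustrative):

    BEGIN TRANSACTION;           -- BEGIN and BEGIN WORK are equivalent
    UPDATE accounts SET balance = balance - 100 WHERE id = 1;
    UPDATE accounts SET balance = balance + 100 WHERE id = 2;
    -- A DDL statement here (e.g. CREATE TABLE) would implicitly commit
    -- the open transaction before executing
    COMMIT;                      -- COMMIT WORK also works; ROLLBACK undoes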
An Architect needs to improve the performance of reports that pull data from multiple Snowflake tables, join, and then aggregate the data. Users access the reports using several dashboards. There are performance issues on Monday mornings between 9:00am-11:00am when many users check the sales reports.
The size of the group has increased from 4 to 8 users, and waiting times to refresh the dashboards have increased significantly. Currently this workload is served by a virtual warehouse with the following parameters:
AUTO_RESUME = TRUE
AUTO_SUSPEND = 60
SIZE = Medium
What is the MOST cost-effective way to increase the availability of the reports?
Use materialized views and pre-calculate the data.
Increase the warehouse to size Large and set auto_suspend = 600.
Use a multi-cluster warehouse in maximized mode with 2 size Medium clusters.
Use a multi-cluster warehouse in auto-scale mode with 1 size Medium cluster, and set min_cluster_count = 1 and max_cluster_count = 4.
The most cost-effective way to increase the availability and performance of the reports during peak usage times, while keeping costs under control, is to use a multi-cluster warehouse in auto-scale mode. Option D suggests using a multi-cluster warehouse with 1 size Medium cluster and allowing it to auto-scale between 1 and 4 clusters based on demand. This setup ensures that additional computing resources are available when needed (e.g., during Monday morning peaks) and are scaled down to minimize costs when the demand decreases. This approach optimizes resource utilization and cost by adjusting the compute capacity dynamically, rather than maintaining a larger fixed size or multiple clusters continuously.
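A sketch of the recommended change to the existing warehouse (warehouse name is illustrative; multi-cluster warehouses require Enterprise edition or higher):

    ALTER WAREHOUSE report_wh SET
      WAREHOUSE_SIZE    = 'MEDIUM'
      MIN_CLUSTER_COUNT = 1
      MAX_CLUSTER_COUNT = 4          -- min < max enables auto-scale mode
      SCALING_POLICY    = 'STANDARD'
      AUTO_SUSPEND      = 60
      AUTO_RESUME       = TRUE;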
An Architect would like to save quarter-end financial results for the previous six years.
Which Snowflake feature can the Architect use to accomplish this?
Search optimization service
Materialized view
Time Travel
Zero-copy cloning
Secure views
Zero-copy cloning is a Snowflake feature that can be used to save quarter-end financial results for the previous six years. Zero-copy cloning allows creating a copy of a database, schema, table, or view without duplicating the data or metadata. The clone shares the same data files as the original object, but tracks any changes made to the clone or the original separately. Zero-copy cloning can be used to create snapshots of data at different points in time, such as quarter-end financial results, and preserve them for future analysis or comparison. Zero-copy cloning is fast, efficient, and does not consume any additional storage space unless the data is modified.
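A sketch of a quarter-end snapshot (object names are illustrative):

    -- Preserve quarter-end state as an independent, zero-copy snapshot
    CREATE DATABASE finance_2024_q4 CLONE finance_db;

    -- Or snapshot a single results table
    CREATE TABLE quarterly_results_2024_q4 CLONE quarterly_results;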
Zero-Copy Cloning | Snowflake Documentation
Role A has the following permissions:
. USAGE on db1
. USAGE and CREATE VIEW on schema1 in db1
. SELECT on table1 in schema1
Role B has the following permissions:
. USAGE on db2
. USAGE and CREATE VIEW on schema2 in db2
. SELECT on table2 in schema2
A user has Role A set as the primary role and Role B as a secondary role.
What command will fail for this user?
use database db1; use schema schema1; create view v1 as select * from db2.schema2.table2;
use database db2; use schema schema2; create view v2 as select * from db1.schema1.table1;
use database db2; use schema schema2; select * from db1.schema1.table1 union select * from table2;
use database db1; use schema schema1; select * from db2.schema2.table2;
This command (option B) will fail. The user's primary role, Role A, has USAGE on db1 and SELECT on db1.schema1.table1, while the CREATE VIEW privilege on db2.schema2 is held only by Role B, the secondary role. In Snowflake, secondary roles contribute their privileges to queries and DML, but authorization to create objects is evaluated against the primary role only. Because Role A has no CREATE VIEW privilege on db2.schema2, the CREATE VIEW v2 statement fails, even though the combined roles could read from db1.schema1.table1. The other statements succeed because either the primary role authorizes the object creation (option A) or they are plain queries, for which primary and secondary role privileges are combined (options C and D).
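A sketch reproducing the failure (assuming the roles and objects above):

    USE ROLE role_a;             -- primary role
    USE SECONDARY ROLES ALL;     -- Role B's privileges now apply to queries
    USE DATABASE db2;
    USE SCHEMA schema2;

    -- Fails: CREATE privileges are evaluated against the primary role only,
    -- and ROLE_A holds no CREATE VIEW privilege on db2.schema2
    CREATE VIEW v2 AS SELECT * FROM db1.schema1.table1;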
What transformations are supported in the below SQL statement? (Select THREE).
CREATE PIPE ... AS COPY ... FROM (...)
Data can be filtered by an optional where clause.
Columns can be reordered.
Columns can be omitted.
Type casts are supported.
Incoming data can be joined with other tables.
The ON_ERROR = ABORT_STATEMENT copy option can be used.
The SQL statement creates a pipe, the Snowflake object that defines the COPY INTO <table> statement used by Snowpipe to load data from a stage as files arrive. The COPY statement in a pipe definition supports the same transformation options as bulk loading: the SELECT in the FROM clause can reorder columns, omit columns, and cast values to other data types. Filtering incoming data with a WHERE clause and joining it with other tables are not supported in COPY transformations, and the ON_ERROR = ABORT_STATEMENT copy option is not supported for pipes. Therefore, the supported transformations are reordering columns, omitting columns, and type casts.
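A sketch of a pipe definition using the supported transformations (stage, table, and column names are illustrative):

    CREATE PIPE orders_pipe AS
      COPY INTO orders_t (order_id, amount, region)  -- columns reordered
      FROM (
        SELECT $3::INTEGER,                          -- type cast
               $1::NUMBER(10,2),
               $2                                    -- extra file columns are omitted
        FROM @orders_stage
      )
      FILE_FORMAT = (TYPE = 'CSV');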