Connecting to Snowflake

The Snowflake connector enables you to connect the Snowflake data warehouse to Skypoint AI Studio and perform tasks such as data ingestion, unification, transformation, and enrichment to gain insights from your data.

You can use Skypoint AI's built-in connector to import data from Snowflake. This document guides you through the process of connecting Snowflake to Skypoint AI.

Prerequisites

You will need the following details to configure and import data using the Snowflake connector (an optional verification sketch follows the list):

  • Account Name
  • Username
  • Password
  • Database Name
  • Schema Name
  • Host
  • Warehouse Name
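
If you want to confirm these values before configuring the connector, you can do so with Snowflake's official Python connector. The sketch below is a minimal example, assuming the snowflake-connector-python package is installed; all values are placeholders to replace with your own, and this check is optional and happens outside Skypoint AI Studio.

```python
# Minimal verification sketch using Snowflake's official Python connector
# (pip install snowflake-connector-python). All values are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345.us-east-1",   # Account Name; the Host is typically
                                   # <account>.snowflakecomputing.com
    user="YOUR_USERNAME",          # Username
    password="YOUR_PASSWORD",      # Password
    database="YOUR_DATABASE",      # Database Name
    schema="YOUR_SCHEMA",          # Schema Name
    warehouse="YOUR_WAREHOUSE",    # Warehouse Name
)

try:
    # A trivial query confirms the session, database, and warehouse are usable.
    cur = conn.cursor()
    cur.execute("SELECT CURRENT_USER(), CURRENT_DATABASE(), CURRENT_WAREHOUSE()")
    print(cur.fetchone())
finally:
    conn.close()
```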

To import data using the Snowflake connector

Follow these steps to create and configure a new dataflow for the Snowflake import connector:

  1. Go to Dataflow > Imports.
  2. Click New dataflow.

The Set dataflow name page appears.

  3. Enter the dataflow name in the Name text area.
  4. Click Next.

The Choose connector page appears.

Add Snowflake connector

  1. On the Choose connector page, select the Snowflake connector. You can use the Search feature to find it, or browse the Cloud and Data Warehousing category.

  2. Enter the Display Name for your dataflow in the text area.
  3. Optionally, enter a description in the Description text area.
  4. Click Next.

The Configuration page appears.

Connect to the Snowflake account

Follow these steps to configure the connection to Snowflake:

  1. Enter the Account Name, which is provided by Snowflake when you sign up for an account.
  2. Enter your Username and Password.
  3. Enter the Database Name.
  4. Enter the Schema Name.
  5. Enter the Host, which is the hostname or IP address of the Snowflake service.
  6. Enter the Warehouse Name of the warehouse in your Snowflake account that you want to use for executing queries.
  7. Click Connect.

Once the connection is established, you can use the connector to import data from a table in Snowflake.
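
The tables listed in the next step correspond to the tables visible in the database and schema you configured. As an optional illustration (reusing the placeholder connection from the sketch above), you could preview that list yourself:

```python
# Sketch: list the tables available for import, reusing the `conn`
# object from the earlier placeholder example.
cur = conn.cursor()
cur.execute("SHOW TABLES IN SCHEMA YOUR_DATABASE.YOUR_SCHEMA")
for row in cur.fetchall():
    # SHOW TABLES returns one row per table; column 1 holds the table name.
    print(row[1])
```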

  8. In the Table details section, each table is identified by a row. Select the checkbox to choose a table for import, and use the dropdown to mark it as Data or Metadata.

note

In the Table details section, all tables are selected by default. You can keep only the tables you want to import and process. For example, to import customer data, select the tables containing customer information such as name, email, address, and contact details.

  • Purpose: Option to assign a purpose (Data or Metadata) to each table.
      • Data: Loads customer data.
      • Metadata: Loads metadata.
  • File name: Displays the name of the file that you imported.
  • Entity name: Displays the imported table name by default. You can rename it later if needed.
  9. Click Save to apply the changes.

After saving the connection, the Snowflake connector appears on the Dataflow > Imports page.

Run, edit, and delete the imported data

Once the tables are imported, you can run, edit, and delete the imported dataflow. Follow these steps:

  1. Go to the Dataflow > Imports page.

  • Name: Displays the name of the imported Dataflow.
  • Type: Displays the connector type symbol.
  • Status: Indicates whether the data was imported successfully.
  • Tables count: Displays the number of tables imported.
  • Created Date: Displays the date of creation.
  • Last refresh type: Displays the refresh value, Full or Incremental.
  • Updated Date: Displays the last modified date.
  • Last Refresh: Displays the latest refresh date, which updates each time you refresh the data.
  • Group by: Option to view the items in a specific group, such as type, status, or tables count.
  2. Select the horizontal ellipsis under the Actions column and do any of the following:

  • To modify the Dataflow: Select Edit and modify the Dataflow. Click Save to apply your changes.
  • To execute the Dataflow: Select Run.
  • To bring the data back to its previous state: Select Rollback.
  • To delete the Dataflow: Select Remove, then click the Delete button. All tables in the data source are deleted.
  • To see the run history of the Dataflow: Select Run history.

note

In the Dataflow, you can view the error message for data import failures from a data source under Run History > Description. You can also see the status, start time, and end time for each run of the data pipeline.

Next step

After completing the data import, start the Master Data Management (MDM) - Resolve process to create a single, unified view of the data. With Skypoint AI's MDM, you can ensure that your data is accurate, consistent, and reliable, making it easier to use for business processes, analytics, and reporting.