Connecting to SQL Server
Microsoft SQL Server is a proprietary relational database management system (RDBMS) built to store and retrieve data efficiently for diverse applications. It supports transaction processing, business intelligence (BI), and analytics, and scales from single machines to large systems. SQL Server 2022 introduces features such as Azure-based disaster recovery, Azure Synapse Link, and enhanced scalability, ensuring performance, data integrity, and seamless business operations.
SkyPoint AI Platform (AIP) integrates with SQL Server through a built-in connector, enabling centralized data management, streamlined operations, and advanced analytics. This integration breaks down data silos, supports direct data operations, and enhances governance for improved compliance and insights, providing a secure and efficient data management solution.
This document guides you through the process of connecting SQL Server to the Skypoint AI Platform.
Prerequisites
You need the following details to configure and import data using the SQL Server connector:
- Server name
- Database name
- Port number
- Username
- Password
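If you want to confirm that these details are valid before configuring the connector, you can test the connection from any machine that can reach the server. The sketch below is one way to do this using Python with the pyodbc package and the Microsoft ODBC driver; the server, database, port, and credentials shown are placeholders.

```python
import pyodbc

# Placeholder values -- substitute the details gathered above.
SERVER = "sqlserver.example.com"   # Server name
DATABASE = "SalesDB"               # Database name
PORT = 1433                        # Port number
USERNAME = "aip_reader"            # Username
PASSWORD = "your_password"         # Password

conn_str = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    f"SERVER={SERVER},{PORT};"
    f"DATABASE={DATABASE};"
    f"UID={USERNAME};"
    f"PWD={PASSWORD};"
    "Encrypt=yes;TrustServerCertificate=yes;"
)

# A successful connection and version query confirm the details are correct.
conn = pyodbc.connect(conn_str, timeout=10)
try:
    version = conn.cursor().execute("SELECT @@VERSION;").fetchone()[0]
    print(version)
finally:
    conn.close()
```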
To import data using the SQL Server connector
Follow these steps to create and configure a new dataflow for the SQL Server import connector:
- Navigate to Dataflow > Imports.
- Click New dataflow.
The Set dataflow name page appears.
- Enter the desired name for the dataflow in the Name text field.
- Click Next to proceed.
The Choose connector page appears.
Add SQL Server connector
- On the Choose connector page, use the Search option to locate and select the SQL Server connector.
- Enter the Display Name for your dataflow in the provided text field.
- Optionally, add a Description in the designated text area.
- Click Next to proceed.
The Configuration page appears.
Connect to the SQL Server account
- Fill in the required details on the Configuration page.
- Click Connect.
Once the connection is established, the connector can be used to import data from SQL Server tables.
- Scroll down to the Table Details section, select the checkboxes for the tables you wish to import, and use the dropdown menu to label them as either Data or Metadata.
By default, all tables in the Table Details section are selected. You can choose to import only the tables that are relevant to your data processing needs. For example, to import customer data, select tables containing details such as name, email, address, and contact information. If you are unsure which tables the database contains, see the sketch after this procedure for a quick way to list them.
Item | Description |
---|---|
Purpose | Option to assign a purpose (Data or Metadata) for each table. |
Data | Loads the selected table as data. |
Metadata | Loads the selected table as metadata. |
File name | Displays the name of the file that you imported. |
Entity name | Displays the imported table name by default. You can rename it if required. |
- Scroll down to the bottom of the page to ensure all required tables are selected.
- Click Save to apply the changes.
Congratulations! You have saved the SQL Server connector dataflow, which now appears on the Dataflow > Imports page.
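Tip: if you are not sure which tables the source database contains, you can list them directly before selecting them in the Table Details section. A minimal sketch, again using Python and pyodbc with placeholder connection details:

```python
import pyodbc

# Placeholder connection string -- use the same details entered on the Configuration page.
conn_str = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=sqlserver.example.com,1433;DATABASE=SalesDB;"
    "UID=aip_reader;PWD=your_password;"
    "Encrypt=yes;TrustServerCertificate=yes;"
)

# INFORMATION_SCHEMA.TABLES lists every base table the signed-in account can see.
query = """
    SELECT TABLE_SCHEMA, TABLE_NAME
    FROM INFORMATION_SCHEMA.TABLES
    WHERE TABLE_TYPE = 'BASE TABLE'
    ORDER BY TABLE_SCHEMA, TABLE_NAME;
"""

conn = pyodbc.connect(conn_str, timeout=10)
try:
    for schema, name in conn.cursor().execute(query):
        print(f"{schema}.{name}")
finally:
    conn.close()
```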
Run, Edit, and Delete the imported data
Once the tables are imported, you can run, edit, or delete the imported dataflow. Follow these steps:
- Go to the Dataflow > Imports page.
Item | Description |
---|---|
Name | Displays the name of the imported Dataflow. |
Type | Displays the connector type symbol. |
Connector Name | Displays the connector name. |
Status | Indicates whether the data was imported successfully. |
Tables Count | Displays the number of tables imported. |
Created Date | Displays the date of creation. |
Last Refresh Type | Displays the refresh value: Full or Incremental. |
Updated Date | Displays last modified date. |
Last Refresh | Displays the latest refresh date, which updates each time you refresh the data. |
Actions | Provides multiple options for managing dataflows. |
- Select the horizontal ellipsis under the Actions column and do the following:
If you want to | Then |
---|---|
Modify the Dataflow | Select Edit and modify the Dataflow. Click Save to apply your changes. |
Execute the Dataflow | Select Run. |
Bring the data to its previous state | Select Rollback. |
Delete the Dataflow | Select Remove and then click the Delete button. All tables imported from the data source are deleted. |
See the run history of the dataflow | Select Run history. |
- Click Run to execute the dataflow. Once the execution is successful, the data pipeline status updates to Completed.
In the Dataflow's Run History Description, you can view error messages related to data import failures from a data source. Additionally, you can check the status, start time, and end time of the data pipeline execution.
Next step
After completing the data import, start the Master Data Management (MDM) - Resolve process to create a single, unified view of the data. With Skypoint MDM, you can ensure that your data is accurate, consistent, and reliable, making it easier to use for business processes, analytics, and reporting.