Data Connectors
Seamlessly ingest data from cloud storage and databases into Savanna.
TigerGraph Savanna offers a wide array of connectors to bring data from your existing infrastructure into your graph database.
1. Supported Data Sources
Savanna provides optimized connectors for the most common cloud storage and database platforms:
- Object Storage: Amazon S3, Google Cloud Storage (GCS), and Azure Blob Storage.
- Data Warehouses: Snowflake.
- Local: Direct CSV/TSV uploads.
- Generic: Spark-based JDBC connections for other relational databases.
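Under the hood, remote sources such as S3 are registered as GSQL data sources before a loading job can read from them. The sketch below is illustrative only: the data-source name and credential keys are placeholders, and the exact syntax varies across TigerGraph versions, so check the documentation for the version backing your workspace.

```gsql
-- Register an S3 data source (hypothetical name "s3_src"; credential
-- keys and quoting vary by TigerGraph version).
CREATE DATA_SOURCE S3 s3_src = "{
  \"file.reader.settings.fs.s3a.access.key\": \"<ACCESS_KEY>\",
  \"file.reader.settings.fs.s3a.secret.key\": \"<SECRET_KEY>\"
}" FOR GRAPH Social
```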
2. Ingestion Approaches
There are two ways to load data, depending on how much transformation your pipeline requires:
A. Step-by-Step Guide (No-Code)
A visual wizard that walks you through source configuration, schema mapping, and import options. This is the recommended path for standard file-based ingestion.
B. GSQL Template (Code-First)
For complex data transformations or logic (e.g., regex cleaning, conditional loading), you can use the GSQL Editor to write custom loading jobs.
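As a sketch of what a custom loading job looks like, the example below maps CSV columns to vertex attributes and uses a WHERE clause for conditional loading. The graph, vertex, and column names are hypothetical.

```gsql
-- Hypothetical loading job: maps CSV columns to Person vertex
-- attributes and skips rows whose "email" column is empty.
CREATE LOADING JOB load_people FOR GRAPH Social {
  DEFINE FILENAME f;
  LOAD f
    TO VERTEX Person VALUES ($"id", $"name", $"email")
    WHERE $"email" != ""
    USING SEPARATOR=",", HEADER="true", EOL="\n";
}
```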
3. Loading Procedure
- Select Workspace: Choose the active Read-Write workspace for the target database.
- Configure Source: Provide connection details and credentials (e.g., an S3 bucket URI and IAM role).
- Map Attributes: Link source columns to vertex/edge attributes.
- Monitor: Track ingestion progress and error logs in real time.
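In GSQL terms, the last steps above amount to pointing a loading job at the configured source and running it. A hedged sketch, assuming a data source and loading job named as in the examples here (both names are placeholders):

```gsql
-- Run a previously created loading job, binding its FILENAME variable
-- to a file in a registered S3 data source ("s3_src" is hypothetical).
RUN LOADING JOB load_people USING f="$s3_src:s3://my-bucket/people.csv"
```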
4. Token Functions
Savanna supports built-in Token Functions during ingestion to:
- Convert timestamps.
- Concatenate strings.
- Perform math operations on incoming numerical data.
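Within a loading job, token functions are applied directly in the VALUES clause. The fragment below (which belongs inside a CREATE LOADING JOB block) uses two built-ins, gsql_concat and gsql_to_epoch_seconds; the vertex and column names are hypothetical.

```gsql
-- Inside a CREATE LOADING JOB block: concatenate first/last name into
-- one attribute and convert a timestamp string to epoch seconds.
LOAD f TO VERTEX Person VALUES (
    $"id",
    gsql_concat($"first_name", " ", $"last_name"),
    gsql_to_epoch_seconds($"signup_time")
) USING SEPARATOR=",", HEADER="true";
```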
[!IMPORTANT] Large-scale data loading should be performed on a workspace sized appropriately for the data volume to ensure high throughput.