T-Ingestor is a codeless data ingestion tool that extracts source data from databases, files, and API data sets and loads it into Azure Data Lake & Snowflake, combined with Data Quality, Job Audit, and Ingestion Summary reports. The tool supports both on-demand and scheduled ingestion loads.
The accelerator aims to modernize data migration to cloud platforms, so that data insights are derived faster, the overall cost of data operations is optimized, and time to market is reduced. Traditional ETL tools lack data foundation capabilities such as catalog management, metadata management, data quality checking, and separation of data pipelines; these capabilities are the key differentiators of T-Ingestor.
Code Quality & Maintenance
Time to Value
Total Cost of Ownership
Set up project details, source & target connections, job scheduling, ingestion design patterns, data quality checks & business/custom rules.
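As an illustration of what such a codeless job setup might capture, the sketch below models an ingestion job definition as a simple mapping. All keys, connection names, and values are hypothetical; this is not T-Ingestor's actual configuration schema.

```python
# Hypothetical ingestion job definition -- field names and values are
# illustrative assumptions, not T-Ingestor's real configuration format.
job_config = {
    "project": "sales_analytics",
    "source": {"type": "oracle", "connection": "ORA_PROD", "table": "ORDERS"},
    "target": {"type": "snowflake", "connection": "SF_DW", "table": "RAW_ORDERS"},
    "schedule": {"mode": "scheduled", "cron": "0 2 * * *"},  # or "on-demand"
    "pattern": "full_load",  # ingestion design pattern, e.g. full vs. incremental
    "data_quality": {"null_check": ["ORDER_ID"], "custom_rules": ["amount_positive"]},
}

def validate_config(cfg):
    """Minimal sanity check that the required sections are present."""
    required = {"project", "source", "target", "schedule"}
    missing = required - cfg.keys()
    if missing:
        raise ValueError(f"missing config sections: {sorted(missing)}")
    return True
```

A codeless front end would typically collect these values through forms and persist them in a store like this, so a single validated definition drives both on-demand and scheduled runs.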
Set up standard business & custom validation rules at the individual attribute level.
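Attribute-level rules of this kind can be sketched as a mapping from attribute name to a predicate, with one standard and one custom rule shown. The rule names, record layout, and predicates below are assumptions for illustration, not T-Ingestor's actual rule engine.

```python
# Hypothetical attribute-level validation rules: one "standard" rule
# (email format) and one "custom" business rule (positive amount).
RULES = {
    "email": lambda v: isinstance(v, str) and "@" in v,               # standard rule
    "order_amount": lambda v: isinstance(v, (int, float)) and v > 0,  # custom rule
}

def validate_record(record, rules=RULES):
    """Return the attributes whose values fail their configured rule."""
    return [attr for attr, check in rules.items()
            if attr in record and not check(record[attr])]
```

Records with failing attributes would then be routed to the rejected-file output rather than loaded to the target.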
Set up reusable adapters for auto-discovery of metadata, with a manual override option; users can override data quality & PII checks.
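The auto-discovery-with-manual-override pattern can be sketched as merging user-supplied patches on top of discovered column metadata. The discovery output format and override shape below are assumptions, not the tool's actual adapter interface.

```python
# Hypothetical auto-discovered column metadata (e.g. from sampling a source table).
discovered = {
    "customer_id": {"type": "int", "pii": False, "dq_check": "not_null"},
    "email":       {"type": "string", "pii": True, "dq_check": "format_email"},
}

# Manual overrides: here the user opts the email column out of PII and DQ checks.
overrides = {
    "email": {"pii": False, "dq_check": None},
}

def apply_overrides(metadata, overrides):
    """Merge manual overrides on top of auto-discovered metadata (overrides win)."""
    merged = {col: dict(meta) for col, meta in metadata.items()}
    for col, patch in overrides.items():
        merged.setdefault(col, {}).update(patch)
    return merged
```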
Single view of end-to-end job audit statistics, along with data quality and business/custom rule validation results.
View summarized project/job statistics and monitor the progress of scheduled jobs.
Databases (Oracle, MySQL, SAP HANA, Hive) and files (Azure Blob, Google Drive, OneDrive, ADLS Gen1) are the data sources supported by T-Ingestor.
Databases (Azure SQL, Snowflake) and files (ADLS Gen1 & Gen2) are supported as data targets in T-Ingestor.
The job audit captures job start/end timestamps, job run status, and the output & rejected files for data quality, business rule, and custom rule checks.
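A record covering these audit fields could be modeled as a small data class. This is a minimal sketch; the class name, field names, and status values are illustrative, not T-Ingestor's actual audit schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical job audit record -- names and status values are assumptions.
@dataclass
class JobAudit:
    job_name: str
    start_ts: datetime
    end_ts: datetime
    status: str                       # e.g. "SUCCEEDED", "FAILED"
    output_file: str
    # Map of check type -> rejected-records file, e.g. {"data_quality": "rej_dq.csv"}
    rejected_files: dict = field(default_factory=dict)

    def duration_seconds(self):
        """Elapsed wall-clock time of the job run."""
        return (self.end_ts - self.start_ts).total_seconds()

audit = JobAudit(
    job_name="orders_load",
    start_ts=datetime(2024, 1, 1, 2, 0, tzinfo=timezone.utc),
    end_ts=datetime(2024, 1, 1, 2, 5, tzinfo=timezone.utc),
    status="SUCCEEDED",
    output_file="orders_20240101.parquet",
    rejected_files={"data_quality": "orders_20240101_rej_dq.csv"},
)
```

An audit report or summary dashboard would aggregate such records per project and per job.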
Delimited, ORC, and Parquet file formats are supported.