
Accelerator Synopsis
T-Ingestor is a codeless data ingestion tool that extracts source data from databases, files, and API data sets and loads it into Azure Data Lake and Snowflake, with built-in Data Quality checks, Job Audits, and Ingestion Summary Reports. The tool can be configured for both on-demand and scheduled ingestion loads.
The accelerator modernizes data migration to cloud platforms so teams can derive data insights faster, optimize the overall cost of data operations, and reduce time to market. Traditional ETL tools lack data foundation capabilities such as catalog management, metadata management, data quality checking, and separation of data pipelines; these capabilities are the key differentiators of T-Ingestor.
Value Addition

Accelerated Delivery
- Codeless & web-based data ingestion framework.
- Metadata-driven & scalable ingestion pipelines.
- Job audit reporting & monitoring summary dashboards for quick troubleshooting.

Code Quality & Maintenance
- Metadata auto-discovery & tier-based data quality and reconciliation checks.
- Business & custom rules setup for data patterns validation.
- Ready to deploy & easy to integrate with a variety of data sources/targets.

Time to Value
- Business Analysts/Data Engineers can conduct quick KPI discovery.
- Re-usable source/target connections & ingestion pipeline configuration.
- Search feature for easy tracking of project-specific ingestion pipelines.

Increase Productivity
- In-built data foundation capabilities – Catalog, Metadata, and Lineage.
- Enable “Bring Your Own Data (B.Y.O.D.)” capability.
- SSO-enabled user login provides quick onboarding of data analysts.

Total Cost of Ownership
- Easy to configure new source & target connections.
- Significant cost reduction with zero point-to-point interfaces.
Features

Job Configuration
Set up project details, source & target connections, job scheduling, ingestion design patterns, data quality checks & business/custom rules.
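T-Ingestor is configured through its web interface rather than code, but the minimal Python sketch below illustrates the kind of metadata a job configuration captures: re-usable source and target connections, a schedule, an ingestion design pattern, and data quality checks. Every field name and value here is a hypothetical illustration, not the tool's actual schema.

    # Hypothetical, metadata-driven job definition (illustrative only,
    # not T-Ingestor's internal format).
    job_config = {
        "project": "sales_analytics",
        "source": {
            "type": "oracle",                     # e.g. Oracle, MySQL, SAP HANA, Hive
            "connection": "conn_oracle_sales",    # re-usable connection name
            "object": "SALES.ORDERS",
        },
        "target": {
            "type": "snowflake",                  # e.g. Azure SQL, Snowflake, ADLS
            "connection": "conn_snowflake_dw",
            "object": "RAW.ORDERS",
        },
        "schedule": {"mode": "scheduled", "cron": "0 2 * * *"},  # or "on-demand"
        "ingestion_pattern": "incremental",       # e.g. full load vs. incremental
        "data_quality_checks": ["null_check", "row_count_reconciliation"],
    }

Because the pipeline is driven by metadata of this kind rather than hand-written code, onboarding a new feed amounts to adding another such definition.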

Business/Custom Rules
Set up standard business & custom validation rules at the specific attribute level, as sketched below.
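As a rough illustration of attribute-level validation, the sketch below defines two hypothetical rules (a pattern check and a range check) and a small checker that reports which attributes fail; the rule structure and function are assumptions for illustration, not T-Ingestor's rule syntax.

    import re

    # Hypothetical attribute-level rules: each rule targets one attribute (column);
    # records that fail would typically be routed to a reject file.
    rules = [
        {"attribute": "email", "type": "pattern", "pattern": r"^[^@\s]+@[^@\s]+\.[^@\s]+$"},
        {"attribute": "order_amount", "type": "range", "min": 0, "max": 1_000_000},
    ]

    def validate(record: dict) -> list[str]:
        """Return the names of attributes that fail their rules."""
        failed = []
        for rule in rules:
            value = record.get(rule["attribute"])
            if rule["type"] == "pattern" and not re.match(rule["pattern"], str(value)):
                failed.append(rule["attribute"])
            elif rule["type"] == "range" and not (rule["min"] <= value <= rule["max"]):
                failed.append(rule["attribute"])
        return failed

    print(validate({"email": "jane@example.com", "order_amount": 250}))  # -> []
    print(validate({"email": "not-an-email", "order_amount": -5}))       # -> ['email', 'order_amount']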

Metadata Discovery
Set up re-usable adapters for auto-discovery of metadata, with a manual override function. Users can override data quality & PII checks.
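The sketch below shows, under assumed field names, what an auto-discovered column catalog might look like and how a manual override could flag a column as PII and attach an extra data quality check; it is illustrative only.

    # Hypothetical result of metadata auto-discovery for one source table,
    # followed by a manual override of the PII flag and data quality checks.
    discovered = {
        "table": "CUSTOMERS",
        "columns": [
            {"name": "customer_id", "type": "NUMBER",  "pii": False, "dq_checks": ["not_null"]},
            {"name": "email",       "type": "VARCHAR", "pii": False, "dq_checks": []},
        ],
    }

    # Manual override: mark the email column as PII and add a format check.
    for column in discovered["columns"]:
        if column["name"] == "email":
            column["pii"] = True
            column["dq_checks"].append("email_format")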

Job Auditing
Single view of end-to-end job audit statistics, along with data quality and business/custom rule validation results.
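For illustration, a single audit entry might combine run statistics with rule outcomes; the fields mirror what the audit captures (job start/end timestamps, run status, output and rejected files), while the structure and paths shown are hypothetical.

    # Hypothetical audit record for one job run (structure and paths are illustrative).
    audit_record = {
        "job_id": "sales_orders_daily",
        "start_time": "2024-05-01T02:00:00Z",
        "end_time": "2024-05-01T02:07:42Z",
        "status": "SUCCEEDED",
        "output_file": "adls://raw/orders/2024-05-01/orders.parquet",
        "rejected_files": {
            "data_quality": "adls://reject/orders/2024-05-01/dq.csv",
            "business_rules": "adls://reject/orders/2024-05-01/business.csv",
            "custom_rules": "adls://reject/orders/2024-05-01/custom.csv",
        },
    }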

Monitoring Dashboards
View summarized project/job statistics and monitor the progress of regular jobs.

FAQs
Which data sources does T-Ingestor support?
DB (Oracle, MySQL, SAP HANA, Hive) and Files (Azure Blob, Google Drive, OneDrive, ADLS Gen1).

Which data targets are supported?
DB (Azure SQL, Snowflake) and Files (ADLS Gen1 & Gen2).

What information is captured in the job audit?
Job start/end timestamps, job run status, and the output file & rejected files for data quality, business rule, and custom rule checks.

Which file formats are supported?
Delimited, ORC & Parquet file formats.

