Training: Qlik Talend Cloud

Training Objective

The Qlik Talend Cloud Developer Training is a hands-on, two-day program that equips participants with the practical skills to design, build, and operationalize modern data pipelines using Qlik Talend Cloud. The training covers real-time ingestion using Change Data Capture (CDC), data landing, full pipeline development, and governance topics such as the Qlik Cloud Catalog, Data Quality (Qlik Trust Score™), Data Products, and Data Stewardship.

Participants complete hands-on labs reflecting real customer implementations, including replication to Databricks, landing data into Azure Data Lake, and building an entire pipeline from landing to data mart.

Target Audience

This training is ideal for:

  • Data Engineers / Integration Developers working on ingestion, replication and ELT patterns.
  • Lakehouse Engineers responsible for Databricks-based ingestion layers.
  • Analytics Engineers delivering curated star schemas and data marts.
  • Data Platform Admins managing connections, spaces, and operational governance.
  • Data Governance & Stewardship teams responsible for lineage, quality scoring, and remediation workflows.

Prerequisites

  • Basic data modelling knowledge (tables, joins, keys, facts and dimensions) is sufficient.
  • No prior Qlik Talend Cloud experience required.
  • No prior CDC knowledge required — CDC concepts are taught on Day 1.

What You Will Learn

Participants will be able to:

  • Configure replication projects using Full Load and CDC Apply Changes.
  • Land data into cloud data lakes such as ADLS.
  • Build complete pipeline projects covering Landing → Storage → Transform → Data Mart.
  • Use Catalog for dataset discovery, metadata inspection, lineage, profiling and freshness.
  • Build Data Products, document them, and make them consumable via the internal marketplace.
  • Assess data health using Qlik Trust Score™ (validity, completeness, timeliness, discoverability, usage).
  • Run Data Stewardship Sprints to validate and remediate data issues with human oversight.

Agenda

Day 1 — Replication, Landing & Complete Pipeline Build

Theory

  • Introduction to Qlik Talend Cloud capabilities and architecture.
  • In-depth explanation of Change Data Capture (CDC) and its supported methods.
  • Understanding Replication Projects: connections, dataset selection, schema configuration, transformation rules.
  • Data Landing: landing tasks and staging.
  • Full Pipeline architecture: Landing → Storage → Transform → Data Mart.
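The Full Load and Apply Changes pattern covered above can be sketched independently of any tool: a full load seeds the target with a snapshot, after which ordered change events (insert, update, delete) keep it in sync. A minimal, tool-agnostic Python sketch — the event shape (`op`, `key`, `row`) is illustrative and not Qlik's internal format:

```python
# Tool-agnostic sketch of Full Load + Apply Changes (CDC).
# The event shape ("op", "key", "row") is illustrative, not Qlik's format.

def full_load(source_rows, key):
    """Seed the target with a snapshot of the source."""
    return {row[key]: dict(row) for row in source_rows}

def apply_changes(target, events, key):
    """Apply ordered change events (insert/update/delete) to the target."""
    for event in events:
        if event["op"] in ("insert", "update"):
            target[event["row"][key]] = dict(event["row"])
        elif event["op"] == "delete":
            target.pop(event["key"], None)
    return target

# Full load seeds the target, then CDC events bring it up to date.
snapshot = [{"id": 1, "name": "Ann"}, {"id": 2, "name": "Bob"}]
target = full_load(snapshot, key="id")
events = [
    {"op": "update", "row": {"id": 2, "name": "Bobby"}},
    {"op": "insert", "row": {"id": 3, "name": "Cleo"}},
    {"op": "delete", "key": 1},
]
apply_changes(target, events, key="id")
```

In a real replication project the platform captures these events from the source's transaction log and applies them to the target for you; the sketch only shows the ordering semantics the training explains.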

Hands-On Exercises

Exercise 1 — Replicate SQL Server to Databricks
Configure Full Load + Apply Changes (CDC), use ADLS staging, add transformation rules (prefix/rename).
Exercise 2 — Land Salesforce Data into Azure Data Lake
Choose datasets, prepare, run, and inspect landed storage objects.
Exercise 3 — Build the Full Pipeline in Databricks
Complete Landing, Storage and Transform tasks, model relationships, create star schemas (fact + Type 2 dimensions), and inspect physical objects.
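The Type 2 dimensions built in Exercise 3 can be illustrated in isolation: instead of overwriting a changed attribute, the current row version is closed and a new version is opened, preserving history. A minimal Python sketch — the column names (`valid_from`, `valid_to`, `is_current`) are illustrative conventions, not the exact columns the platform generates:

```python
# Minimal Type 2 slowly changing dimension: keep full history by
# closing the current row version and opening a new one on change.
# Column names (valid_from, valid_to, is_current) are illustrative.

def scd2_upsert(dim_rows, incoming, key, change_date):
    current = next(
        (r for r in dim_rows if r[key] == incoming[key] and r["is_current"]),
        None,
    )
    if current is None:
        # New entity: open its first version.
        dim_rows.append({**incoming, "valid_from": change_date,
                         "valid_to": None, "is_current": True})
    elif any(current[c] != incoming[c] for c in incoming):
        # Attribute changed: close the old version, open a new one.
        current["valid_to"] = change_date
        current["is_current"] = False
        dim_rows.append({**incoming, "valid_from": change_date,
                         "valid_to": None, "is_current": True})
    return dim_rows

dim = []
scd2_upsert(dim, {"cust_id": 7, "city": "Berlin"}, "cust_id", "2024-01-01")
scd2_upsert(dim, {"cust_id": 7, "city": "Munich"}, "cust_id", "2024-06-01")
# dim now holds two versions: Berlin (closed) and Munich (current)
```

Fact tables then join to the dimension version that was current at the time of the event, which is what makes the star schema historically accurate.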

Day 2 — Data Governance, Data Quality, Data Products & Stewardship

Theory

  • Working with Catalog: metadata inspection, lineage, profiling, quality indicators.
  • Data Quality using Qlik Trust Score™: understanding all dimensions and their relevance.
  • Building and governing Data Products: documentation, dataset grouping, trust score aggregation, marketplace activation.
  • Data Stewardship: sprint creation, assignment, validation and remediation process.

Hands-On Exercises

Exercise 4 — Data Quality Assessment & Trust Score™
Compute data quality metrics, review validity and completeness, adjust freshness thresholds, and analyze dataset anomalies using profiling.
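Two of the metrics reviewed in this exercise, completeness and freshness, have simple generic definitions: completeness is the share of non-null values in a column, and freshness checks data age against a threshold. The sketch below shows these generic definitions in Python; it is not Qlik's actual Trust Score™ computation, only the kind of check the exercise asks participants to reason about:

```python
from datetime import datetime, timedelta

# Illustrative data-quality checks in the spirit of Exercise 4.
# Generic definitions, not Qlik's Trust Score(TM) formulas.

def completeness(rows, column):
    """Share of rows with a non-null value in `column`."""
    if not rows:
        return 0.0
    return sum(1 for r in rows if r.get(column) is not None) / len(rows)

def is_fresh(last_updated, now, threshold):
    """True if the dataset was refreshed within the freshness threshold."""
    return (now - last_updated) <= threshold

rows = [
    {"email": "a@x.io"},
    {"email": None},
    {"email": "c@x.io"},
    {"email": "d@x.io"},
]
completeness(rows, "email")   # -> 0.75
is_fresh(datetime(2024, 6, 1), datetime(2024, 6, 2), timedelta(days=7))  # -> True
```

Adjusting the freshness threshold, as the exercise does, simply moves the cut-off in the `is_fresh` comparison: a tighter threshold flags more datasets as stale.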
Exercise 5 — Build & Publish a Data Product
Group curated datasets, add metadata, review aggregated trust score, and activate to the internal marketplace.
Exercise 6 — Data Stewardship Sprint
Create a sprint, assign records, validate and remediate data stored in the organization's cloud warehouse.

Cost

  • €1,350 per participant for the two days

More information