Introduction

To work with data in Microsoft Fabric, you first need to get data into it. There are several ways to do this; one of them is a data pipeline. In this tutorial, we explain step by step how to ingest data into a Fabric lakehouse using a data pipeline.

Goal

A CSV file should be ingested into a delta table in the lakehouse.

Source

The source data is a CSV file named student.csv, stored under "Files" in the lakehouse.

Target

The target is a delta table in the lakehouse.
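
For context, this is exactly the transformation the pipeline will perform for us. The same ingestion could also be written by hand in a Fabric notebook with PySpark. The following is a minimal sketch, assuming the notebook has this lakehouse attached as its default lakehouse and that the target table name student is our own choice (adjust the path and table name to your setup):

    # Minimal PySpark sketch of the ingestion (runs in a Fabric notebook).
    # Assumes the lakehouse is attached as the default lakehouse, so the
    # relative "Files/..." path resolves, and that the table name "student"
    # is an assumption, not prescribed by this tutorial.

    df = (
        spark.read
        .option("header", "true")       # first row holds the column names
        .option("inferSchema", "true")  # let Spark derive the column types
        .csv("Files/student.csv")
    )

    # Save as a managed delta table; it appears under "Tables" in the lakehouse.
    df.write.format("delta").mode("overwrite").saveAsTable("student")

The rest of this tutorial achieves the same result without code, using a data pipeline.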

Step 1: Open Data Engineering Experience

We have already signed in to Fabric, opened the Data Engineering Experience, and selected the workspace "dlnerds_workspace".

Here you can see the Data Engineering Experience.

Step 2: Create Data Pipeline

We have already created the data pipeline "dlnerds_pipeline". Check out the following post to learn how to create a data pipeline.

How to create a Data Pipeline in Microsoft Fabric: A Step-by-Step Guide

Open the data pipeline. This leads you to the Data Factory interface.

Step 3: Add "Copy data" Activity

Add a "Copy data" activity to the data pipeline. To do this, click on the "Pipeline activity" card and select "Copy data".

A new data pipeline canvas containing a "Copy data" activity opens.
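
Once the activity's source (the CSV file) and destination (the delta table) have been configured and the pipeline has run, you can check the result in a notebook. A minimal sketch, assuming the target table was named student as in the PySpark example above:

    # Quick verification sketch (Fabric notebook, lakehouse attached).
    # "student" is the assumed table name from the sketch above.
    spark.sql("SELECT * FROM student LIMIT 10").show()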
