A data processing pipeline is the series of stages and actions data goes through to be collected, prepared, and presented. An end-to-end data pipeline manages data at every step, from the originating source all the way to the dashboards and analytics that deliver business insights. End-to-end pipelines use programmatic (and often automated) processes that can handle massive amounts of data very quickly, allowing you to make faster data-driven decisions. Let’s take a look at the processes and workflows in an end-to-end data pipeline before discussing how these processes power business insights.
There are five basic stages in an end-to-end data pipeline:
The first stage is sourcing the data to be processed by the pipeline. The source is typically a database or data stream. Automated data pipelines often use data profiling to evaluate and categorize data before it enters the pipeline.
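To give a rough idea of what that profiling step can look like, here is a minimal Python sketch that summarizes a batch of incoming records before they are allowed into the pipeline. The record structure and field names are made up for the example, not taken from any particular tool:

```python
# Minimal data-profiling sketch: summarize a batch of source records
# before they enter the pipeline. The record structure is hypothetical.
from collections import Counter

def profile_records(records):
    """Report row count, field coverage, and per-field null rates."""
    field_counts = Counter()
    null_counts = Counter()
    for row in records:
        for field, value in row.items():
            field_counts[field] += 1
            if value in (None, ""):
                null_counts[field] += 1
    total = len(records)
    return {
        "row_count": total,
        "fields": {
            field: {
                "present_in": count,
                "null_rate": null_counts[field] / total if total else 0.0,
            }
            for field, count in field_counts.items()
        },
    }

sample = [
    {"order_id": 1, "amount": 42.0, "region": "EU"},
    {"order_id": 2, "amount": None, "region": "US"},
]
print(profile_records(sample))
```

A real pipeline would feed a summary like this into validation rules or routing logic, but the idea is the same: understand the data before ingesting it.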
In the next stage, data is actually ingested by the pipeline. An end-to-end pipeline may use batch ingestion, which pulls in groups of data according to a predefined schedule or trigger, or streaming ingestion, which processes data in real time. Batch ingestion is frequently used to handle very large volumes of data that don’t require immediate processing, such as payroll or supply chain records. Streaming ingestion is used when real-time processing is required, such as for ATMs and air traffic control.
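To make the distinction concrete, here is a hedged Python sketch of both ingestion styles. The `fetch_batch` source and `load_into_pipeline` sink are hypothetical placeholders for real connectors such as a database client or a message-queue consumer:

```python
# Sketch of batch vs. streaming ingestion. The source and sink functions
# are hypothetical placeholders for real connectors.
import time

def fetch_batch(since):
    """Stand-in for pulling all records created after `since` from a source."""
    return [{"id": 1, "created_at": since + 10}]

def load_into_pipeline(records):
    print(f"ingested {len(records)} record(s)")

def ingest_batch(interval_seconds=3600, max_runs=3):
    """Batch ingestion: pull a group of records on a fixed schedule."""
    last_run = 0.0
    for _ in range(max_runs):
        load_into_pipeline(fetch_batch(since=last_run))
        last_run = time.time()
        time.sleep(interval_seconds)

def ingest_stream(event_source):
    """Streaming ingestion: handle each event the moment it arrives."""
    for event in event_source:          # e.g. a queue or socket consumer
        load_into_pipeline([event])

ingest_stream([{"id": 7, "type": "card_swipe"}])  # simulated event stream
```

The only real difference is the trigger: batch ingestion waits for a schedule, while streaming ingestion reacts to every event as it arrives.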
In this stage, data from multiple sources is also cleansed, which involves removing duplicate, redundant, or irrelevant data. In end-to-end pipelines that use the ETL (extract, transform, load) process, data is also transformed in this stage into the format required by the destination data warehouse. Other pipelines use ELT (extract, load, transform), which waits until the data reaches its destination before reformatting it; this approach is typically used with data lakes and cloud-based storage that accept unstructured, raw data.
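For illustration, the following Python sketch shows what cleansing and an ETL-style transform might look like. The field names and target schema are assumptions made for the example, not a prescribed format:

```python
# Cleansing/transform sketch (ETL-style: transform before loading).
# Field names and the target schema are hypothetical.
def cleanse(records):
    """Drop exact duplicates and rows missing a required key."""
    seen = set()
    cleaned = []
    for row in records:
        key = row.get("order_id")
        if key is None or key in seen:
            continue
        seen.add(key)
        cleaned.append(row)
    return cleaned

def transform(records):
    """Reshape rows into the schema the destination warehouse expects."""
    return [
        {
            "order_id": row["order_id"],
            "amount_usd": round(float(row.get("amount", 0) or 0), 2),
            "region": (row.get("region") or "UNKNOWN").upper(),
        }
        for row in records
    ]

raw = [
    {"order_id": 1, "amount": "42.5", "region": "eu"},
    {"order_id": 1, "amount": "42.5", "region": "eu"},   # duplicate
    {"order_id": 2, "amount": None, "region": None},
]
print(transform(cleanse(raw)))
```

In an ELT pipeline the `transform` step would simply run later, inside the destination system, rather than before loading.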
After ingestion and integration, data is transferred to a storage location. As mentioned above, this will typically be either a data warehouse for structured (filtered) data or a data lake for raw (unfiltered) data. To understand the difference between these two types of storage locations, just look at the names.
In a real, brick-and-mortar warehouse, items are carefully categorized and labeled before being stored in organized shelves and aisles. A data warehouse works the same way—data needs to be formatted, tagged, and structured by an ETL pipeline before it can be stored.
A data lake, on the other hand, works like a real lake, which accepts water from any of the streams that feed into it. A data lake can take on any kind of raw, unfiltered data from any source. Once the data is stored, ELT transforms it as needed for analytics or data science applications.
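The sketch below contrasts the two storage patterns, using a local SQLite table as a stand-in for a data warehouse and a local folder as a stand-in for a data lake; in practice these would be services such as Snowflake, BigQuery, Amazon S3, or Azure Data Lake:

```python
# Storage sketch: a local SQLite table standing in for a warehouse
# (structured rows) and a local folder standing in for a data lake
# (raw files). Real targets would be cloud warehouse/lake services.
import json
import sqlite3
from pathlib import Path

def load_to_warehouse(rows):
    """ETL-style: rows already match the destination table schema."""
    conn = sqlite3.connect("warehouse.db")
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders (order_id INTEGER, amount_usd REAL, region TEXT)"
    )
    conn.executemany(
        "INSERT INTO orders VALUES (:order_id, :amount_usd, :region)", rows
    )
    conn.commit()
    conn.close()

def load_to_lake(raw_payload, name):
    """ELT-style: land the raw payload as-is and transform it later."""
    lake = Path("lake/raw")
    lake.mkdir(parents=True, exist_ok=True)
    (lake / f"{name}.json").write_text(json.dumps(raw_payload))

load_to_warehouse([{"order_id": 1, "amount_usd": 42.5, "region": "EU"}])
load_to_lake({"anything": ["goes", {"here": True}]}, name="event_batch_001")
```

Note that the warehouse insert fails if the rows don’t fit the schema, while the lake accepts whatever shape the payload happens to have, which is exactly the warehouse/lake trade-off described above.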
Now that your data is in its intended location and in the correct format, your analytics, machine learning, business intelligence, and other data science tools can put that data to work. While every application is different, they will generally connect to your data storage via an API and query for new data either on demand (when you push a button) or automatically (based on triggers or a schedule).
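As a simple example of that analysis step, the sketch below queries the SQLite stand-in warehouse from the storage example, either on demand or on a schedule; the aggregation and the scheduling loop are illustrative only:

```python
# Sketch of the analysis step: query stored data on demand or on a schedule.
# Reuses the SQLite stand-in warehouse from the storage sketch above.
import sqlite3
import time

def revenue_by_region():
    """On-demand query: aggregate whatever is currently in storage."""
    conn = sqlite3.connect("warehouse.db")
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders (order_id INTEGER, amount_usd REAL, region TEXT)"
    )
    rows = conn.execute(
        "SELECT region, SUM(amount_usd) FROM orders GROUP BY region"
    ).fetchall()
    conn.close()
    return dict(rows)

def run_on_schedule(job, interval_seconds=3600, max_runs=3):
    """Scheduled trigger: rerun the query and hand results to reporting."""
    for _ in range(max_runs):
        print(job())            # in practice: push results to a dashboard
        time.sleep(interval_seconds)

print(revenue_by_region())      # on demand ("when you push a button")
```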
Finally, the results from data analysis are delivered to your organization in the form of dashboards, reports, and visualizations. You can then use these analytics to make better, data-driven business decisions.
Using an end-to-end data pipeline to feed data into an analytics or data science application provides you with powerful business insights. Some of the benefits of using these processes include:
When it comes to actually implementing an end-to-end data pipeline, you have two basic choices: purchase an off-the-shelf solution or build your own data pipeline. The former option is usually easier, especially for smaller or inexperienced teams. However, creating a custom data pipeline gives you greater control and flexibility, allowing you to get the most out of your valuable business data.