Introduction to Workflows

A workflow is the foundation for configuring a data processing pipeline. It defines how files are collected, processed, and stored by chaining together an input connector, processing logic, and an output connector.
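
As a rough mental model, a workflow definition pairs an input connector and an output connector with an ordered list of processing steps. The sketch below is hypothetical: every key and value is an illustrative placeholder, not the product's actual configuration schema.

```python
# A hypothetical workflow definition in declarative form.
# All connector names, keys, and values are illustrative assumptions.
workflow = {
    "input": {"connector": "s3", "bucket": "invoices-incoming"},
    "steps": ["extract_fields", "normalize_dates"],  # run in this order
    "output": {"connector": "postgres", "table": "invoices"},
}
```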

How It Works

  1. Input Connectors – The workflow begins by picking up files from a designated input source. This could be a cloud storage location, an API, or a local directory.
  2. Processing Stage – Once the files are ingested, your configured prompts (exported tools) run in sequence, extracting, transforming, or enriching the data as required.
  3. Output Connectors – Finally, the processed data is stored in the configured destination, such as a database, a file system, or another service (see the execution sketch after this list).
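
The following is a minimal sketch of these three stages in plain Python, assuming a simple local, file-based pipeline. Function names such as local_directory_input and run_workflow are illustrative stand-ins, not the product's actual API.

```python
from pathlib import Path
from typing import Callable, Iterable

def local_directory_input(source: Path) -> Iterable[tuple[str, str]]:
    """Input connector: yield (filename, contents) pairs from a directory."""
    for file in sorted(source.glob("*.txt")):
        yield file.name, file.read_text()

def local_directory_output(dest: Path, name: str, data: str) -> None:
    """Output connector: write the processed data to the destination."""
    dest.mkdir(parents=True, exist_ok=True)
    (dest / name).write_text(data)

def run_workflow(source: Path, dest: Path,
                 steps: list[Callable[[str], str]]) -> None:
    """For each ingested file, run the processing steps in sequence,
    then hand the result to the output connector."""
    for name, data in local_directory_input(source):
        for step in steps:  # configured steps run in order
            data = step(data)
        local_directory_output(dest, name, data)

# Example run with two trivial stand-in processing steps.
run_workflow(Path("input"), Path("output"), steps=[str.strip, str.upper])
```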

Why Use Workflows?

  • Automation – Streamline repetitive tasks without manual intervention.
  • Scalability – Handle large datasets efficiently with structured processing.
  • Flexibility – Customize workflows by integrating different connectors and processing tools.

With workflows, you can build an end-to-end automated data pipeline that moves data reliably from input to output.