3  Data Flows

This section summarizes the supported data movement patterns for a custom integration.

3.1 Ingestion Flow

  1. The customer and Quiver agree which datasets are included in scope.
  2. The customer and Quiver select the delivery pattern for each dataset.
  3. The customer grants the agreed access or delivers files to the agreed Quiver location.
  4. Quiver configures the connector with approved credentials, file locations, source identifiers, and schedule.
  5. The connector extracts or processes the selected entities.
  6. Raw and normalized records are stored in the customer-specific data destination.
  7. Mapping and transformation jobs prepare the curated models used by planning, KPIs, and visualizations.
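Steps 4–7 above can be sketched in code. This is a minimal illustration only; `ConnectorConfig`, `Staging`, and `run_ingestion` are hypothetical names, not Quiver's actual connector API.

```python
# Hypothetical sketch of an ingestion run (steps 4-7 above); all names
# are illustrative, not part of any real Quiver interface.
from dataclasses import dataclass, field

@dataclass
class ConnectorConfig:
    source_id: str   # agreed source identifier (step 4)
    entities: list   # datasets agreed in scope (step 1)
    schedule: str    # agreed schedule, e.g. "daily" (step 4)

@dataclass
class Staging:
    raw: list = field(default_factory=list)
    normalized: list = field(default_factory=list)

def run_ingestion(config, extract, normalize, staging):
    """Extract the agreed entities (step 5), then store both raw and
    normalized records in the customer-specific destination (step 6)."""
    for entity in config.entities:
        records = extract(config.source_id, entity)
        staging.raw.extend(records)
        staging.normalized.extend(normalize(r) for r in records)
    return staging
```

The `extract` and `normalize` callables stand in for the source-specific connector logic and the mapping jobs of step 7.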

3.2 Data Categories

| Category | Examples | Initial Handling |
| --- | --- | --- |
| Master data | Products, customers, suppliers | Full snapshot or incremental load depending on source support |
| Transactions | Orders, invoices, inventory movements | Incremental load |
| Reference data | Dimensions, statuses, classifications | Full snapshot or incremental load depending on source support |
| Current-state snapshots | Inventory balances, open orders | Full snapshot or incremental load depending on source support |

3.3 Synchronization Patterns

Quiver supports both customer-initiated delivery (Push) and Quiver-initiated extraction (Pull). A custom integration can also use a hybrid approach where different datasets use different patterns.

| Pattern | When It Fits | Customer Provides | Quiver Provides |
| --- | --- | --- | --- |
| SFTP file push | The customer already has an approved export process or wants to control extraction from source systems. | Scheduled file export and delivery to the agreed SFTP location. | Managed SFTP endpoint, file processing, validation, ingestion, and mapping. |
| API or OData pull | A stable approved endpoint exists and can expose the agreed datasets. | Endpoint details, credentials, access grants, rate limits, and source documentation. | Scheduled connector, authentication handling, extraction, retries, and mapping. |
| Database or data warehouse pull | The preferred source is a reporting schema, staging schema, or curated data warehouse. | Approved network path, read-only credentials, and schema documentation. | Scheduled connector, ingestion, validation, and mapping. |
| File download pull | Files are generated by the customer but should be retrieved by Quiver from an approved location. | File location, credentials or signed access, naming conventions, and delivery schedule. | Scheduled retrieval, file processing, validation, and mapping. |
| Hybrid by dataset | Different datasets already exist in different approved interfaces. | Dataset-by-dataset interface decision and source ownership. | Combination of the above. |
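A hybrid scope can be captured as a simple dataset-to-pattern mapping. The dataset names and pattern keys below are illustrative examples, not a fixed Quiver configuration schema.

```python
# Illustrative hybrid configuration: each dataset keeps the delivery
# pattern that already fits its approved interface. All entries are
# examples, not a real configuration format.
DATASET_PATTERNS = {
    "products":           "api_pull",       # stable OData endpoint exists
    "orders":             "sftp_push",      # approved nightly export job
    "inventory_balances": "database_pull",  # reporting schema access
    "price_lists":        "file_download",  # files published by customer
}

def pattern_for(dataset):
    """Resolve the agreed delivery pattern for a dataset, or fail loudly
    so an unscoped dataset is never silently ingested."""
    try:
        return DATASET_PATTERNS[dataset]
    except KeyError:
        raise ValueError(f"dataset {dataset!r} has no agreed delivery pattern")
```

Failing loudly on unknown datasets reflects the principle that scope is agreed dataset by dataset before anything flows.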

Real-time streaming, webhook-based event push, and write-back to customer source systems are not part of the default initial integration scope. They can be reviewed separately if required.

3.3.1 Push

In the push pattern, the customer exports data from their source systems and delivers files to a Quiver-managed SFTP location.

This pattern is used when the customer wants to control extraction from the source environment, when direct external access is not available, or when the source system already has an approved export process. For ERP-based integrations, this is often the lowest-effort starting point because it builds on existing approved export processes.

Supported push deliveries are based on flat files in standard data formats, including:

  • CSV and other delimited text formats.
  • JSON.
  • XML, including SAP IDoc-style XML exports (e.g., material master, order, or inventory IDocs).
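The supported file formats above can be routed to the appropriate parser by extension. The following sketch uses only the Python standard library; it is a simplified illustration, not the production file processor.

```python
import csv
import io
import json
import xml.etree.ElementTree as ET

def parse_delivery(filename, content):
    """Route a pushed flat file to a parser based on its extension.
    Returns a list of record dicts. A sketch only; real deliveries
    also need encoding, delimiter, and schema handling."""
    suffix = filename.rsplit(".", 1)[-1].lower()
    if suffix == "csv":
        return list(csv.DictReader(io.StringIO(content)))
    if suffix == "json":
        data = json.loads(content)
        return data if isinstance(data, list) else [data]
    if suffix == "xml":
        root = ET.fromstring(content)
        # Flatten each child element into a dict of its sub-elements,
        # e.g. one record per exported segment.
        return [{child.tag: child.text for child in rec} for rec in root]
    raise ValueError(f"unsupported delivery format: {filename!r}")
```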

The customer is responsible for generating and delivering the files according to the agreed schedule. Quiver is responsible for receiving the files, validating the delivery, ingesting the contents, and transforming the data into the agreed downstream models.

SFTP access is provisioned for the agreed integration scope and should be limited to the users, service accounts, and source environments required for the delivery. Files are transferred over encrypted transport and processed into the customer-specific data destination.

sequenceDiagram
  participant CustomerJob as Customer Export Job
  participant SFTP as Quiver-Managed SFTP
  participant Connector as Quiver Connector
  participant Staging as Data Staging
  participant Mapping as Mapping Layer
  participant App as Quiver Environments
  participant Ops as Quiver Operations

  CustomerJob->>SFTP: Deliver agreed files
  Connector->>SFTP: Pick up new files on schedule
  Connector->>Connector: Validate file presence and structure
  Connector->>Staging: Load raw and normalized records
  Mapping->>Staging: Read staged data
  Mapping->>App: Publish curated models
  Connector-->>Ops: Alert on missing files or validation failures
Figure 3.1: SFTP push sequence
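The "validate file presence and structure" step in Figure 3.1 can be sketched as a check that every expected file arrived with the agreed header. The function below is an illustrative simplification assuming comma-delimited headers.

```python
# Sketch of the connector-side delivery check from the push flow:
# verify that all expected files arrived and each carries the agreed
# header columns. Illustrative only; assumes comma-delimited headers.
def validate_delivery(expected, delivered):
    """expected:  {filename: [required header columns]}
    delivered: {filename: first header line of the received file}
    Returns a list of problems; an empty list means the delivery passes,
    anything else would trigger the operations alert in Figure 3.1."""
    problems = []
    for name, required in expected.items():
        if name not in delivered:
            problems.append(f"missing file: {name}")
            continue
        columns = [c.strip() for c in delivered[name].split(",")]
        absent = [c for c in required if c not in columns]
        if absent:
            problems.append(f"{name}: missing columns {absent}")
    return problems
```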

3.3.2 Pull

In the pull pattern, Quiver retrieves data from approved customer sources according to the agreed schedule and access model.

Supported pull sources include:

  • APIs, including common API standards such as OData.
  • Databases, where network access and credentials have been approved.
  • Flat files made available for download from approved locations.

Quiver supports a broad range of source formats and access patterns. Where a standard connector or protocol is available, Quiver will use that as the preferred integration approach. Where the source interface is more specialized, Quiver can adapt the integration approach to match the available customer-side interface, subject to implementation review and agreement on scope.

Pull integrations use approved credentials and secure transport for the source system in question. Network access, API permissions, database permissions, and service accounts should be restricted to the datasets and operations required for the integration.

sequenceDiagram
  participant Orchestrator as Quiver Orchestrator
  participant Connector as Quiver Connector
  participant Source as Approved Customer Source
  participant Staging as Data Staging
  participant Mapping as Mapping Layer
  participant App as Quiver Environments
  participant Ops as Quiver Operations

  Orchestrator->>Connector: Start scheduled synchronization
  Connector->>Source: Authenticate and request agreed datasets
  Source-->>Connector: Return source records or files
  Connector->>Connector: Validate freshness, schema, and volume
  Connector->>Staging: Load raw and normalized records
  Mapping->>Staging: Read staged data
  Mapping->>App: Publish curated models
  Connector-->>Ops: Alert on access, schema, or data issues
Figure 3.2: Scheduled pull sequence
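The "validate freshness, schema, and volume" step in Figure 3.2 can be illustrated as three checks over a pulled batch. Thresholds and field names below are hypothetical; real limits are agreed per dataset.

```python
# Sketch of the pull-side batch validation from Figure 3.2: freshness,
# volume, and schema checks. All thresholds are illustrative.
from datetime import datetime, timedelta

def validate_pull(records, required_fields, min_rows, newest_ts, max_age, now):
    """Return a list of issues for the operations alert; empty = healthy.
    `now` is passed in explicitly so the check is deterministic."""
    issues = []
    if now - newest_ts > max_age:
        issues.append("stale data: newest record older than allowed age")
    if len(records) < min_rows:
        issues.append(f"low volume: {len(records)} rows, expected >= {min_rows}")
    for i, rec in enumerate(records):
        missing = [f for f in required_fields if f not in rec]
        if missing:
            issues.append(f"row {i}: missing fields {missing}")
            break  # one schema issue is enough to raise an alert
    return issues
```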

3.4 Data Scoping And Filtering

The customer and Quiver agree the business scope for the integration before production use. Typical filter dimensions include company code, business unit, plant, warehouse, and product hierarchy.

Where the customer approves delivery of a broader dataset, Quiver can apply agreed filtering rules in the customer-specific mapping layer.
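Mapping-layer filtering against the agreed scope can be sketched as below. The scope values and dimension names are examples drawn from the filter dimensions listed above, not a real configuration.

```python
# Illustrative filtering in the mapping layer: keep only records inside
# the agreed business scope. Dimension names and values are examples.
AGREED_SCOPE = {
    "company_code": {"1000", "2000"},
    "plant":        {"P100", "P200"},
}

def in_scope(record, scope=AGREED_SCOPE):
    """True only if the record matches every agreed filter dimension."""
    return all(record.get(dim) in allowed for dim, allowed in scope.items())

def apply_scope(records, scope=AGREED_SCOPE):
    """Drop records that fall outside the agreed business scope."""
    return [r for r in records if in_scope(r, scope)]
```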

3.5 Cadence And Load Strategy

Daily synchronization is the default for most planning integrations. More frequent synchronization can be supported when the source system, customer access model, and business requirements justify it.

Typical load strategies are:

  • Master and reference data: full refresh or incremental load depending on source support.
  • Open order and inventory snapshot data: full snapshot or incremental load depending on source support.
  • Historical transactional data: one-time historical load followed by incremental daily updates where available.
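The incremental strategy above is typically watermark-based: each run fetches only records changed since the last successful load. The sketch below assumes a hypothetical `changed_at` field on each record; the real change-tracking column depends on the source system.

```python
# Sketch of a watermark-based incremental load (the "incremental daily
# updates" strategy). The 'changed_at' field name is illustrative.
def incremental_load(fetch_changed, watermark):
    """fetch_changed(since) returns records changed after `since`, each
    carrying a 'changed_at' value. Returns (records, new_watermark); the
    watermark only advances when records were actually received."""
    records = fetch_changed(watermark)
    if not records:
        return [], watermark
    new_watermark = max(r["changed_at"] for r in records)
    return records, new_watermark
```

Keeping the watermark unchanged on an empty run avoids skipping late-arriving records on the next synchronization.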

3.6 Future Considerations

  • Webhook- or message-bus-based event push.
  • Write-back from Quiver to customer source systems, subject to separate agreement.