Before diving into Lume, it’s important to familiarize yourself with some foundational concepts that are referenced throughout this documentation.

Generating mapping logic

Mappers

Lume, at its simplest, uses AI to generate mapping logic, called a Mapper, between a source schema and a target schema. At scale, you may create hundreds of these Mappers, so Lume provides Pipelines to organize them.
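For intuition, a Mapper can be pictured as a function from source-shaped records to target-shaped records. The schemas and field names below are purely hypothetical, for illustration only:

```python
# Hypothetical illustration of what generated mapping logic accomplishes.
# A record shaped by a source schema...
source_record = {"first_name": "Ada", "last_name": "Lovelace", "company": "Analytical Engines"}

# ...is transformed into a record shaped by the target schema.
def mapper(record: dict) -> dict:
    return {
        "full_name": f"{record['first_name']} {record['last_name']}",
        "employer": record["company"],
    }

print(mapper(source_record))
# {'full_name': 'Ada Lovelace', 'employer': 'Analytical Engines'}
```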

Pipelines

A Pipeline stores a target schema and is where you pass in source data to be automatically mapped.

Before executing a Mapper generation, then, you must have a Pipeline with a target schema, and you must provide source data. Lume then infers the source schema from the source data and generates the appropriate Mapper. Source data is provided via a Pipeline’s Job.
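As a rough sketch of this setup step, here is a hypothetical Python client; the names (`lume_client`, `Client`, `create_pipeline`) and their parameters are illustrative assumptions, not Lume’s actual SDK:

```python
# Hypothetical sketch; client and method names are illustrative, not Lume's actual SDK.
from lume_client import Client  # hypothetical import

client = Client(api_key="YOUR_API_KEY")

# A Pipeline holds the target schema that its Mappers will map into.
pipeline = client.create_pipeline(
    name="crm-contacts",
    target_schema={
        "type": "object",
        "properties": {
            "full_name": {"type": "string"},
            "employer": {"type": "string"},
        },
        "required": ["full_name"],
    },
)
```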

Jobs

To execute a generation, you create a Job. Jobs track the individual executions of Mapper generation for a given Pipeline. When creating a Job, you attach source data to it, which is the final piece Lume needs to generate the Mapper. You then run the Job to trigger the generation.
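Continuing the same hypothetical sketch, creating and running a Job might look like this (`create_job` and `run` are assumed names, not the actual SDK):

```python
# Hypothetical sketch continuing the example above.
source_data = [
    {"first_name": "Ada", "last_name": "Lovelace", "company": "Analytical Engines"},
]

# Attaching source data gives Lume everything it needs: the Pipeline
# supplies the target schema, the Job supplies the source data.
job = client.create_job(pipeline_id=pipeline.id, source_data=source_data)

# Running the Job triggers Mapper generation.
job.run()
```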

Results

A finished generation of a Mapper is represented by a Result. Since Mappers are generated through Jobs, each Result is inherently connected to a specific Job.

Mappings

Results store the final mapped data, called Mappings. This is the final output data produced by applying the mapping logic to the provided source data.
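Continuing the hypothetical sketch, fetching a Job’s Result and iterating over its Mappings might look like the following (the `get_result` method and `mappings` attribute are assumed names):

```python
# Hypothetical sketch; method and attribute names are illustrative.
result = job.get_result()

# Mappings are the mapped records produced by applying the generated
# mapping logic to the Job's source data.
for record in result.mappings:
    print(record)
# e.g. {'full_name': 'Ada Lovelace', 'employer': 'Analytical Engines'}
```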

Review and editing

Review

To review the mapped data, inspect the Mappings found in a Job’s Result. Each Mapping provides both the mapped data and the mapping logic that produced it.

Workshops

Editing introduces the concept of Workshops. A Workshop represents an editing session, which is helpful because you may make multiple edit iterations on a Mapper after running a Job. The session bundles a Pipeline and a Mapper to be edited together, since they are intrinsically tied: the Mapper maps data to its Pipeline’s designated target schema.

To make edits, you create a Workshop for a Job. Workshops are created per Job because each Job stores source data, and you need source data against which to apply and review your Mapper changes.

After making edits, trigger a run to save them. Once you are done making all edits, deploy the changes, either via Deploy Workshop or via the auto_deploy key in a Workshop edit flow. Deploying updates the Mapper in the corresponding Pipeline for future Jobs and concludes your Workshop session.
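Sketching the full edit loop with the same hypothetical client: `create_workshop`, `edit_mapper`, `run`, and `deploy` are assumed names, and the `auto_deploy` key is the one named above; everything else is illustrative.

```python
# Hypothetical sketch of the Workshop edit loop; names are illustrative.
workshop = client.create_workshop(job_id=job.id)

# Make an edit to the Mapper, e.g. change how one target field is derived.
workshop.edit_mapper(
    target_field="full_name",
    instruction="Uppercase the full name",
)

# Run the Workshop to apply the edits to the Job's source data for review.
workshop.run()

# Deploy the edited Mapper back to the Pipeline for future Jobs.
# Alternatively, pass auto_deploy=True on the edit to deploy in one step.
workshop.deploy()
```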

Core Concept Guides

Navigate to the Knowledge Base, where more detail exists for each core concept.