
Core

gcix is a TypeScript library designed to facilitate the creation of dynamic pipelines for GitLab CI.

This documentation provides examples in both TypeScript and Python.

Configuring your project to use gcix

Your GitLab project requires the following two files:

MyProject
├ .gitlab-ci.(ts|py)
└ .gitlab-ci.yml

The .gitlab-ci.yml file is the one you are familiar with. It is responsible for rendering and triggering the child pipeline created with gcix, and that child pipeline is written in the .gitlab-ci.ts file.

Now, let's examine how the .gitlab-ci.yml file should be structured for this project:

---
generate-pipeline:
  stage: build
  image: node:20
  script:
    - npx projen install:ci
    - npx ts-node .gitlab-ci.ts
  artifacts:
    paths:
      - generated-config.yml
  tags:
    - gcix

run-pipeline:
  stage: deploy
  needs:
    - generate-pipeline
  trigger:
    include:
      - artifact: generated-config.yml
        job: generate-pipeline
    strategy: depend

The pipeline code for gcix is written in the file named .gitlab-ci.ts. The upcoming sections demonstrate how to create this pipeline code.

Alternatively, instead of installing gcix in a Node container, you can use the official Docker image that is released for each tag.

In this case, the initial job would appear as follows:

generate-pipeline:
  stage: build
  image: gcix/gcix:1.0.0
  script: /usr/src/app/docker/gcip.sh
  artifacts:
    paths:
      - generated-config.yml

Hints regarding the following examples

All the code examples in the upcoming chapters are designed to be compatible with [Jest](https://jestjs.io) or [pytest](https://docs.pytest.org).

As an example, a code snippet could be as follows:

import * as gcix from "../../../src";
import { check } from "../../comparison";

test("test", () => {
  const pipeline = new gcix.Pipeline();
  pipeline.addChildren({
    jobsOrJobCollections: [
      new gcix.Job({
        stage: "print_date",
        scripts: ["date"],
      }),
    ],
  });
  check(pipeline.render(), expect);
});

To transform this Jest test into a valid .gitlab-ci.ts file, you need to:

  • Remove the import statement: import { check } from "../../comparison".
  • Place your pipeline code directly in the .gitlab-ci.ts, outside the test("test", () => {}) function.
  • Instead of testing the rendered pipeline with check(pipeline.render(), expect), you should write the generated-config.yml with pipeline.writeYaml().

The equivalent Python example looks like the following:

import gcip
from tests import conftest


def test():
    pipeline = gcip.Pipeline()
    pipeline.add_children(gcip.Job(stage="print_date", script="date"))

    conftest.check(pipeline.render())

To transform this [pytest](https://docs.pytest.org) code into a valid .gitlab-ci.py file, you need to:

  • Remove the import statement: from tests import conftest.
  • Place your pipeline code directly in the .gitlab-ci.py, outside the def test(): function.
  • Instead of testing the rendered pipeline with conftest.check(pipeline.render()), you should write the generated-config.yml with pipeline.write_yaml().

The resulting .gitlab-ci.(ts|py) file, derived from the example, would look like the following:

import * as gcix from "gcix";

const pipeline = new gcix.Pipeline();
pipeline.addChildren({
  jobsOrJobCollections: [
    new gcix.Job({ stage: "print_date", scripts: ["date"] }),
  ],
});

pipeline.writeYaml();
import gcip

pipeline = gcip.Pipeline()
pipeline.add_children(gcip.Job(stage="print_date", script="date"))

pipeline.write_yaml()

Create a pipeline with one job

Input:

import { Pipeline, Job } from "../../../src";
import { check } from "../../comparison";

test("test", () => {
  const pipeline = new Pipeline();
  pipeline.addChildren({
    jobsOrJobCollections: [new Job({ stage: "print_date", scripts: ["date"] })],
  });
  check(pipeline.render(), expect);
});
Keep in mind that, as mentioned in the Hints regarding the following examples, your pipeline code should conclude with pipeline.writeYaml().

import gcip
from tests import conftest


def test():
    pipeline = gcip.Pipeline()
    pipeline.add_children(gcip.Job(stage="print_date", script="date"))

    conftest.check(pipeline.render())
Keep in mind that, as mentioned in the Hints regarding the following examples, your pipeline code should conclude with pipeline.write_yaml().

Output:

stages:
- print_date
print-date:
  stage: print_date
  script:
  - date

Configure jobs

To configure jobs, you can utilize the following methods:

Input:

import * as gcix from "../../../src";
import { check } from "../../comparison";

test("test", () => {
  const pipeline = new gcix.Pipeline();

  const job = new gcix.Job({ stage: "print_date", scripts: ["date"] });
  job.assignImage("docker/image:example");
  job.prependScripts(["./before-script.sh"]);
  job.appendScripts(["./after-script.sh"]);
  job.addVariables({ USER: "Max Power", URL: "https://example.com" });
  job.addTags(["test", "europe"]);
  job.assignArtifacts(new gcix.Artifacts({ paths: ["binaries/", ".config"] }));
  job.appendRules([new gcix.Rule({ ifStatement: "$MY_VARIABLE_IS_PRESENT" })]);

  pipeline.addChildren({ jobsOrJobCollections: [job] });

  check(pipeline.render(), expect);
});
import gcip
from tests import conftest


def test():
    pipeline = gcip.Pipeline()

    job = gcip.Job(stage="print_date", script="date")
    job.set_image("docker/image:example")
    job.prepend_scripts("./before-script.sh")
    job.append_scripts("./after-script.sh")
    job.add_variables(USER="Max Power", URL="https://example.com")
    job.add_tags("test", "europe")
    job.artifacts.add_paths("binaries/", ".config")
    job.append_rules(gcip.Rule(if_statement="$MY_VARIABLE_IS_PRESENT"))

    pipeline.add_children(job)

    conftest.check(pipeline.render())

Output:

stages:
- print_date
print-date:
  image:
    name: docker/image:example
  stage: print_date
  script:
  - ./before-script.sh
  - date
  - ./after-script.sh
  variables:
    USER: Max Power
    URL: https://example.com
  rules:
  - if: $MY_VARIABLE_IS_PRESENT
    when: on_success
    allow_failure: false
  artifacts:
    name: ci_job_name-my-awsome-feature-branch
    paths:
    - binaries/
    - .config
  tags:
  - test
  - europe

Bundling jobs as collections

The class JobCollection allows you to group jobs together to apply a common configuration to all included jobs. This collection shares the same configuration methods as demonstrated in the previous example for individual jobs.

Input:

import * as gcix from "../../../src";
import { check } from "../../comparison";

test("test", () => {
  const collection = new gcix.JobCollection();

  const job1 = new gcix.Job({ stage: "job1", scripts: ["script1.sh"] });
  job1.prependScripts(["from-job-1.sh"]);

  collection.addChildren({
    jobsOrJobCollections: [
      job1,
      new gcix.Job({ stage: "job2", scripts: ["script2.sh"] }),
    ],
  });
  collection.prependScripts(["from-sequence.sh"]);

  const pipeline = new gcix.Pipeline();
  pipeline.addChildren({ jobsOrJobCollections: [collection] });

  check(pipeline.render(), expect);
});
As evident from the output, jobs can have their own configurations (indicated by job1.prependScripts([...])), and they can also inherit common configurations from their collection (indicated by collection.prependScripts([...])).

import gcip
from tests import conftest


def test():
    sequence = gcip.Sequence()

    job1 = gcip.Job(stage="job1", script="script1.sh")
    job1.prepend_scripts("from-job-1.sh")

    sequence.add_children(
        job1,
        gcip.Job(stage="job2", script="script2.sh"),
    )

    sequence.prepend_scripts("from-sequence.sh")

    pipeline = gcip.Pipeline()
    pipeline.add_children(sequence)

    conftest.check(pipeline.render())

As evident from the output, jobs can have their own configurations (indicated by job1.prepend_scripts(...)), and they can also inherit common configurations from their sequence (indicated by sequence.prepend_scripts(...)).

Output:

stages:
- job1
- job2
job1:
  stage: job1
  script:
  - from-sequence.sh
  - from-job-1.sh
  - script1.sh
job2:
  stage: job2
  script:
  - from-sequence.sh
  - script2.sh

Stacking collections

Input:

import { Pipeline, Job, JobCollection } from "../../../src";
import { check } from "../../comparison";

test("test", () => {
  const collectionA = new JobCollection();
  collectionA.addChildren({
    jobsOrJobCollections: [new Job({ stage: "job1", scripts: ["script1.sh"] })],
  });
  collectionA.prependScripts(["from-sequence-a.sh"]);

  const collectionB = new JobCollection();
  collectionB.addChildren({ jobsOrJobCollections: [collectionA] });
  collectionB.addChildren({
    jobsOrJobCollections: [new Job({ stage: "job2", scripts: ["script2.sh"] })],
  });
  collectionB.prependScripts(["from-sequence-b.sh"]);

  const pipeline = new Pipeline();
  pipeline.addChildren({ jobsOrJobCollections: [collectionB] });

  check(pipeline.render(), expect);
});
import gcip
from tests import conftest


def test():
    sequence_a = gcip.Sequence()
    sequence_a.add_children(gcip.Job(stage="job1", script="script1.sh"))
    sequence_a.prepend_scripts("from-sequence-a.sh")

    sequence_b = gcip.Sequence()
    sequence_b.add_children(sequence_a)
    sequence_b.add_children(gcip.Job(stage="job2", script="script2.sh"))
    sequence_b.prepend_scripts("from-sequence-b.sh")

    pipeline = gcip.Pipeline()
    pipeline.add_children(sequence_b)

    conftest.check(pipeline.render())

Output:

stages:
- job1
- job2
job1:
  stage: job1
  script:
  - from-sequence-b.sh
  - from-sequence-a.sh
  - script1.sh
job2:
  stage: job2
  script:
  - from-sequence-b.sh
  - script2.sh

Pipelines are collections

Pipelines are an expanded version of a JobCollection and include all of its capabilities (in addition to pipeline-specific abilities). This includes configuration options and the ability to stack other collections within them.

Input:

import { Pipeline, Job, JobCollection } from "../../../src";
import { check } from "../../comparison";

test("test", () => {
  const sequence_a = new JobCollection();
  sequence_a.addChildren({
    jobsOrJobCollections: [new Job({ stage: "job1", scripts: ["script1.sh"] })],
  });
  sequence_a.prependScripts(["from-sequence.sh"]);

  const pipeline = new Pipeline();
  pipeline.addChildren({ jobsOrJobCollections: [sequence_a] });
  pipeline.addChildren({
    jobsOrJobCollections: [new Job({ stage: "job2", scripts: ["script2.sh"] })],
  });
  pipeline.prependScripts(["from-pipeline.sh"]);

  check(pipeline.render(), expect);
});
import gcip
from tests import conftest


def test():
    sequence_a = gcip.Sequence()
    sequence_a.add_children(gcip.Job(stage="job1", script="script1.sh"))
    sequence_a.prepend_scripts("from-sequence.sh")

    pipeline = gcip.Pipeline()
    pipeline.add_children(sequence_a)
    pipeline.add_children(gcip.Job(stage="job2", script="script2.sh"))
    pipeline.prepend_scripts("from-pipeline.sh")

    conftest.check(pipeline.render())

Output:

stages:
- job1
- job2
job1:
  stage: job1
  script:
  - from-pipeline.sh
  - from-sequence.sh
  - script1.sh
job2:
  stage: job2
  script:
  - from-pipeline.sh
  - script2.sh

Stages allow reuse of jobs and collections

Suppose you intend to reuse a parameterized job. The following code illustrates an incorrect example:

import { Job, Pipeline } from "../../../src";

export function jobFor(environment: string): Job {
  return new Job({
    stage: "do_something",
    scripts: [`./do-something-on.sh ${environment}`],
  });
}

test("test", () => {
  const pipeline = new Pipeline();
  for (const env of ["development", "test"]) {
    pipeline.addChildren({ jobsOrJobCollections: [jobFor(env)] });
  }
  expect(() => {
    pipeline.render();
  }).toThrowError(/Two jobs have the same name/);
});
import pytest

from gcip import Job, Pipeline, JobNameConflictError


def job_for(environment: str) -> Job:
    return Job(stage="do_something", script=f"./do-something-on.sh {environment}")


def test():
    pipeline = Pipeline()
    for env in ["development", "test"]:
        pipeline.add_children(job_for(env))

    with pytest.raises(JobNameConflictError):
        pipeline.render()

When rendering this pipeline, it results in an error.

Error: Two jobs have the same name 'do-something' when rendering the pipeline
Please fix this by providing a different name and/or stage when adding those jobs to their collections/pipeline.

The error arises because both jobs were added with the same name to the pipeline, causing the second job to overwrite the first one.

To avoid such conflicts, when adding jobs or collections to a collection, you should use the .addChildren() method, which accepts the stage property. You can utilize this property to modify the names of the jobs added. The value of stage will be appended to the jobs' name and stage. However, please note that this modification only applies to the jobs or collections added at that moment and not to the jobs and collections already present within the collection.
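As a minimal sketch of this at-add-time behavior (assuming the published gcix npm package; the scripts are made up for illustration):

import * as gcix from "gcix"; // assumption: the published npm package name

const collection = new gcix.JobCollection();

// This job is added WITHOUT a stage value...
collection.addChildren({
  jobsOrJobCollections: [new gcix.Job({ stage: "build", scripts: ["./a.sh"] })],
});

// ...and is NOT renamed by the stage given in this later call.
// Only the job added here gets "special" appended to its name and stage.
collection.addChildren({
  jobsOrJobCollections: [new gcix.Job({ stage: "build", scripts: ["./b.sh"] })],
  stage: "special",
});

Following the naming pattern shown in the outputs below, the first job would render as build, while the second would become special-build in stage build_special.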

Reuse jobs

Input:

import { Pipeline, Job } from "../../../src";
import { check } from "../../comparison";

export function jobFor(environment: string): Job {
  return new Job({
    stage: "do_something",
    scripts: [`./do-something-on.sh ${environment}`],
  });
}

test("test", () => {
  const pipeline = new Pipeline();
  for (const env of ["development", "test"]) {
    pipeline.addChildren({ jobsOrJobCollections: [jobFor(env)], stage: env });
  }

  check(pipeline.render(), expect);
});
import gcip
from tests import conftest


def job_for(environment: str) -> gcip.Job:
    return gcip.Job(stage="do_something", script=f"./do-something-on.sh {environment}")


def test():
    pipeline = gcip.Pipeline()
    for env in ["development", "test"]:
        pipeline.add_children(job_for(env), stage=env)

    conftest.check(pipeline.render())

This time no error occurs, because we added both jobs to the pipeline with different stage values. By doing so, the output correctly contains one job per environment, ensuring that each job is appropriately associated with its respective environment.

Output:

stages:
- do_something_development
- do_something_test
development-do-something:
  stage: do_something_development
  script:
  - ./do-something-on.sh development
test-do-something:
  stage: do_something_test
  script:
  - ./do-something-on.sh test

Reuse collections

Namespacing significantly enhances the reusability of collections. You can encapsulate an entire GitLab CI pipeline within a collection and then reuse that collection for each environment. By repeating the collection within a loop for all environments, namespacing ensures that all jobs of the collection are populated uniquely for each environment, enabling efficient configuration management and deployment.

Input:

import { Pipeline, JobCollection, Job } from "../../../src";
import { check } from "../../comparison";

export function environmentPipeline(environment: string): JobCollection {
  const collection = new JobCollection();
  collection.addChildren({
    jobsOrJobCollections: [
      new Job({ stage: "job1", scripts: [`job-1-on-${environment}`] }),
      new Job({ stage: "job2", scripts: [`job-2-on-${environment}`] }),
    ],
  });
  return collection;
}

test("test", () => {
  const pipeline = new Pipeline();
  for (const env of ["development", "test"]) {
    pipeline.addChildren({
      jobsOrJobCollections: [environmentPipeline(env)],
      stage: env,
    });
  }
  check(pipeline.render(), expect);
});
import gcip
from tests import conftest


def environment_pipeline(environment: str) -> gcip.Sequence:
    sequence = gcip.Sequence()
    sequence.add_children(
        gcip.Job(stage="job1", script=f"job-1-on-{environment}"),
        gcip.Job(stage="job2", script=f"job-2-on-{environment}"),
    )
    return sequence


def test():
    pipeline = gcip.Pipeline()
    for env in ["development", "test"]:
        pipeline.add_children(environment_pipeline(env), stage=env)

    conftest.check(pipeline.render())

Output:

stages:
- job1_development
- job2_development
- job1_test
- job2_test
development-job1:
  stage: job1_development
  script:
  - job-1-on-development
development-job2:
  stage: job2_development
  script:
  - job-2-on-development
test-job1:
  stage: job1_test
  script:
  - job-1-on-test
test-job2:
  stage: job2_test
  script:
  - job-2-on-test

Parallelization - name, stage

As evident from the previous examples, all jobs possess a distinct stage, causing them to run sequentially. This behavior occurs because the stage property always extends the job's name and stage. This principle applies universally to all stage properties, be it for the constructor of a Job object or the .add_*() methods of a collection.

When adding jobs to a collection, whether directly or within another collection, you may want to extend only the names of the jobs while leaving their stages unchanged. This approach ensures that jobs with equal stages can run in parallel.

To achieve this, you can set identical values for the stage property while providing different values for the name property when creating jobs or adding them to collections. By doing so, the name property will extend only the name of a job without affecting its stage.

name property when creating jobs

Input:

import { Job, Pipeline } from "../../../src";
import { check } from "../../comparison";

test("test", () => {
  const pipeline = new Pipeline();
  pipeline.addChildren({
    jobsOrJobCollections: [
      new Job({ name: "job1", stage: "single-stage", scripts: ["date"] }),
      new Job({ name: "job2", stage: "single-stage", scripts: ["date"] }),
    ],
  });

  check(pipeline.render(), expect);
});
import gcip
from tests import conftest


def test():
    pipeline = gcip.Pipeline()
    pipeline.add_children(
        gcip.Job(name="job1", stage="single-stage", script="date"),
        gcip.Job(name="job2", stage="single-stage", script="date"),
    )

    conftest.check(pipeline.render())

Output:

stages:
- single_stage
job1-single-stage:
  stage: single_stage
  script:
  - date
job2-single-stage:
  stage: single_stage
  script:
  - date

In this scenario, we have chosen an equal value for the stage parameter, ensuring that both jobs have the same stage. To prevent their name values from being identical (and risking the second job overwriting the first one), we have also provided the name property. The name property's value will be appended to the existing name of the jobs. Consequently, both jobs will run in parallel within the same stage.

You might wonder why there is no property that extends only the stage of a job. When considering collections, the stage property extends both the name and the stage of a job, while the name property extends only the name. Extending means appending values to the current name or stage value of a job. However, there is no practical reason to extend only the stage, as that would yield two jobs with distinct stages but equal names. In GitLab CI, a job must have a unique name, so extending just the stage wouldn't serve any purpose. Therefore, the consistent concept of using only the name and stage properties applies to both jobs and collections.

As for why the stage property was not simply omitted when creating the jobs: as explained in the previous paragraph, we cannot directly set the stage value of a job. Omitting the stage property means leaving it unset, in which case GitLab CI defaults the jobs to the test stage. To define a stage other than test, we used the stage property. Yes, this implies that the job's name will include the value of the stage. However, this design decision conveys the concept of name and stage more clearly than a dedicated stage property for jobs would, especially when collections lack such a (superfluous) stage property.

No worries! Here's a simple guide to keep in mind when creating Jobs (a short sketch illustrating it follows the list):

  1. For distinct jobs that will run in separate stages within a collection, set different values only for the stage property.
  2. For distinct jobs that will run in parallel with equal stages, set different values only for the name property.
  3. For distinct jobs that will run in parallel with equal stages and a defined stage name, set different values for the name properties but equal values for the stage properties.
  4. Setting different values for both properties is not advisable and will result in the first scenario of distinct jobs running in separate stages within a collection.
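The following is a minimal sketch of rules 1 to 3, assuming gcix is imported from the published npm package; the job names and scripts are made up for illustration:

import * as gcix from "gcix"; // assumption: the published npm package name

const pipeline = new gcix.Pipeline();
pipeline.addChildren({
  jobsOrJobCollections: [
    // Rule 1: different stage values only -> jobs run in separate stages.
    new gcix.Job({ stage: "build", scripts: ["./build.sh"] }),
    new gcix.Job({ stage: "deploy", scripts: ["./deploy.sh"] }),
    // Rule 2: different name values only -> jobs run in parallel
    // (GitLab CI places jobs without a stage into its default test stage).
    new gcix.Job({ name: "lint", scripts: ["./lint.sh"] }),
    new gcix.Job({ name: "audit", scripts: ["./audit.sh"] }),
    // Rule 3: different names, equal stages -> parallel within stage "check".
    new gcix.Job({ name: "unit", stage: "check", scripts: ["./unit.sh"] }),
    new gcix.Job({ name: "e2e", stage: "check", scripts: ["./e2e.sh"] }),
  ],
});

pipeline.writeYaml();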

name parameter when adding jobs (and collections) to collections

Let's consider the collection example from the chapter Stages allow reuse of jobs and collections. Instead of using the stage parameter when adding the collection multiple times to the pipeline, we will now utilize the name parameter.

Input:

import { Job, JobCollection, Pipeline } from "../../../src";
import { check } from "../../comparison";

export function environmentPipeline(environment: string) {
  const collection = new JobCollection();
  collection.addChildren({
    jobsOrJobCollections: [
      new Job({ stage: "job1", scripts: [`job-1-on-${environment}`] }),
      new Job({ stage: "job2", scripts: [`job-2-on-${environment}`] }),
    ],
  });
  return collection;
}

test("test", () => {
  const pipeline = new Pipeline();
  for (const env of ["development", "test"]) {
    pipeline.addChildren({
      jobsOrJobCollections: [environmentPipeline(env)],
      name: env,
    });
  }

  check(pipeline.render(), expect);
});
import gcip
from tests import conftest


def environment_pipeline(environment: str) -> gcip.Sequence:
    sequence = gcip.Sequence()
    sequence.add_children(
        gcip.Job(stage="job1", script=f"job-1-on-{environment}"),
        gcip.Job(stage="job2", script=f"job-2-on-{environment}"),
    )
    return sequence


def test():
    pipeline = gcip.Pipeline()
    for env in ["development", "test"]:
        pipeline.add_children(environment_pipeline(env), name=env)

    conftest.check(pipeline.render())

Now the environments run in parallel, because only the job names, not the stage names, are populated per environment.

Output:

stages:
- job1
- job2
development-job1:
  stage: job1
  script:
  - job-1-on-development
development-job2:
  stage: job2
  script:
  - job-2-on-development
test-job1:
  stage: job1
  script:
  - job-1-on-test
test-job2:
  stage: job2
  script:
  - job-2-on-test

You can also combine the usage of stage and name when adding jobs. This approach is particularly useful when dealing with a large number of jobs, where some groups of jobs should run sequentially while jobs within each group should run in parallel. Here's an example to illustrate this scenario:

Input:

import { Job, Pipeline } from "../../../src";
import { check } from "../../comparison";

export function jobFor(service: string): Job {
  return new Job({
    stage: "update_service",
    scripts: [`./update-service.sh ${service}`],
  });
}

test("test", () => {
  const pipeline = new Pipeline();
  for (const env of ["development", "test"]) {
    for (const service of ["service1", "service2"]) {
      pipeline.addChildren({
        jobsOrJobCollections: [jobFor(`${service}_${env}`)],
        stage: env,
        name: service,
      });
    }
  }
  check(pipeline.render(), expect);
});
import gcip
from tests import conftest


def job_for(service: str) -> gcip.Job:
    return gcip.Job(stage="update_service", script=f"./update-service.sh {service}")


def test():
    pipeline = gcip.Pipeline()
    for env in ["development", "test"]:
        for service in ["service1", "service2"]:
            pipeline.add_children(job_for(f"{service}_{env}"), stage=env, name=service)

    conftest.check(pipeline.render())

The output shows that two services are being updated in parallel but within consecutive stages.

Output:

stages:
- update_service_development
- update_service_test
service1-development-update-service:
  stage: update_service_development
  script:
  - ./update-service.sh service1_development
service2-development-update-service:
  stage: update_service_development
  script:
  - ./update-service.sh service2_development
service1-test-update-service:
  stage: update_service_test
  script:
  - ./update-service.sh service1_test
service2-test-update-service:
  stage: update_service_test
  script:
  - ./update-service.sh service2_test