Pipeline

# Pipeline Introduction

A Pipeline represents a build workflow consisting of one or more Stages that are executed sequentially. For more information, refer to the Stage Introduction.

A basic configuration for a Pipeline is as follows:

name: Pipeline Name
docker:
  image: node
  build: dev/Dockerfile
  volumes:
    - /root/.npm:copy-on-write
git:
  enable: true
  submodules: true
services:
  - docker
env:
  TEST_KEY: TEST_VALUE
imports:
  - https://xxx/envs.yml
  - ./env.txt
label:
  type: MASTER
  class: MAIN
stages:
  - name: stage 1
    script: echo "stage 1"
  - name: stage 2
    script: echo "stage 2"
  - name: stage 3
    script: echo "stage 3"
failStages:
  - name: fail stage 1
    script: echo "fail stage 1"
  - name: fail stage 2
    script: echo "fail stage 2"
endStages:
  - name: end stage 1
    script: echo "end stage 1"
  - name: end stage 2
    script: echo "end stage 2"
ifModify:
  - a.txt
  - "src/**/*"
retry: 3
allowFailure: false

# name

  • type: String

Specifies the name of the pipeline. The default name is pipeline. When multiple pipelines run in parallel, they are named pipeline, pipeline-1, pipeline-2, and so on by default. You can set a name to distinguish between pipelines.
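
Example (a minimal sketch; the pipeline and stage names are illustrative): two pipelines run in parallel on push, and naming them makes their records easy to tell apart.

main:
  push:
    - name: lint-pipeline
      stages:
        - name: lint
          script: npm run lint
    - name: test-pipeline
      stages:
        - name: test
          script: npm run test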

# runner

  • type: Object

Specifies parameters related to the build machine.

  • tags: Optional. Specifies the tags of the build machine to be used.
  • cpus: Optional. Specifies the number of CPU cores required for the build.

# tags

  • type: String | Array<String>
  • default: cnb:arch:default

Specifies the tags of the build machine to be used.

The following official build machine tags are available in SaaS scenarios:

  1. cnb:arch:amd64 represents an AMD64 architecture build machine.
  2. cnb:arch:arm64:v8 represents an ARM64/v8 architecture build machine.

Note: cnb:arch:default represents the default-architecture build machine. In the SaaS scenario, it is equivalent to cnb:arch:amd64.

Example:

main:
  push:
    - runner:
        tags: cnb:arch:arm64:v8
      stages:
        - name: uname
          script: uname -a

# cpus

  • type: Number

Specifies the maximum number of CPU cores available to the build (memory = CPU cores × 2 GB). The requested CPU and memory cannot exceed the actual capacity of the runner machine.

If not configured, the maximum available number of CPU cores is determined by the configuration of the runner machine to which it is allocated.

Example:

# cpus = 1, memory = 2G
main:
  push:
    - runner:
        cpus: 1
      stages:
        - name: echo
          script: echo "hello world"

# docker

  • type: Object

Specifies the parameters related to Docker. For more details, refer to build-env.

  • image: Specifies the environment image for the current Pipeline. All tasks within this Pipeline will be executed in this image environment.
  • build: Specifies a Dockerfile to build a temporary image to be used as the value for image.
  • volumes: Declares volumes for caching scenarios.

# image

  • type: Object | String

Specifies the environment image for the current Pipeline. All tasks within this Pipeline will be executed in this image environment. It supports referencing environment variables.

  • image.name: String. The image name, such as node:20.
  • image.dockerUser: String. The Docker username used to pull the specified image.
  • image.dockerPassword: String. The Docker password used to pull the specified image.

If image is specified as a string, it is equivalent to specifying image.name.

Example 1: Using a public image:

main:
  push:
    - docker:
        # Use the node:20 image from the official Docker registry as the build container
        image: node:20
      stages:
        - name: show version
          script: node -v

Example 2: Using a private image from the cnb product library:

main:
  push:
    - docker:
        # To use a non-public image as the build container, you need to enter the docker username and password.
        image:
          name: docker.cnb.cool/images/pipeline-env:1.0
          # Environment variables injected by default when using CI build
          dockerUser: $CNB_TOKEN_USER_NAME
          dockerPassword: $CNB_TOKEN
      stages:
        - name: echo
          script: echo "hello world"

Example 3: Using a private image from the official Docker image source:

main:
  push:
    - imports: https://xxx/docker.yml
      docker:
        # To use a non-public image as the build container, you need to enter the docker username and password.
        image:
          name: images/pipeline-env:1.0
          # Environment variables imported in docker.yml
          dockerUser: $DOCKER_USER
          dockerPassword: $DOCKER_PASSWORD
      stages:
        - name: echo
          script: echo "hello world"

docker.yml

DOCKER_USER: user
DOCKER_PASSWORD: password

# build

  • type: Object | String

Specifies a Dockerfile to build a temporary image to be used as the value for image. It supports referencing environment variables.

  • build.dockerfile:

    • type: String

    Specifies the path to the Dockerfile.

  • build.target:

    • type: String

    Corresponds to the --target parameter of docker build. It builds a specific stage in the Dockerfile instead of the entire Dockerfile.

  • build.by:

    • type: Array<String> | String

    Declares the list of files that the cached build depends on.

    Note: Apart from the Dockerfile itself, files not listed in by are treated as non-existent during the image build.

    When using the String type, multiple files can be separated by commas.

  • build.versionBy:

    • type: Array<String> | String

    Used for version control. When the content of the specified files changes, the build is considered a new version. The version is calculated as sha1(dockerfile + versionBy + buildArgs).

    When using the String type, multiple files can be separated by commas.

  • build.buildArgs:

    • type: Object

    Inserts additional build arguments during the build (--build-arg $key=$value). If a value is null, only the key is added (--build-arg $key). See the sketch after this list.

  • build.ignoreBuildArgsInVersion:

    • type: Boolean

    Specifies whether to ignore buildArgs in the version calculation. See versionBy for details.

  • build.sync:

    • type: Boolean
    • default: false

    Specifies whether to wait for a successful docker push before continuing.

If build is specified as a string, it is equivalent to specifying build.dockerfile.
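
As a sketch of buildArgs (the argument names and Dockerfile path are illustrative), the following configuration passes --build-arg NODE_VERSION=20 and --build-arg SKIP_TESTS to the image build:

main:
  push:
    - docker:
        build:
          dockerfile: ./image/Dockerfile
          buildArgs:
            NODE_VERSION: 20
            # A null value adds only the key: --build-arg SKIP_TESTS
            SKIP_TESTS:
      stages:
        - name: echo
          script: echo "hello world"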

Usage of Dockerfile:

main:
  push:
    - docker:
        # Specify the build environment using Dockerfile
        build: ./image/Dockerfile
      stages:
        - stage1
        - stage2
        - stage3

main:
  push:
    - docker:
        # Specify the build environment using Dockerfile
        build:
          dockerfile: ./image/Dockerfile
          dockerImageName: cnb.cool/project/images/pipeline-env
          dockerUser: $DOCKER_USER
          dockerPassword: $DOCKER_PASSWORD
      stages:
        - stage1
        - stage2
        - stage3

main:
  push:
    - docker:
        # Specify the build environment using Dockerfile
        build:
          dockerfile: ./image/Dockerfile
          # Build only the builder, not the entire Dockerfile
          target: builder
      stages:
        - stage1
        - stage2
        - stage3

Usage of Dockerfile versionBy:

Example: Cache pnpm in the environment image to speed up the subsequent pnpm i process.

main:
  push:
    - docker:
        build:
          dockerfile: ./Dockerfile
          versionBy:
            - package-lock.json
      stages:
        - name: pnpm i
          script: pnpm i
        - stage1
        - stage2
        - stage3

Dockerfile

FROM node:20

# Replace with the actual source
RUN npm config set registry https://xxx.com/npm/ \
  && npm i -g pnpm \
  && pnpm config set store-dir /lib/pnpm

WORKDIR /data/orange-ci/workspace

COPY package.json package-lock.json ./

RUN pnpm i

Log output of the pnpm i stage (dependencies are reused from the pnpm store cached in the image):

[pnpm i] Progress: resolved 445, reused 444, downloaded 0, added 100
[pnpm i] Progress: resolved 445, reused 444, downloaded 0, added 141
[pnpm i] Progress: resolved 445, reused 444, downloaded 0, added 272
[pnpm i] Progress: resolved 445, reused 444, downloaded 0, added 444, done
[pnpm i] 
[pnpm i] dependencies:
[pnpm i] + mocha 8.4.0 (10.0.0 is available)
[pnpm i] 
[pnpm i] devDependencies:
[pnpm i] + babel-eslint 9.0.0 (10.1.0 is available)
[pnpm i] + eslint 5.16.0 (8.23.0 is available)
[pnpm i] + jest 27.5.1 (29.0.2 is available)
[pnpm i] 
[pnpm i] 
[pnpm i] Job finished, duration: 6.8s

# volumes

  • type: Array<String> | String

Declares volumes for caching scenarios. Multiple volumes can be specified as an array or separated by commas in a string. It supports referencing environment variables. The supported formats are:

  1. <group>:<path>:<type>
  2. <path>:<type>
  3. <path>

Meaning of each item:

  • group: Optional. Volume group, used to isolate volumes between different groups.
  • path: Required. The mount path, either absolute or relative (starting with ./); relative paths are resolved against the workspace.
  • type: Optional. The volume type. The default value is copy-on-write. Supported types are:
    • read-write or rw: Read-write; concurrent write conflicts must be handled manually. Suitable for serial build scenarios.
    • read-only or ro: Read-only; write operations will throw exceptions.
    • copy-on-write or cow: Read-write; changes (additions, modifications, deletions) are merged back after a successful build. Suitable for concurrent build scenarios.
    • copy-on-write-read-only: Like copy-on-write, but changes are discarded after the build, so the underlying cache is never updated.
    • data: Creates a temporary data volume that is automatically cleaned up when the pipeline ends.

# copy-on-write

copy-on-write is used in caching scenarios and supports concurrency.

Copy-on-write technology lets the system share a single copy of the data until a modification is made, enabling efficient cache replication. In a concurrent environment, this avoids conflicts between read and write operations on the cache, because a private copy of the data is created only when a modification is actually needed. As a result, only write operations incur data copying, while read operations can safely proceed in parallel without data-consistency concerns. This mechanism significantly improves performance, especially in cache scenarios where reads outnumber writes.

# data

data is used for sharing data: it makes a specified directory within a container available to other containers.

This is achieved by creating a data volume and mounting it into the relevant containers. Unlike directly mounting a directory from the build machine into a container, if the specified directory already exists within the container, its existing content is automatically copied to the data volume before mounting. This ensures that the content of the data volume is not simply overwritten by the content of the container directory.

# Volumes Example

Example 1: Mounting directories from the build node into containers to achieve a local caching effect

main:
  push:
    - docker:
        image: node:20
        # Declare volumes
        volumes:
          - /data/config:read-only
          - /data/mydata:read-write
          # Use cache and update simultaneously
          - /root/.npm
          # Use main cache and update simultaneously
          - main:/root/.gradle:copy-on-write
      stages:
        - stage1
        - stage2
        - stage3
  pull_request:
    - docker:
        image: node:20
        # Declare volumes
        volumes:
          - /data/config:read-only
          - /data/mydata:read-write
          # Use copy-on-write cache
          - /root/.npm
          - node_modules
          # Use main cache for PR, but do not update
          - main:/root/.gradle:copy-on-write-read-only
      stages:
        - stage1
        - stage2
        - stage3

Example 2: Sharing files packaged within a container with other containers

# .cnb.yml
main:
  push:
    - docker:
        image: go-app-cli # Assuming there is a go application at /go-app/cli in the image
        # Declare volumes
        volumes:
          # This path exists in the go-app-cli image, so when the job container starts, its content is copied to a temporary data volume that can be shared with other task containers
          - /go-app
      stages:
        - name: show /go-app-cli in job container
          image: alpine
          script: ls /go-app

# git

  • type: Object

Provides Git repository configuration.

# git.enable

  • type: Boolean
  • default: true

Specifies whether to fetch the code.

For the branch.delete event, the default value is false; for other events, it is true.

# git.submodules

  • type: Object | Boolean
  • default: true

Specifies whether to fetch submodules.

An Object can be used to specify individual parameters; missing fields take their default values:

{
  "enable": true,
  "remote": false
}

Basic usage:

main:
  push:
    - git:
        enable: true
        submodules: true
      stages:
        - name: echo
          script: echo "hello world"
    - git:
        enable: true
        submodules:
          enable: true
          remote: true
      stages:
        - name: echo
          script: echo "hello world"

# git.submodules.enable

Specifies whether to fetch submodules.

# git.submodules.remote

When enabled, the --remote parameter is added to git submodule update, so that the latest code of each submodule is fetched on every run.

# services

  • type: Array<String>

Declares the services required during the build. The format is name[:version], where version is optional.

Currently supported services include:

  • docker
  • vscode

# service:docker

Enables the Docker-in-Docker (dind) service.

Declare it when you need to perform operations such as docker build or docker login during the build. The docker daemon and docker cli are automatically injected into the environment.

Example:

main:
  push:
    - services:
        - docker
      docker:
        image: alpine
      stages:
        - name: docker info
          script:
            - docker info
            - docker ps

# service:vscode

Declare this service to enable remote development.

Example:

$:
  vscode:
    - services:
        - vscode
        - docker
      docker:
        image: alpine
      stages:
        - name: uname
          script: uname -a

# env

  • type: Object

Specifies environment variables. You can define a set of variables for use during task execution; they are available to all non-plugin tasks within the current Pipeline.
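
Example (a minimal sketch; the variable names and values are illustrative):

main:
  push:
    - env:
        NODE_ENV: production
        TEST_KEY: TEST_VALUE
      stages:
        - name: echo
          script: echo $TEST_KEY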

# imports

  • type: Array<String> | String

By specifying the path of a file in the current repository or in another CNB Git repository, you can load that file as a source of environment variables. Local paths, such as env.yml, are resolved into remote file addresses before loading.

Typically, a private key repository is used to store credentials such as npm and Docker account passwords.

Note: Ineffective for plugin tasks.

"Cloud Native Build" now supports key repositories, with higher security.

Supported file formats:

  • yaml: Parses files with the .yml or .yaml extension. All property names under the root are exported as environment variables.
  • json: Parses files with the .json extension. All property names at the root level are exported as environment variables.
  • plain: Each line is in the key=value format. Used for files with any other extension. (Not recommended)

Priority of keys with the same name:

  • When the imports configuration is an array and the same parameter appears more than once, later configurations override earlier ones.
  • If a parameter also appears in env, the value in env overrides the one from the imports files.

Variable assignment:

The file paths specified in imports may reference environment variables. When imports is an array, a later path can reference variables defined in the files loaded before it.

main:
  push:
    - imports:
        - ./env1.json
        - $FILE
        - https://xxx/xxs.yml
      stages:
        - name: echo
          script: echo $TEST_ENV

For reference on configuration file access control, please see Configuration File Reference Authorization.

Example:

team_name/project_name/* matches all repositories under a project:

key: value

allow_slugs:
  - team_name/project_name/*

Allowing all repositories to reference the file:

key: value

allow_slugs:
  - "**"

# label

  • type: Object

Specifies labels for the pipeline. Each label can have a string value or an array of strings. These labels can be used for filtering pipeline records and other functionalities.

Here is an example workflow: merging into the main branch triggers a pre-release deployment, and pushing a tag triggers a production release.

main:
  push:
    - label:
        # Regular pipeline for the master branch
        type:
          - MASTER
          - PREVIEW
      stages:
        - name: install
          script: npm install
        - name: CCK-lint
          script: npm run lint
        - name: BVT-build
          script: npm run build
        - name: UT-test
          script: npm run test
        - name: pre release
          script: ./pre-release.sh

$:
  tag_push:
    - label:
        # Regular pipeline for product releases
        type: RELEASE
      stages:
        - name: install
          script: npm install
        - name: build
          script: npm run build
        - name: DELIVERY-release
          script: ./release.sh

# stages

  • type: Array<Job>

Defines a set of stage tasks that run sequentially.

# failStages

  • type: Array<Job>

Defines a set of stage tasks that are executed sequentially when the normal flow fails.
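
Example (a minimal sketch; the notification script is illustrative): when the build stage fails, the failure flow sends a notification.

main:
  push:
    - stages:
        - name: build
          script: npm run build
      failStages:
        - name: notify failure
          script: ./notify-failure.sh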

# endStages

  • type: Array<Job>

Defines a set of stage tasks to be executed at the end of the pipeline. After the pipeline's stages/failStages have completed, these tasks are executed in sequence before the pipeline ends.

As long as the pipeline's prepare phase succeeds, endStages will be executed regardless of whether the stages succeed. The result of endStages does not affect the pipeline's status: even if endStages fail, the pipeline can still be reported as successful.

If the pipeline is stopped via the stop button, endStages will not be executed.
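
Example (a minimal sketch; the cleanup script is illustrative): the cleanup stage runs whether the build stage succeeds or fails.

main:
  push:
    - stages:
        - name: build
          script: npm run build
      endStages:
        - name: cleanup
          script: ./cleanup.sh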

# ifNewBranch

  • type: Boolean
  • default: false

If set to true, the pipeline will only be executed if the current branch is a new branch (i.e., CNB_IS_NEW_BRANCH is true).

When both ifNewBranch and ifModify are present, if either condition is met, the pipeline will be executed.
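
Example (a minimal sketch): this pipeline runs only when the push creates a new branch.

main:
  push:
    - ifNewBranch: true
      stages:
        - name: echo
          script: echo "first build for this branch"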

# ifModify

  • type: Array<String> | String

Specifies that the pipeline should be executed only when matching files change. Accepts a glob expression string or an array of such strings.

Example 1:

The pipeline will be executed if the modified file list includes a.js or b.js.

ifModify:
  - a.js
  - b.js

Example 2:

The pipeline will be executed if the modified file list includes files with the .js extension. **/*.js matches all .js files in subdirectories, while *.js matches .js files in the root directory.

ifModify:
  - "**/*.js"
  - "*.js"

Example 3:

Reverse matching: exclude the legacy directory and all Markdown files; trigger when any other file changes.

ifModify:
  - "**"
  - "!(legacy/**)"
  - "!(**/*.md)"
  - "!*.md"

Example 4:

Reverse matching: trigger the pipeline when files under the src directory change, except those under src/legacy.

ifModify:
  - "src/**"
  - "!(src/legacy/**)"

# Supported Events

  • For push events on non-new branches, the changes between before and after are analyzed to determine the modified files.
  • When a cnb:apply event is triggered from a pipeline that was started by a push event on a non-new branch, the same file-change analysis is performed.
  • For events triggered by a pull request (PR), the files modified within the pull request are analyzed.
  • When a cnb:apply event is triggered from an event that was itself triggered by a pull request, the files modified within the pull request are likewise analyzed.

Because the number of modified files can be large, at most 300 files are included in the calculation.

# breakIfModify

  • type: Boolean
  • default: false

If set to true, the build will be terminated if the source branch has been updated before the job execution.
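
Example (a minimal sketch): if the source branch receives new commits before the job starts, the build is terminated rather than run against outdated code.

main:
  push:
    - breakIfModify: true
      stages:
        - name: build
          script: npm run build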

# retry

  • type: Number
  • default: 0

Specifies the number of retry attempts for failed jobs. A value of 0 means no retries will be performed.
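
Example (a minimal sketch): each failed job is retried up to two times before the pipeline is marked as failed.

main:
  push:
    - retry: 2
      stages:
        - name: test
          script: npm run test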

# allowFailure

  • type: Boolean
  • default: false

Specifies whether the current pipeline is allowed to fail.

When set to true, the failure status of the pipeline will not be reported to CNB.
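
Example (a minimal sketch; the script is illustrative): an experimental pipeline whose failure is not reported to CNB.

main:
  push:
    - allowFailure: true
      stages:
        - name: experimental check
          script: ./experimental-check.sh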

# lock

  • type: Object | Boolean

Sets a lock for the pipeline. The lock will be automatically released after the pipeline execution. Locks cannot be used across repositories.

Behavior: when pipeline A holds the lock and pipeline B requests it, pipeline B can either terminate pipeline A, or wait for pipeline A to finish and release the lock before continuing execution.

  • key:

    • type: String

    Custom lock name. By default, it is set to branchName-pipelineName, which means the lock scope is the current pipeline.

  • expires:

    • type: Number
    • default: 3600 (1 hour)

    Lock expiration time in seconds. After expiration, the lock will be automatically released.

  • timeout:

    • type: Number
    • default: 3600 (1 hour)

    Timeout in seconds for waiting for the lock.

  • cancel-in-progress:

    • type: Boolean
    • default: false

    Whether to terminate the pipeline that currently holds the lock or is waiting for the lock, allowing the current pipeline to acquire the lock and execute.

  • wait:

    • type: Boolean
    • default: false

    Whether to wait if the lock is already held; waiting does not consume pipeline resources or execution time. If set to false, an error is thrown immediately. Cannot be used together with cancel-in-progress.

  • cancel-in-wait:

    • type: Boolean
    • default: false

    Whether to terminate pipelines that are already waiting for the lock and add the current pipeline to the waiting queue. Must be used together with the wait attribute.

Example 1: Lock as a Boolean value

main:
  push:
    - lock: true
      stages:
        - name: stage1
          script: echo "stage1"

Example 2: Lock as an Object

main:
  push:
    - lock:
        key: key
        expires: 600 # 10 minutes
        wait: true
        timeout: 60 # maximum wait time of 1 minute
      stages:
        - name: stage1
          script: echo "stage1"

Example 3: Stop the previously running pipeline for a pull_request

main:
  pull_request:
    - lock:
        key: pr
        cancel-in-progress: true
      stages:
        - name: echo hello
          script: echo "stage1"
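
Example 4: Wait for the lock and evict other waiters (a sketch assuming the cancel-in-wait semantics described above; the lock key and deploy script are illustrative)

main:
  push:
    - lock:
        key: deploy
        wait: true
        # Terminate pipelines already waiting for the lock, then join the queue
        cancel-in-wait: true
      stages:
        - name: deploy
          script: ./deploy.sh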