# Pipeline Introduction
A `Pipeline` represents a build workflow consisting of one or more `Stages` that are executed sequentially. For more information, refer to the Stage Introduction.

A basic configuration for a `Pipeline` is as follows:
```yaml
name: Pipeline Name
docker:
  image: node
  build: dev/Dockerfile
  volumes:
    - /root/.npm:copy-on-write
git:
  enable: true
  submodules: true
  lfs: true
services:
  - docker
env:
  TEST_KEY: TEST_VALUE
imports:
  - https://xxx/envs.yml
  - ./env.txt
label:
  type: MASTER
  class: MAIN
stages:
  - name: stage 1
    script: echo "stage 1"
  - name: stage 2
    script: echo "stage 2"
  - name: stage 3
    script: echo "stage 3"
failStages:
  - name: fail stage 1
    script: echo "fail stage 1"
  - name: fail stage 2
    script: echo "fail stage 2"
endStages:
  - name: end stage 1
    script: echo "end stage 1"
  - name: end stage 2
    script: echo "end stage 2"
ifModify:
  - a.txt
  - "src/**/*"
retry: 3
allowFailure: false
```
# name
- type: `String`

Specifies the name of the pipeline. The default name is `pipeline`. When multiple pipelines run in parallel, the default names are `pipeline`, `pipeline-1`, `pipeline-2`, and so on. You can define `name` to differentiate between pipelines.
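A minimal sketch (the pipeline and stage names are illustrative): two pipelines triggered by the same push event, each given an explicit `name` so their records are easy to tell apart.

```yaml
main:
  push:
    - name: lint
      stages:
        - name: eslint
          script: npm run lint
    - name: unit-test
      stages:
        - name: jest
          script: npm test
```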
# runner
- type: `Object`

Specifies build machine-related parameters.

- `tags`: Optional. Specifies the tags of the build machine to be used.
- `cpus`: Optional. Specifies the number of CPU cores required for the build.
# tags
- type: `String` | `Array<String>`
- default: `cnb:arch:default`

Specifies the tags of the build machine to be used.

The following official build machine tags are available in SaaS scenarios:

- `cnb:arch:amd64`: an AMD64 architecture build machine.
- `cnb:arch:arm64:v8`: an ARM64/v8 architecture build machine.

Note: `cnb:arch:default` represents the default architecture build machine. In SaaS scenarios, it is equivalent to `cnb:arch:amd64`.

Example:

```yaml
main:
  push:
    - runner:
        tags: cnb:arch:arm64:v8
      stages:
        - name: uname
          script: uname -a
```
# cpus
- type: `Number`

Specifies the maximum number of CPU cores for the build (memory = CPU cores * 2G); the CPU and memory cannot exceed the actual capacity of the runner machine. If not configured, the maximum number of CPU cores is determined by the runner machine's configuration.

Example:

```yaml
# cpus = 1, memory = 2G
main:
  push:
    - runner:
        cpus: 1
      stages:
        - name: echo
          script: echo "hello world"
```
# docker
- type: `Object`

Specifies Docker-related parameters. For more details, refer to build-env.

- `image`: Specifies the environment image for the current `Pipeline`. All tasks within this `Pipeline` will be executed in this image environment.
- `build`: Specifies a `Dockerfile` used to build a temporary image, which is then used as the value for `image`.
- `volumes`: Declares volumes for caching scenarios.
# image
- type: `Object` | `String`

Specifies the environment image for the current `Pipeline`. All tasks within this `Pipeline` will be executed in this image environment. It supports referencing environment variables.

- `image.name`: `String`. The image name, such as `node:20`.
- `image.dockerUser`: `String`. The Docker username used to pull the specified image.
- `image.dockerPassword`: `String`. The Docker password used to pull the specified image.

If `image` is specified as a string, it is equivalent to specifying `image.name`.
Example 1: Using a public image.
```yaml
main:
  push:
    - docker:
        # Use the node:20 image from the official Docker registry as the image for the build container.
        image: node:20
      stages:
        - name: show version
          script: node -v
```
Example 2: Using a private image from the CNB artifact registry.
```yaml
main:
  push:
    - docker:
        # To use a private image as the build container image, you need to provide the Docker username and password.
        image:
          name: docker.cnb.cool/images/pipeline-env:1.0
          # Environment variables injected by default when using CI build
          dockerUser: $CNB_TOKEN_USER_NAME
          dockerPassword: $CNB_TOKEN
      stages:
        - name: echo
          script: echo "hello world"
```
Example 3: Using a private image from the official Docker registry.

```yaml
main:
  push:
    - imports: https://xxx/docker.yml
      docker:
        # To use a non-public image as the build container, you need to provide the Docker username and password.
        image:
          name: images/pipeline-env:1.0
          # Environment variables imported in docker.yml
          dockerUser: $DOCKER_USER
          dockerPassword: $DOCKER_PASSWORD
      stages:
        - name: echo
          script: echo "hello world"
```

docker.yml:

```yaml
DOCKER_USER: user
DOCKER_PASSWORD: password
```
# build
- type: `Object` | `String`

Specifies a `Dockerfile` used to build a temporary image, which is then used as the value for `image`. It supports referencing environment variables.

- `build.dockerfile`: `String`. Specifies the path to the `Dockerfile`.
- `build.target`: `String`. Corresponds to the `--target` parameter of `docker build`. It selectively builds a specific stage of the Dockerfile instead of building the entire Dockerfile.
- `build.by`: `Array<String>` | `String`. Declares the list of files that the cached build depends on. Note: apart from the Dockerfile, files not listed in `by` are treated as non-existent during the image build. If the type is `String`, separate multiple files with commas.
- `build.versionBy`: `Array<String>` | `String`. Used for version control. When the content of the specified files changes, it is considered a new version. The calculation logic is based on the expression: sha1(dockerfile + versionBy + buildArgs). If the type is `String`, separate multiple files with commas.
- `build.buildArgs`: `Object`. Inserts additional build arguments during the build (`--build-arg $key=$value`). If the value is null, only the key is added (`--build-arg $key`).
- `build.ignoreBuildArgsInVersion`: `Boolean`. Specifies whether to ignore `buildArgs` in the version calculation. See `versionBy` for more details.
- `build.sync`: `String`. Specifies whether to wait for a successful `docker push` before continuing. The default value is `false`.

If `build` is specified as a string, it is equivalent to specifying `build.dockerfile`.
Usage of Dockerfile:
```yaml
main:
  push:
    - docker:
        # Specify the build environment using a Dockerfile
        build: ./image/Dockerfile
      stages:
        - stage1
        - stage2
        - stage3
```

```yaml
main:
  push:
    - docker:
        # Specify the build environment using a Dockerfile
        build:
          dockerfile: ./image/Dockerfile
          dockerImageName: cnb.cool/project/images/pipeline-env
          dockerUser: $DOCKER_USER
          dockerPassword: $DOCKER_PASSWORD
      stages:
        - stage1
        - stage2
        - stage3
```

```yaml
main:
  push:
    - docker:
        # Specify the build environment using a Dockerfile
        build:
          dockerfile: ./image/Dockerfile
          # Build only the builder stage, not the entire Dockerfile
          target: builder
      stages:
        - stage1
        - stage2
        - stage3
```
Usage of Dockerfile versionBy:
Example: Cache pnpm in the environment image to optimize subsequent pnpm install operations.
```yaml
main:
  push:
    - docker:
        build:
          dockerfile: ./Dockerfile
          versionBy:
            - package-lock.json
      stages:
        - name: pnpm i
          script: pnpm i
        - stage1
        - stage2
        - stage3
```
Dockerfile:

```dockerfile
FROM node:20
# Replace with the actual source
RUN npm config set registry https://xxx.com/npm/ \
  && npm i -g pnpm \
  && pnpm config set store-dir /lib/pnpm
WORKDIR /data/orange-ci/workspace
COPY package.json package-lock.json ./
RUN pnpm i
```
Build log for the `pnpm i` stage:

```
[pnpm i] Progress: resolved 445, reused 444, downloaded 0, added 100
[pnpm i] Progress: resolved 445, reused 444, downloaded 0, added 141
[pnpm i] Progress: resolved 445, reused 444, downloaded 0, added 272
[pnpm i] Progress: resolved 445, reused 444, downloaded 0, added 444, done
[pnpm i]
[pnpm i] dependencies:
[pnpm i] + mocha 8.4.0 (10.0.0 is available)
[pnpm i]
[pnpm i] devDependencies:
[pnpm i] + babel-eslint 9.0.0 (10.1.0 is available)
[pnpm i] + eslint 5.16.0 (8.23.0 is available)
[pnpm i] + jest 27.5.1 (29.0.2 is available)
[pnpm i]
[pnpm i]
[pnpm i] Job finished, duration: 6.8s
```
# volumes
- type: `Array<String>` | `String`

Declares volumes for caching scenarios. Multiple volumes can be specified as an array or separated by commas in a string. It supports referencing environment variables. The supported formats are:

- `<group>:<path>:<type>`
- `<path>:<type>`
- `<path>`

Meaning of each item:

- `group`: Optional. Specifies the volume group; groups are isolated from each other.
- `path`: Required. The absolute path, or relative path (starting with `./`), at which to mount the volume. A relative path is resolved against the workspace.
- `type`: Optional. The volume type. The default value is `copy-on-write`. Supported types are:
  - `read-write` or `rw`: Read-write; concurrent write conflicts must be handled manually. Suitable for serial build scenarios.
  - `read-only` or `ro`: Read-only; write operations will throw exceptions.
  - `copy-on-write` or `cow`: Read-write; changes (additions, modifications, deletions) are merged after a successful build. Suitable for concurrent build scenarios.
  - `copy-on-write-read-only`: Read-only; changes (additions, modifications, deletions) are discarded after the build.
  - `data`: Creates a temporary volume that is automatically cleaned up when the pipeline ends.
# copy-on-write
`copy-on-write` is used in caching scenarios and supports concurrency.

Copy-on-write allows the system to share a single copy of the data until a modification is made, enabling efficient cache replication. In a concurrent environment, this avoids conflicts between read and write operations on the cache, because private copies of the data are created only when modifications are actually needed. As a result, only write operations incur data copying, while read operations can safely run in parallel without data-consistency concerns. This mechanism significantly improves performance, especially in cache scenarios where reads outnumber writes.
# data
`data` is used for sharing data: it makes a specified directory inside one container available to other containers. This is achieved by creating a data volume and mounting it into the various containers. Unlike mounting a directory from the build machine directly into a container, if the specified directory already exists inside the container, its existing content is copied to the data volume before mounting, so the volume's content is not simply overwritten by the container directory.
# Volumes Example
Example 1: Mounting directories from the build node into containers to implement local caching.

```yaml
main:
  push:
    - docker:
        image: node:20
        # Declare volumes
        volumes:
          - /data/config:read-only
          - /data/mydata:read-write
          # Use the cache and update it at the same time
          - /root/.npm
          # Use the main cache and update it at the same time
          - main:/root/.gradle:copy-on-write
      stages:
        - stage1
        - stage2
        - stage3
  pull_request:
    - docker:
        image: node:20
        # Declare volumes
        volumes:
          - /data/config:read-only
          - /data/mydata:read-write
          # Use copy-on-write cache
          - /root/.npm
          - node_modules
          # Use the main cache for PRs, but do not update it
          - main:/root/.gradle:copy-on-write-read-only
      stages:
        - stage1
        - stage2
        - stage3
```
Example 2: Sharing files packaged inside a container with other containers.
```yaml
# .cnb.yml
main:
  push:
    - docker:
        # Assume there is a Go application located at the /go-app/cli path inside the image.
        image: go-app-cli
        # Declare volumes
        volumes:
          # This path exists in the go-app-cli image, so when the job image runs,
          # the content of this path is copied to a temporary data volume,
          # which can be shared with other task containers.
          - /go-app
      stages:
        - name: show /go-app-cli in job container
          image: alpine
          script: ls /go-app
```
# git
- type: `Object`

Provides Git repository-related configurations.
# git.enable
- type: `Boolean`
- default: `true`

Specifies whether to fetch the code. For the `branch.delete` event, the default value is `false`; for all other events, it is `true`.
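A minimal sketch (the notify-only scenario is illustrative): disable the code checkout for a pipeline that does not need the repository contents.

```yaml
main:
  push:
    - git:
        enable: false
      stages:
        - name: no checkout
          script: echo "runs without fetching the repository"
```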
# git.submodules
- type: `Object` | `Boolean`
- default: `true`

Specifies whether to fetch submodules.

Specific parameters can be provided as an `Object`, with the following defaults for omitted fields:

```json
{
  "enable": true,
  "remote": false
}
```
Basic usage:
```yaml
main:
  push:
    - git:
        enable: true
        submodules: true
      stages:
        - name: echo
          script: echo "hello world"
    - git:
        enable: true
        submodules:
          enable: true
          remote: true
      stages:
        - name: echo
          script: echo "hello world"
```
# git.submodules.enable
Specifies whether to fetch submodules (`submodules`).
# git.submodules.remote
When executing `git submodule update`, adds the `--remote` parameter so that the latest code of each submodule is fetched every time.
# git.lfs
- type: `Object` | `Boolean`
- default: `true`

Specifies whether to fetch LFS files.

Specific parameters can be provided as an `Object`, with the following defaults for omitted fields:

```json
{
  "enable": true
}
```
Basic Usage:
```yaml
main:
  push:
    - git:
        enable: true
        lfs: true
      stages:
        - name: echo
          script: echo "hello world"
    - git:
        enable: true
        lfs:
          enable: true
      stages:
        - name: echo
          script: echo "hello world"
```
# git.lfs.enable
Specifies whether to fetch LFS files.
# services
- type: `Array<String>`

Declares the services required during the build, in the format `name:[version]`, where `version` is optional.

Currently supported services include:

- docker
- vscode
# service:docker
Enables the `dind` (Docker-in-Docker) service. Declare it when you need to perform operations such as `docker build` or `docker login` during the build process. It automatically injects the `docker daemon` and `docker cli` into the environment.
Example:
```yaml
main:
  push:
    - services:
        - docker
      docker:
        image: alpine
      stages:
        - name: docker info
          script:
            - docker info
            - docker ps
```
This service automatically runs `docker login` against the image registry of the CNB repository (docker.cnb.cool). In subsequent tasks, you can `docker push` directly to the image registry of the current repository.
Example:
```yaml
main:
  push:
    - services:
        - docker
      stages:
        - name: build and push
          script: |
            # A Dockerfile exists in the repository root directory.
            docker build -t ${CNB_DOCKER_REGISTRY}/${CNB_REPO_SLUG_LOWERCASE}:latest .
            docker push ${CNB_DOCKER_REGISTRY}/${CNB_REPO_SLUG_LOWERCASE}:latest
```
# service:vscode
Declare it when remote development is required.
Example:
```yaml
$:
  vscode:
    - services:
        - vscode
        - docker
      docker:
        image: alpine
      stages:
        - name: uname
          script: uname -a
```
# env
- type: `Object`

Sets environment variables for your tasks. These variables apply to all non-plugin tasks in the current `Pipeline`.
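A minimal sketch (the variable names are illustrative): variables defined under `env` are visible to every non-plugin task in this pipeline.

```yaml
main:
  push:
    - env:
        NODE_ENV: production
        API_HOST: https://example.com
      stages:
        - name: print env
          script: echo "$NODE_ENV $API_HOST"
```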
# imports
- type: `Array<String>` | `String`

Specifies a file path, which can belong to the current repository or another CNB Git repository. The file's content is used as a source of environment variables. Local paths such as `./env.yml` are resolved to a remote file address before loading. Private repositories are typically used to store credentials for accounts such as npm and Docker.
Example:
```yaml
# env.yml
DOCKER_USER: "username"
DOCKER_TOKEN: "token"
DOCKER_REGISTRY: "https://xxx/xxx"
```
```yaml
# .cnb.yml
main:
  push:
    - services:
        - docker
      imports:
        - https://cnb.cool/<your-repo-slug>/-/blob/main/env.yml
      stages:
        - name: docker push
          script: |
            docker login -u ${DOCKER_USER} -p "${DOCKER_TOKEN}" ${CNB_DOCKER_REGISTRY}
            docker build -t ${DOCKER_REGISTRY}/${CNB_REPO_SLUG_LOWERCASE}:latest .
            docker push ${DOCKER_REGISTRY}/${CNB_REPO_SLUG_LOWERCASE}:latest
```
Note: Not applicable to plugin tasks.
Cloud Native Build now supports keystore repositories, which provide higher security and support auditing of file references.
Supported file formats:
- `yaml`: Parses files with the `.yml` or `.yaml` extension.
- `json`: Parses files with the `.json` extension.
- `plain`: Each line is in the format `key=value`. This format is used for files with extensions other than the above. (Not recommended)
Priority of keys with the same name:
- When the `imports` configuration is an array, parameters defined in later files override those with the same name in earlier files.
- If a key also appears in the `env` parameter, the value in `env` overrides the one from the `imports` file.
Variable assignment:
The `imports` file paths can be read from environment variables. If the `imports` parameter is an array, later file paths can reference variables defined in earlier files.
```json
// env1.json
{
  "FILE": "https://xxx/env2.yml"
}
```

```yaml
# env2.yml
TEST_TOKEN: some token
```

```yaml
main:
  push:
    - imports:
        - ./env1.json
        - $FILE
      stages:
        - name: echo
          script: echo $TEST_TOKEN
```
The referenced file can declare which repositories are allowed to reference it; refer to File Reference Authorization.
Example: `team_name/project_name/*` matches all repositories under a project:

```yaml
key: value
allow_slugs:
  - team_name/project_name/*
```

Allowed to be referenced by all repositories:

```yaml
key: value
allow_slugs:
  - "**"
```
In most cases, the configuration file is a simple single-level object, such as:
```json
// env.json
{
  "token": "private token",
  "password": "private password"
}
```
To handle more complex configuration files and scenarios, `imports` also supports nested objects. If the imported file parses to an object with deep-level properties (the first level cannot be an array), it is flattened into a single-level object according to the following rules:

- Property names are preserved, and property values are converted to strings.
- If a property value is an object (including arrays), it is recursively flattened, and the property path segments are joined with `_`.
```json
// env.json
{
  "key1": [
    "value1",
    "value2"
  ],
  "key2": {
    "subkey1": [
      "value3",
      "value4"
    ],
    "subkey2": "value5"
  },
  "key3": [
    "value6",
    {
      "subsubkey1": "value7"
    }
  ],
  "key4": "value8"
}
```
It will be flattened into:
```json
{
  // Original property values are converted to strings
  "key1": "value1,value2",
  // If a property value is an object, it is additionally flattened recursively into extra properties
  "key1_0": "value1",
  "key1_1": "value2",
  "key2": "[object Object]",
  "key2_subkey1": "value3,value4",
  "key2_subkey1_0": "value3",
  "key2_subkey1_1": "value4",
  "key2_subkey2": "value5",
  "key3": "value6,[object Object]",
  "key3_0": "value6",
  "key3_1": "[object Object]",
  "key3_1_subsubkey1": "value7",
  "key4": "value8"
}
```
```yaml
main:
  push:
    - imports:
        - ./env.json
      stages:
        - name: echo
          script: echo $key3_1_subsubkey1
```
# label
- type: `Object`

Assigns labels to the pipeline. Each label value can be a string or an array of strings. These labels can later be used to filter pipeline records.
Here’s an example workflow: Merging into the main branch triggers a pre-release deployment, while tagging triggers a production release.
```yaml
main:
  push:
    - label:
        # Regular pipeline for the master branch
        type:
          - MASTER
          - PREVIEW
      stages:
        - name: install
          script: npm install
        - name: CCK-lint
          script: npm run lint
        - name: BVT-build
          script: npm run build
        - name: UT-test
          script: npm run test
        - name: pre release
          script: ./pre-release.sh
$:
  tag_push:
    - label:
        # Regular pipeline for product releases
        type: RELEASE
      stages:
        - name: install
          script: npm install
        - name: build
          script: npm run build
        - name: DELIVERY-release
          script: ./release.sh
```
# stages
- type: `Array<Job>`

Defines a set of stage tasks that run sequentially.
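A minimal sketch: two stage tasks that run one after the other; each entry is a Job as described in the Stage Introduction.

```yaml
main:
  push:
    - stages:
        - name: install
          script: npm install
        - name: test
          script: npm test
```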
# failStages
- type: `Array<Job>`

Defines a set of stage tasks that are executed in sequence when the normal workflow fails.
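A minimal sketch (the notification command is illustrative): when a task in `stages` fails, the tasks under `failStages` run in order.

```yaml
main:
  push:
    - stages:
        - name: build
          script: npm run build
      failStages:
        - name: notify failure
          script: echo "build failed"
```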
# endStages
- type: `Array<Job>`

Defines a set of tasks to be executed at the end of the pipeline. After the pipeline's stages/failStages have completed, these stage tasks are executed in sequence before the pipeline ends.

As long as the pipeline's prepare stage succeeds, endStages is executed regardless of whether the stages succeed. The result of endStages does not affect the pipeline's status (even if endStages fails, the pipeline can still be reported as successful).

If the pipeline is stopped via the stop button, these tasks are not executed.
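A minimal sketch (the cleanup command is illustrative): the `cleanup` task runs after `stages`/`failStages` have finished, whether the build succeeded or failed.

```yaml
main:
  push:
    - stages:
        - name: build
          script: npm run build
      endStages:
        - name: cleanup
          script: echo "runs after stages and failStages complete"
```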
# ifNewBranch
- type: `Boolean`
- default: `false`

If set to `true`, the pipeline is executed only when the current branch is a new branch, i.e. when the pipeline's built-in environment variable `CNB_IS_NEW_BRANCH` is `true`.

When both `ifNewBranch` and `ifModify` are present, the pipeline is executed if either condition is met.
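A minimal sketch (the `"**"` branch pattern is an assumption about how branches are matched in your configuration): run this pipeline only for pushes that create a new branch.

```yaml
"**":
  push:
    - ifNewBranch: true
      stages:
        - name: greet new branch
          script: echo "CNB_IS_NEW_BRANCH=$CNB_IS_NEW_BRANCH"
```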
# ifModify
- type: `Array<String>` | `String`

The pipeline runs only when the specified files change. Use a glob pattern string or an array of such strings to define the files.
Example 1:
The pipeline will be executed if the modified file list includes `a.js` or `b.js`.
```yaml
ifModify:
  - a.js
  - b.js
```
Example 2:
The pipeline will be executed if the modified file list includes files with the `.js` extension. `**/*.js` matches all `.js` files in subdirectories, while `*.js` matches `.js` files in the root directory.
```yaml
ifModify:
  - "**/*.js"
  - "*.js"
```
Example 3:
Reverse matching: exclude the `legacy` directory and all Markdown files; trigger when any other files change.
```yaml
ifModify:
  - "**"
  - "!(legacy/**)"
  - "!(**/*.md)"
  - "!*.md"
```
Example 4:
Reverse matching: trigger the pipeline when there are changes in the `src` directory, except in the `src/legacy` directory.
```yaml
ifModify:
  - "src/**"
  - "!(src/legacy/**)"
```
# Supported Events
- For the `push` event on non-new branches, the changes between `before` and `after` are analyzed to determine the modified files.
- For a `cnb:apply` event triggered from a pipeline that was itself triggered by a `push` event on a non-new branch, the same file-change analysis is performed.
- For events triggered by a pull request (`PR`), the modified files within the pull request are analyzed.
- For a `cnb:apply` event triggered from an event that was itself triggered by a pull request (`PR`), the modified files within the pull request are also analyzed.

Because the number of modified files can be large, file calculation is capped at 300 files.
# breakIfModify
- type: `Boolean`
- default: `false`

If set to `true`, the pipeline is terminated if the source branch has been updated before job execution.
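A minimal sketch: abort this build if new commits are pushed to the branch before the jobs start executing.

```yaml
main:
  push:
    - breakIfModify: true
      stages:
        - name: build
          script: npm run build
```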
# retry
- type: `Number`
- default: `0`

Specifies the number of retry attempts for failed jobs. A value of `0` means no retries are performed.
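A minimal sketch (the test command is illustrative): each failed task in this pipeline is retried up to two times.

```yaml
main:
  push:
    - retry: 2
      stages:
        - name: flaky tests
          script: npm test
```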
# allowFailure
- type: `Boolean`
- default: `false`

Specifies whether the current pipeline is allowed to fail. When set to `true`, the failure status of the pipeline is not reported to CNB.
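A minimal sketch (the canary script is illustrative): the `experimental` pipeline may fail without reporting a failure status to CNB, while the `required-checks` pipeline reports normally.

```yaml
main:
  push:
    - name: required-checks
      stages:
        - name: test
          script: npm test
    - name: experimental
      allowFailure: true
      stages:
        - name: canary build
          script: npm run build:canary
```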
# lock
- type: `Object` | `Boolean`

Sets a lock for the pipeline. The lock is automatically released after the pipeline completes and cannot be used across repositories.

Behavior: when pipeline A holds the lock and pipeline B requests it, pipeline B can either terminate pipeline A, or wait for pipeline A to finish and release the lock before continuing.

- `key`:
  - type: `String`
  - Custom lock name. By default, it is set to `branchName-pipelineName`, which means the lock scope is the current pipeline.
- `expires`:
  - type: `Number`
  - default: `3600` (1 hour)
  - Lock expiration time in seconds. After expiration, the lock is automatically released.
- `timeout`:
  - type: `Number`
  - default: `3600` (1 hour)
  - Timeout in seconds for waiting for the lock.
- `cancel-in-progress`:
  - type: `Boolean`
  - default: `false`
  - Whether to terminate the pipeline that currently holds the lock or is waiting for it, so that the current pipeline can acquire the lock and execute.
- `wait`:
  - type: `Boolean`
  - default: `false`
  - Whether to wait if the lock is already held (waiting does not consume pipeline resources or time). If set to `false`, an error is thrown immediately. Cannot be used together with `cancel-in-progress`.
- `cancel-in-wait`:
  - type: `Boolean`
  - default: `false`
  - Whether to terminate pipelines that are waiting for the lock and add the current pipeline to the waiting queue. Must be used together with the `wait` attribute.

If `lock` is `true`, then `key`, `expires`, `timeout`, `cancel-in-progress`, `wait`, and `cancel-in-wait` take their respective default values.
Example 1: Lock as a Boolean value
```yaml
main:
  push:
    - lock: true
      stages:
        - name: stage1
          script: echo "stage1"
```
Example 2: Lock as an Object
```yaml
main:
  push:
    - lock:
        key: key
        expires: 600 # 10 minutes
        wait: true
        timeout: 60 # maximum wait time of 1 minute
      stages:
        - name: stage1
          script: echo "stage1"
```
Example 3: Stop the previously running pipeline for a pull_request
```yaml
main:
  pull_request:
    - lock:
        key: pr
        cancel-in-progress: true
      stages:
        - name: echo hello
          script: echo "stage1"
```