Pipeline Grammar
Basic Concepts

- Trigger Branch: corresponds to a branch of the code repository; specifies which branch the pipeline builds on.
- Trigger Event: specifies which event triggers a build; one event can trigger multiple pipelines.
- `Pipeline`: represents a pipeline, containing one or more `Stage`s, each executed sequentially.
- `Stage`: represents a build stage, consisting of one or more `Job`s.
- `Job`: the most basic task execution unit.
main: # Trigger branch
push: # Trigger event, corresponding to a build, can contain multiple pipelines. Can be an array or an object.
- name: pipeline-1 # Pipeline structure
stages:
- name: stage-1 # Stage structure
jobs:
- name: job-1 # Job structure
script: echo
main: # Trigger branch
push: # Trigger event, corresponding to a build, specifying pipelines via an object
pipeline-key:
stages:
- name: stage-1 # Stage structure
jobs:
- name: job-1 # Job structure
script: echo
Pipeline

`Pipeline` represents a pipeline containing one or more `Stage`s, each executed sequentially.

A basic `Pipeline` configuration looks like this:
name: Pipeline name
docker:
image: node
build: dev/Dockerfile
volumes:
- /root/.npm:copy-on-write
git:
enable: true
submodules: true
lfs: true
services:
- docker
env:
TEST_KEY: TEST_VALUE
imports:
- https://cnb.cool/<your-repo-slug>/-/blob/main/xxx/envs.yml
- ./env.txt
label:
type: MASTER
class: MAIN
stages:
- name: stage 1
script: echo "stage 1"
- name: stage 2
script: echo "stage 2"
- name: stage 3
script: echo "stage 3"
failStages:
- name: fail stage 1
script: echo "fail stage 1"
- name: fail stage 2
script: echo "fail stage 2"
endStages:
- name: end stage 1
script: echo "end stage 1"
- name: end stage 2
script: echo "end stage 2"
ifModify:
- a.txt
- "src/**/*"
retry: 3
allowFailure: false
name

- type: `String`

Specifies the pipeline name; the default is `pipeline`. When there are multiple parallel pipelines, the default names are `pipeline`, `pipeline-1`, `pipeline-2`, and so on. Define `name` to distinguish different pipelines.
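For instance, two named parallel pipelines on the same event might look like this (the names and scripts are illustrative):

```yaml
main:
  push:
    # Without `name`, these would show up as "pipeline" and "pipeline-1"
    - name: lint
      stages:
        - name: run lint
          script: npm run lint
    - name: test
      stages:
        - name: run test
          script: npm test
```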
runner

- type: `Object`

Specifies parameters related to the build node.

- `tags`: optional; which tags the build node must have.
- `cpus`: optional; the number of CPU cores to use for the build.
tags

- type: `String` | `Array<String>`
- default: `cnb:arch:default`

Specifies which tags the build node must have. See Build Nodes for details.
Example:
main:
push:
- runner:
tags: cnb:arch:amd64
stages:
- name: uname
script: uname -a
cpus

- type: `Number`

Specifies the maximum number of CPU cores for the build (memory = CPU cores * 2 GB); CPU and memory cannot exceed the actual capacity of the runner machine.
If not configured, the maximum available CPU cores are determined by the runner machine configuration.
Example:
# cpus = 1, memory = 2G
main:
push:
- runner:
cpus: 1
stages:
- name: echo
script: echo "hello world"
docker

- type: `Object`

Specifies `docker`-related parameters. See Build Environment for details.

- `image`: the environment image for the current `Pipeline`. All tasks under this `Pipeline` run in this image environment.
- `build`: a `Dockerfile` used to build a temporary image, which is then used as the value of `image`.
- `volumes`: declares data volumes, for caching scenarios.
image

- type: `Object` | `String`

Specifies the environment image for the current `Pipeline`. All tasks under this `Pipeline` run in this image environment. Environment variable references are supported.

- `image.name`: `String`. Image name, e.g., `node:20`.
- `image.dockerUser`: `String`. Docker username used to pull the specified image.
- `image.dockerPassword`: `String`. Docker password used to pull the specified image.

If `image` is a string, it is equivalent to specifying `image.name`.
Example 1, using a public image:
main:
push:
- docker:
# Use the node:20 image from the official Docker repository as the build container
image: node:20
stages:
- name: show version
script: node -v
Example 2, using a private image from CNB Artifact:
main:
push:
- docker:
# Use a non-public image as the build container, requiring Docker username and password
image:
name: docker.cnb.cool/images/pipeline-env:1.0
# Use environment variables injected by default during CI builds
dockerUser: $CNB_TOKEN_USER_NAME
dockerPassword: $CNB_TOKEN
stages:
- name: echo
script: echo "hello world"
Example 3, using a private image from the official Docker repository:
main:
push:
- imports: https://cnb.cool/<your-repo-slug>/-/blob/main/xxx/docker.yml
docker:
# Use a non-public image as the build container, requiring Docker username and password
image:
name: images/pipeline-env:1.0
# Environment variables imported from docker.yml
dockerUser: $DOCKER_USER
dockerPassword: $DOCKER_PASSWORD
stages:
- name: echo
script: echo "hello world"
docker.yml
DOCKER_USER: user
DOCKER_PASSWORD: password
build

- type: `Object` | `String`

Specifies a `Dockerfile` used to build a temporary image, which is then used as the value of `image`. Environment variable references are supported.

For a complete example of declaring a build environment using `build`, see docker-build-with-by.

Parameters under `build`:

- `build.dockerfile`: `String`. Path to the `Dockerfile`.
- `build.target`: `String`. Corresponds to the `--target` parameter of `docker build`; builds only the specified stage of the Dockerfile instead of the entire Dockerfile.
- `build.by`: `Array<String>` | `String`. Declares the files the build depends on, for caching. Note: apart from the Dockerfile, files not listed in `by` are treated as non-existent during the image build. If a `String`, separate multiple files with commas.
- `build.versionBy`: `Array<String>` | `String`. Used for version control: when the content of the listed files changes, a new version is produced. The version is calculated as `sha1(dockerfile + versionBy + buildArgs)`. If a `String`, separate multiple files with commas.
- `build.buildArgs`: `Object`. Adds extra build arguments (`--build-arg $key=$value`). If a value is `null`, only the key is added (`--build-arg $key`).
- `build.ignoreBuildArgsInVersion`: `Boolean`. Whether to ignore `buildArgs` in the version calculation. See `versionBy`.
- `build.sync`: `String`. Whether to wait for `docker push` to complete before continuing. Default is `false`.

If `build` is a string, it is equivalent to specifying `build.dockerfile`.
Dockerfile Usage:
main:
push:
- docker:
# Specify the build environment via Dockerfile
build: ./image/Dockerfile
stages:
- stage1
- stage2
- stage3
main:
push:
- docker:
# Specify the build environment via Dockerfile
build:
dockerfile: ./image/Dockerfile
dockerImageName: cnb.cool/project/images/pipeline-env
dockerUser: $DOCKER_USER
dockerPassword: $DOCKER_PASSWORD
stages:
- stage1
- stage2
- stage3
main:
push:
- docker:
# Specify the build environment via Dockerfile
build:
dockerfile: ./image/Dockerfile
# Only build the builder stage, not the entire Dockerfile
target: builder
stages:
- stage1
- stage2
- stage3
Dockerfile versionBy Usage:

Example: cache pnpm in the environment image to speed up subsequent `pnpm i` runs.
main:
push:
# Specify the build environment via Dockerfile
- docker:
build:
dockerfile: ./Dockerfile
versionBy:
- package-lock.json
stages:
- name: pnpm i
script: pnpm i
- stage1
- stage2
- stage3
FROM node:20
# Replace with the actual source
RUN npm config set registry https://xxx.com/npm/ \
&& npm i -g pnpm \
&& pnpm config set store-dir /lib/pnpm
WORKDIR /data/orange-ci/workspace
COPY package.json package-lock.json ./
RUN pnpm i
[pnpm i] Progress: resolved 445, reused 444, downloaded 0, added 100
[pnpm i] Progress: resolved 445, reused 444, downloaded 0, added 141
[pnpm i] Progress: resolved 445, reused 444, downloaded 0, added 272
[pnpm i] Progress: resolved 445, reused 444, downloaded 0, added 444, done
[pnpm i]
[pnpm i] dependencies:
[pnpm i] + mocha 8.4.0 (10.0.0 is available)
[pnpm i]
[pnpm i] devDependencies:
[pnpm i] + babel-eslint 9.0.0 (10.1.0 is available)
[pnpm i] + eslint 5.16.0 (8.23.0 is available)
[pnpm i] + jest 27.5.1 (29.0.2 is available)
[pnpm i]
[pnpm i]
[pnpm i] Job finished, duration: 6.8s
volumes

- type: `Array<String>` | `String`

Declares data volumes. Multiple volumes can be passed as an array or separated by commas; environment variable references are supported. Supported formats:

<group>:<path>:<type>
<path>:<type>
<path>

Meanings:

- `group`: optional volume group. Different groups are isolated from each other.
- `path`: required mount path. Absolute paths (starting with `/`) and relative paths (starting with `./`, relative to the workspace) are supported.
- `type`: optional volume type, default `copy-on-write`. Supported types:
  - `read-write` or `rw`: read-write. Concurrent write conflicts must be handled manually; suitable for serial builds.
  - `read-only` or `ro`: read-only. Write operations throw exceptions.
  - `copy-on-write` or `cow`: read-write. Changes (additions, modifications, deletions) are merged back after a successful pipeline; suitable for concurrent builds.
  - `copy-on-write-read-only`: read-only. Changes are discarded after the pipeline ends.
  - `data`: creates a temporary data volume that is cleaned up automatically after the pipeline ends.

copy-on-write

Used for caching scenarios; supports concurrency.

copy-on-write lets the system share one copy of the data until a modification is needed, enabling efficient cache replication. In concurrent environments this avoids read-write conflicts, because a private copy is created only when a modification actually happens: only writes cause replication, while reads proceed safely in parallel without data-consistency concerns. This significantly improves performance, especially in read-heavy caching scenarios.

data

Used for data sharing: shares specified directories in the container with other containers.

A data volume is created and mounted into each container. Unlike mounting a directory from the build node directly, if the specified directory already exists in the container, its contents are copied into the data volume automatically instead of being overwritten.
volumes Examples
Example 1: Mount directories from the build node into the container for local caching
main:
push:
- docker:
image: node:20
# Declare data volumes
volumes:
- /data/config:read-only
- /data/mydata:read-write
# Use cache and update simultaneously
- /root/.npm
# Use main cache and update simultaneously
- main:/root/.gradle:copy-on-write
stages:
- stage1
- stage2
- stage3
pull_request:
- docker:
image: node:20
# Declare data volumes
volumes:
- /data/config:read-only
- /data/mydata:read-write
# Use copy-on-write cache
- /root/.npm
- node_modules
# PR uses main cache but does not update
- main:/root/.gradle:copy-on-write-read-only
stages:
- stage1
- stage2
- stage3
Example 2: Share files packaged in the container with other containers
# .cnb.yml
main:
push:
- docker:
image: go-app-cli # Assume a Go application is in the /go-app/cli path in the image
# Declare data volumes
volumes:
# This path exists in the go-app-cli image, so when the environment image is executed, its contents are copied to a temporary data volume for sharing with other task containers
- /go-app
stages:
- name: show /go-app-cli in job container
image: alpine
script: ls /go-app
git

- type: `Object`

Git repository-related configuration.

git.enable

- type: `Boolean`
- default: `true`

Whether to fetch the code.
For `branch.delete` events the default is `false`; for other events the default is `true`.

git.submodules

- type: `Object` | `Boolean`
- default: `enable`: `true`, `remote`: `false`

Whether to pull submodules.
When the value is a `Boolean`, it is equivalent to setting `git.submodules.enable` to that value and `git.submodules.remote` to its default `false`.

git.submodules.enable

- type: `Boolean`
- default: `true`

Whether to pull submodules.

git.submodules.remote

- type: `Boolean`
- default: `false`

Whether to add the `--remote` parameter when executing `git submodule update`, ensuring the latest submodule code is pulled each time.
Basic Usage:
main:
push:
- git:
enable: true
submodules: true
stages:
- name: echo
script: echo "hello world"
- git:
enable: true
submodules:
enable: true
remote: true
stages:
- name: echo
script: echo "hello world"
git.lfs

- type: `Object` | `Boolean`
- default: `true`

Whether to fetch LFS files.
An `Object` can be used to set individual parameters. Omitted fields take these defaults:
{
"enable": true
}
Basic Usage:
main:
push:
- git:
enable: true
lfs: true
stages:
- name: echo
script: echo "hello world"
- git:
enable: true
lfs:
enable: true
stages:
- name: echo
script: echo "hello world"
git.lfs.enable
Specifies whether to fetch LFS files.
services

- type: `Array<String>`

Declares services required during the build, in the format `name:[version]`, where `version` is optional.

Currently supported services:

- docker
- vscode
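A sketch of the `name:[version]` form (the version tag below is illustrative; check the platform's service documentation for versions that actually exist):

```yaml
main:
  push:
    - services:
        # pin a service version; a bare "docker" uses the default version
        - docker:20
      stages:
        - name: docker version
          script: docker version
```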
service:docker

Enables the `dind` service.
When the build needs operations such as `docker build` or `docker login`, declare this service to automatically inject the `docker daemon` and `docker cli` into the environment.
Example:
main:
push:
- services:
- docker
docker:
image: alpine
stages:
- name: docker info
script:
- docker info
- docker ps
This service automatically logs into the CNB Docker Artifact registry (`docker.cnb.cool`), so subsequent tasks can `docker push` directly to the current repository's Docker Artifact.
Example:
main:
push:
- services:
- docker
stages:
- name: build and push
script: |
# Dockerfile exists in the root directory
docker build -t ${CNB_DOCKER_REGISTRY}/${CNB_REPO_SLUG_LOWERCASE}:latest .
docker push ${CNB_DOCKER_REGISTRY}/${CNB_REPO_SLUG_LOWERCASE}:latest
service:vscode
Declared when remote development is needed.
Example:
$:
vscode:
- services:
- vscode
- docker
docker:
image: alpine
stages:
- name: uname
script: uname -a
env

- type: `Object`

Specifies environment variables: defines a set of environment variables available during task execution. Effective for all non-plugin tasks in the current `Pipeline`.

imports

- type: `Array<String>` | `String`

Specifies file paths in a CNB Git repository (relative paths or HTTP addresses) whose contents are read as a source of environment variables.
Local relative paths such as `./env.yml` are expanded into remote HTTP file addresses before loading.
Cloud Native Build also supports the Keystore, which offers enhanced security and file-reference auditing. A Keystore is typically used to store account credentials, such as those for `npm` and `docker`.
Example:
#env.yml
DOCKER_USER: "username"
DOCKER_TOKEN: "token"
DOCKER_REGISTRY: "https://xxx/xxx"
#.cnb.yml
main:
push:
- services:
- docker
imports:
- https://cnb.cool/<your-repo-slug>/-/blob/main/xxx/env.yml
stages:
- name: docker push
script: |
docker login -u ${DOCKER_USER} -p "${DOCKER_TOKEN}" ${CNB_DOCKER_REGISTRY}
docker build -t ${DOCKER_REGISTRY}/${CNB_REPO_SLUG_LOWERCASE}:latest .
docker push ${DOCKER_REGISTRY}/${CNB_REPO_SLUG_LOWERCASE}:latest
Note: not effective for plugin tasks.

Supported file formats:

- `yaml`: files with `.yml` or `.yaml` extensions.
- `json`: files with the `.json` extension.
- `plain`: one `key=value` per line; all other extensions are parsed this way. (Not recommended.)
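A minimal `plain`-format file might look like this (keys and values are illustrative):

```
NODE_ENV=production
NPM_REGISTRY=https://registry.npmmirror.com
```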
Priority for duplicate keys:

- When `imports` is an array, duplicated parameters are overwritten by later files.
- If a parameter duplicates one in `env`, the `env` value overrides the one from the `imports` file.
Variable Assignment

Paths in `imports` can reference environment variables. When `imports` is an array, later file paths can reference variables defined in earlier files.
// env.json
{
FILE: "https://cnb.cool/<your-repo-slug>/-/blob/main/xxx/env2.yml"
}
# env2.yml
TEST_TOKEN: some token
main:
push:
- imports:
- ./env1.json
- $FILE
stages:
- name: echo
script: echo $TEST_TOKEN
Referenced files can declare accessible scopes. See Configuration File Reference Authentication.
Example: `team_name/project_name/*` matches all repositories under a project:
key: value
allow_slugs:
- team_name/project_name/*
Allow references from all repositories:
key: value
allow_slugs:
- "**"
Most configuration files are simple single-layer objects, such as:
// env.json
{
"token": "private token",
"password": "private password"
}
To handle complex configuration files and scenarios, `imports` supports nested objects. If the object parsed from an imported file contains deep properties (the first layer must not be an array), it is flattened into a single-layer object with the following rules:

- Property names are retained, and property values are converted to strings.
- If a property value is an object (including arrays), it is recursively flattened, with property paths joined by `_`.
// env.json
{
"key1": [
"value1",
"value2"
],
"key2": {
"subkey1": [
"value3",
"value4"
],
"subkey2": "value5"
},
"key3": [
"value6",
{
"subsubkey1": "value7"
}
],
"key4": "value8"
}
Will be flattened into:
{
// Original property values converted to strings
"key1": "value1,value2",
// If a property value is an object, additional recursive flattening is performed to add properties
"key1_0": "value1",
"key1_1": "value2",
"key2": "[object Object]",
"key2_subkey1": "value3,value4",
"key2_subkey1_0": "value3",
"key2_subkey1_1": "value4",
"key2_subkey2": "value5",
"key3": "value6,[object Object]",
"key3_0": "value6",
"key3_1": "[object Object]",
"key3_1_subsubkey1": "value7",
"key4": "value8"
}
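The flattening rules above can be sketched in Python (a model of the documented behavior, not CNB's actual implementation; note the JavaScript-style stringification, which is where `[object Object]` comes from):

```python
def js_str(value):
    """Mimic JavaScript String(value) for JSON-like data."""
    if isinstance(value, list):
        # JS joins array elements with commas
        return ",".join(js_str(v) for v in value)
    if isinstance(value, dict):
        return "[object Object]"
    if isinstance(value, bool):
        return "true" if value else "false"
    return str(value)


def flatten(obj, prefix="", out=None):
    """Flatten nested dicts/lists into one layer, joining paths with '_'."""
    if out is None:
        out = {}
    items = enumerate(obj) if isinstance(obj, list) else obj.items()
    for key, value in items:
        path = f"{prefix}_{key}" if prefix else str(key)
        out[path] = js_str(value)
        if isinstance(value, (dict, list)):
            flatten(value, path, out)
    return out


env = {
    "key1": ["value1", "value2"],
    "key2": {"subkey1": ["value3", "value4"], "subkey2": "value5"},
    "key3": ["value6", {"subsubkey1": "value7"}],
    "key4": "value8",
}
print(flatten(env)["key3_1_subsubkey1"])  # value7
```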
main:
push:
- imports:
- ./env.json
stages:
- name: echo
script: echo $key3_1_subsubkey1
label

- type: `Object`

Assigns labels to the pipeline. Each label value can be a string or an array of strings. Labels can be used later for pipeline record filtering and other features.

Example workflow: merging to the main branch releases to the pre-release environment, and pushing a tag releases to the production environment.
main:
push:
- label:
# Regular pipeline for the Master branch
type:
- MASTER
- PREVIEW
stages:
- name: install
script: npm install
- name: CCK-lint
script: npm run lint
- name: BVT-build
script: npm run build
- name: UT-test
script: npm run test
- name: pre release
script: ./pre-release.sh
$:
tag_push:
- label:
# Regular pipeline for the product release branch
type: RELEASE
stages:
- name: install
script: npm install
- name: build
script: npm run build
- name: DELIVERY-release
script: ./release.sh
stages

- type: `Array<Stage|Job>`

Defines a set of stage tasks, executed sequentially.

failStages

- type: `Array<Stage|Job>`

Defines a set of failure-stage tasks, executed sequentially when the normal flow fails.

endStages

- type: `Array<Stage|Job>`

Defines a set of tasks executed at the end of the pipeline: after stages/failStages complete, these tasks run sequentially before the pipeline ends.
If the pipeline's prepare stage succeeds, endStages execute regardless of whether stages succeed. The result of endStages does not affect the pipeline status (endStages may fail while the pipeline status is success).

ifNewBranch

- type: `Boolean`
- default: `false`

If `true`, the `Pipeline` executes only when the current branch is new (i.e., `CNB_IS_NEW_BRANCH` is `true`).

Note: if both `ifNewBranch` and `ifModify` are present, the `Pipeline` executes if either condition is met.
ifModify

- type: `Array<String>` | `String`

Specifies that the `Pipeline` executes only when the listed files are modified. A `glob` expression string or an array of such strings.
Example 1:

Executes if the modified file list includes `a.js` or `b.js`.
ifModify:
- a.js
- b.js
Example 2:

Executes if the modified file list includes files with the `js` extension. `**/*.js` matches `js` files in all subdirectories; `*.js` matches `js` files in the root directory.
ifModify:
- "**/*.js"
- "*.js"
Example 3:
Reverse matching, excluding the legacy
directory and all Markdown files, triggering on other file changes.
ifModify:
- "**"
- "!(legacy/**)"
- "!(**/*.md)"
- "!*.md"
Example 4:
Reverse matching, triggering if there are changes in the src
directory except for src/legacy
.
ifModify:
- "src/**"
- "!(src/legacy/**)"
Supported Events

- `push` events on non-new branches, comparing `before` and `after` to compute the modified files.
- Builds triggered via `cnb:apply` from `push`-event pipelines on non-new branches, using the same modified-file rules.
- Events triggered by a `PR`, using the files modified in the `PR`.
- Builds triggered via `cnb:apply` from `PR`-triggered events, using the files modified in the `PR`.

Because file changes can be numerous, the modified-file computation is capped at 300 files.
breakIfModify

- type: `Boolean`
- default: `false`

Terminates the build if the source branch is updated before the `Job` executes.

skipIfModify

- type: `Boolean`
- default: `false`

Skips the current `Job` if the source branch is updated before it executes.

retry

- type: `Number`
- default: `0`

Number of retries on failure. `0` means no retries.
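For instance, a pipeline that retries up to twice on failure might look like this (the script path is illustrative):

```yaml
main:
  push:
    - retry: 2
      stages:
        - name: integration test
          script: ./run-integration-tests.sh
```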
allowFailure

- type: `Boolean`
- default: `false`

Whether the pipeline is allowed to fail.
When set to `true`, a failure of this pipeline is not reported to CNB.
lock

- type: `Object` | `Boolean`

Sets a lock for the `pipeline`. The lock is released automatically after the `pipeline` completes. Locks cannot be shared across repositories.

Behavior: after pipeline A acquires the lock and pipeline B requests it, B can either terminate A, or wait for A to release the lock before acquiring it and continuing.

- `key`: `String`. Custom lock name. Default is `branch name-pipeline name`, i.e., the lock's scope is the current `pipeline`.
- `expires`: `Number`, default `3600` (one hour). Lock expiration time in seconds, after which the lock is released automatically.
- `timeout`: `Number`, default `3600` (one hour). How long to wait for the lock, in seconds.
- `cancel-in-progress`: `Boolean`, default `false`. Whether to terminate pipelines holding or waiting for the lock, so the current pipeline can acquire it and run.
- `wait`: `Boolean`, default `false`. Whether to wait (without consuming pipeline resources or time) if the lock is occupied. If `false`, an error is thrown immediately. Cannot be combined with `cancel-in-progress`.
- `cancel-in-wait`: `Boolean`, default `false`. Whether to terminate pipelines waiting for the lock, so the current pipeline can join the lock queue. Requires `wait`.

If `lock` is `true`, then `key`, `expires`, `timeout`, `cancel-in-progress`, `wait`, and `cancel-in-wait` take their default values.
Example 1: lock
as a Boolean
main:
push:
- lock: true
stages:
- name: stage1
script: echo "stage1"
Example 2: lock
as an Object
main:
push:
- lock:
key: key
expires: 600 # 10 minutes
wait: true
timeout: 60 # Maximum wait of 1 minute
stages:
- name: stage1
script: echo "stage1"
Example 3: Terminate the currently running pipeline under pull_request
main:
pull_request:
- lock:
key: pr
cancel-in-progress: true
stages:
- name: echo hello
script: echo "stage1"
Stage

- type: `Job` | `Object<name: Job>`

`Stage` represents a build stage, consisting of one or more `Job`s. See the Job introduction.

Single Job

If a `Stage` has only one `Job`, the `Stage` can be omitted and the `Job` written directly.
stages:
- name: stage1
jobs:
- name: job A
script: echo hello
Can be simplified to:
- stages:
- name: job A
script: echo hello
When a `Job` is a string, it is treated as a script task, with both `name` and `script` set to that string. Further simplified:
- stages:
- echo hello
Serial Jobs

When the value is an array (ordered), the `Job`s in the group execute sequentially.
# Serial
stages:
- name: install
jobs:
- name: job1
script: echo "job1"
- name: job2
script: echo "job2"
Parallel Jobs

When the value is an object (unordered), the `Job`s in the group execute in parallel.
# Parallel
stages:
- name: install
jobs:
job1:
script: echo "job1"
job2:
script: echo "job2"
Multiple `Job`s can be organized flexibly in serial and parallel. Example of serial followed by parallel:
main:
push:
- stages:
- name: serial first
script: echo "serial"
- name: parallel
jobs:
parallel job 1:
script: echo "1"
parallel job 2:
script: echo "2"
- name: serial next
script: echo "serial next"
name

- type: `String`

`Stage` name.

ifNewBranch

- type: `Boolean`
- default: `false`

If `true`, the `Stage` executes only when the current branch is new (i.e., `CNB_IS_NEW_BRANCH` is `true`).

Note: if any of the `ifNewBranch`, `ifModify`, or `if` conditions is met, the `Stage` executes.

ifModify

- type: `Array<String>` | `String`

Specifies that the `Stage` executes only when the listed files are modified. A `glob` matching expression string or an array of such strings.

if

- type: `Array<String>` | `String`

One or more shell scripts whose exit code determines whether the `Stage` executes. If the exit code is `0`, the step executes.
Example 1: Check the value of a variable
main:
push:
- env:
IS_NEW: true
stages:
- name: is new
if: |
[ "$IS_NEW" = "true" ]
script: echo is new
- name: is not new
if: |
[ "$IS_NEW" != "true" ]
script: echo not new
Example 2: Check the output of a task
main:
push:
- stages:
- name: make info
script: echo 'haha'
exports:
info: RESULT
- name: run if RESULT is haha
if: |
[ "$RESULT" = "haha" ]
script: echo $RESULT
env

- type: `Object`

Same as Pipeline env, but only effective for the current `Stage`.

`Stage` env has higher priority than `Pipeline` env.

imports

- type: `Array<String>` | `String`

Same as Pipeline imports, but only effective for the current `Stage`.

retry

- type: `Number`
- default: `0`

Number of retries on failure. `0` means no retries.

lock

- type: `Boolean` | `Object`

Sets a lock for the `Stage`. The lock is released automatically after the `Stage` completes. Locks cannot be shared across repositories.
Behavior: after task A acquires the lock, task B requesting it must wait for the lock to be released before acquiring it and continuing.

- `lock.key`: `String`. Custom lock name. Default is `branch name-pipeline name-stage index`.
- `lock.expires`: `Number`, default `3600` (one hour). Lock expiration time in seconds, after which the lock is released automatically.
- `lock.wait`: `Boolean`, default `false`. Whether to wait if the lock is occupied.
- `lock.timeout`: `Number`, default `3600` (one hour). How long to wait for the lock, in seconds.

If `lock` is `true`, then `key`, `expires`, `timeout`, `cancel-in-progress`, `wait`, and `cancel-in-wait` take their default values.
Example 1: lock
as a Boolean
main:
push:
- stages:
- name: stage1
lock: true
jobs:
- name: job1
script: echo "job1"
Example 2: lock
as an Object
main:
push:
- stages:
- name: stage1
lock:
key: key
expires: 600 # 10 minutes
wait: true
timeout: 60 # Maximum wait of 1 minute
jobs:
- name: job1
script: echo "job1"
image

- type: `Object` | `String`

Specifies the environment image for the current `Stage`. All tasks in this `Stage` execute in this image environment by default.

- `image.name`: `String`. Image name, e.g., `node:20`.
- `image.dockerUser`: `String`. Docker username used to pull the specified image.
- `image.dockerPassword`: `String`. Docker password used to pull the specified image.

If `image` is a string, it is equivalent to specifying `image.name`.

jobs

- type: `Array<Job>` | `Object<name, Job>`

Defines a group of tasks, executed sequentially or in parallel:

- If the value is an array (ordered), the `Job`s execute sequentially.
- If the value is an object (unordered), the `Job`s execute in parallel.
Job

`Job` is the most basic task execution unit. Jobs fall into three categories:

Built-in Tasks

- `type`: `String`. Specifies the built-in task to execute.
- `options`: `Object`. Specifies parameters for the built-in task.
- `optionsFrom`: `Array<String>` | `String`. Specifies local or Git repository file paths to load as built-in task parameters. As with `imports`, if `optionsFrom` is an array, duplicated parameters are overwritten by later files.

`options` fields have higher priority than `optionsFrom`.
Reference file permission control: Configuration File Reference Authentication.
Example:
name: install
type: INTERNAL_JOB_NAME
optionsFrom: ./options.json
options:
key1: value1
key2: value2
// ./options.json
{
"key1": "value1",
"key2": "value2"
}
Script Tasks
- name: install
script: npm install
- `script`: `Array<String>` | `String`. The `shell` script to execute. Arrays are joined with `&&` by default. If the `script` should run in its own environment rather than the pipeline's, specify the runtime environment via the `image` property.
- `image`: `String`. Specifies the runtime environment.
Example:
- name: install
image: node:20
script: npm install
A script task can be simplified to a string: `script` is the string and `name` is its first line:
- echo hello
Equivalent to:
- name: echo hello
script: echo hello
Plugin Tasks

Plugins are Docker images, so plugin tasks are also called image tasks.
Unlike the two task types above, plugin tasks offer more flexible execution environments and are easier to share within teams, companies, or even across CI systems.
A plugin task passes environment variables to the image's `ENTRYPOINT`, hiding the internal implementation.

Note: custom environment variables set via `imports`, `env`, etc. are not passed to plugins, but they can be used for variable substitution in `settings` or `args`. CNB system environment variables are still passed to plugins.
- `name`: `String`. The `Job` name.
- `image`: `String`. The full path of the image.
- `settings`: `Object`. Plugin task parameters; follow the documentation provided by the image. Environment variables can be referenced via `$VAR` or `${VAR}`.
- `settingsFrom`: `Array<String>` | `String`. Local or Git repository file paths to load as plugin task parameters.

Priority:

- Duplicated parameters are overwritten by later configurations.
- `settings` fields have higher priority than `settingsFrom`.

Reference file permission control: Configuration File Reference Authentication.
Example:
Restricting both `images` and `slugs`:
allow_slugs:
- a/b
allow_images:
- a/b
Restricting only `images`, not `slugs`:
allow_images:
- a/b
settingsFrom
can be written in Dockerfile
:
FROM node:20
LABEL cnb.cool/settings-from="https://cnb.cool/<your-repo-slug>/-/blob/main/xxx/settings.json"
Examples

With imports:
- name: npm publish
image: plugins/npm
imports: https://cnb.cool/<your-repo-slug>/-/blob/main/xxx/npm.json
settings:
username: $NPM_USER
password: $NPM_PASS
email: $NPM_EMAIL
registry: https://mirrors.xxx.com/npm/
folder: ./
{
"username": "xxx",
"password": "xxx",
"email": "xxx@emai.com",
"allow_slugs": ["cnb/**/**"],
"allow_images": ["plugins/npm"]
}
with settingsFrom:
- name: npm publish
image: plugins/npm
settingsFrom: https://cnb.cool/<your-repo-slug>/-/blob/main/xxx/npm-settings.json
settings:
# username: $NPM_USER
# password: $NPM_PASS
# email: $NPM_EMAIL
registry: https://mirrors.xxx.com/npm/
folder: ./
{
"username": "xxx",
"password": "xxx",
"email": "xxx@emai.com",
"allow_slugs": ["cnb/cnb"],
"allow_images": ["plugins/npm"]
}
name

- type: `String`

Specifies the `Job` name.

ifModify

- type: `Array<String>` | `String`

Same as Stage ifModify, but only effective for the current `Job`.

ifNewBranch

- type: `Boolean`
- default: `false`

Same as Stage ifNewBranch, but only effective for the current `Job`.

if

- type: `Array<String>` | `String`

Same as Stage if, but only effective for the current `Job`.

breakIfModify

- type: `Boolean`
- default: `false`

Same as Pipeline breakIfModify, but only effective for the current `Job`.

skipIfModify

- type: `Boolean`
- default: `false`

Skips the current `Job` if the source branch is updated before it executes.

env

- type: `Object`

Same as Stage env, but only effective for the current `Job`.

`Job` env has higher priority than `Pipeline` env and `Stage` env.
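A sketch of a `Job`-level override (the variable name and values are illustrative); the `Job` value wins over any `Pipeline`- or `Stage`-level `TEST_KEY`:

```yaml
- name: print env
  env:
    TEST_KEY: job-value
  script: echo $TEST_KEY
```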
imports

- type: `Array<String>` | `String`

Same as Stage imports, but only effective for the current `Job`.
exports
- type: Object
After the Job executes, a result object is generated. exports can export properties from result to environment variables, with a lifecycle of the current Pipeline.
See Environment Variables for details.
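As a hedged sketch of the idea: assuming the Job's result object exposes a property named version (a hypothetical name; the actual properties depend on the task), it could be exported for use by a later job in the same Pipeline:

```yaml
- name: resolve version
  script: echo 1.2.3
  exports:
    version: APP_VERSION
- name: use version
  script: echo $APP_VERSION
```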
timeout
- type: Number | String
Sets a timeout for a single task. Default is 1 hour, maximum is 12 hours.
Effective for script-job and image-job.
Also supports the following units:
- ms: Milliseconds (default)
- s: Seconds
- m: Minutes
- h: Hours
name: timeout job
script: sleep 1d
timeout: 100s # Task will timeout and exit after 100 seconds
See Timeout Strategy for details.
allowFailure
- type: Boolean | String
- default: false
If true, failure of this step does not affect subsequent execution or the final result.
If a String, environment variables can be read.
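For example, a non-blocking lint step, plus a variant where the flag comes from an environment variable (the job names, script, and ALLOW_LINT_FAIL are placeholders):

```yaml
- name: lint
  script: npm run lint
  allowFailure: true
- name: flaky check
  script: ./check.sh
  allowFailure: $ALLOW_LINT_FAIL
```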
lock
- type: Object | Boolean
Sets a lock for the Job. The lock is automatically released after the Job completes. Locks cannot be used across repositories.
Behavior: after task A acquires the lock, task B's request for the same lock must wait until it is released before acquiring it and continuing.
lock.key
- type: String
Custom lock name. Default is branch name-pipeline name-stage index-job name.
lock.expires
- type: Number
- default: 3600 (one hour)
Lock expiration time in seconds, after which the lock is automatically released.
lock.wait
- type: Boolean
- default: false
Whether to wait if the lock is occupied.
lock.timeout
- type: Number
- default: 3600 (one hour)
Specifies the timeout for waiting on the lock, in seconds.
If lock is true, then key, expires, timeout, cancel-in-progress, wait, and cancel-in-wait take their default values.
Example 1: lock as a Boolean
name: Lock
lock: true
script: echo 'job lock'
Example 2: lock as an Object
name: Lock
lock:
  key: key
  expires: 10
  wait: true
script: echo 'job lock'
retry
- type: Number
- default: 0
Number of retries on failure. 0 means no retries.
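For example, retrying a flaky network step up to three times before the Job is marked failed (the URL is a placeholder):

```yaml
- name: flaky download
  script: curl -fsSL https://example.com/artifact -o artifact
  retry: 3
```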
type
- type: String
Specifies the built-in task to execute.
options
- type: Object
Specifies parameters for the built-in task.
optionsFrom
- type: Array<String> | String
Specifies local or Git repository file paths to load as built-in task parameters. As with imports, if optionsFrom is an array, duplicate parameters are overwritten by later configurations.
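A hedged sketch of the merge behavior: with two option files, values in the later file override duplicates from the earlier one (the task type and file paths here are placeholders, not real built-in task names):

```yaml
- name: built-in task
  type: some:task
  optionsFrom:
    - ./options.base.json
    - ./options.override.json
```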
script
- type: Array<String> | String
Specifies the script to execute. Arrays are joined with &&. The script's exit code determines the Job's exit code.
Note: the default shell interpreter for the pipeline's base image is sh. Different images may use different interpreters.
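For example, the following array form is equivalent to joining the three commands with &&, so the Job stops at the first failing command (the commands are illustrative):

```yaml
- name: build
  script:
    - npm ci
    - npm run build
    - npm test
```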
commands
- type: Array<String> | String
Same as script, but with higher priority. Provided mainly for compatibility with Drone CI syntax.
image
- type: Object | String
Specifies the image to use as the current Job's execution environment, for docker image as env or docker image as plugins.
- image.name (String): Image name, e.g., node:20.
- image.dockerUser (String): Docker username for pulling the specified image.
- image.dockerPassword (String): Docker password for pulling the specified image.
If image is a string, it is equivalent to specifying image.name.
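For example, pulling a private image with credentials supplied via environment variables (DOCKER_USER and DOCKER_PASS are placeholder names):

```yaml
- name: build in container
  image:
    name: node:20
    dockerUser: $DOCKER_USER
    dockerPassword: $DOCKER_PASS
  script: node --version
```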
settings
- type: Object
Specifies parameters required for the plugin task. See Plugin Tasks for details.
settingsFrom
- type: Array<String> | String
Specifies local or Git repository file paths to load as plugin task parameters. As with imports, if settingsFrom is an array, duplicate parameters are overwritten by later configurations.
See Plugin Tasks for details.
args
- type: Array<String>
Specifies arguments passed to the image at run time, appended after the ENTRYPOINT. Only arrays are supported.
- name: npm publish
  image: plugins/npm
  args:
    - ls
Will execute:
docker run plugins/npm ls
Task Exit Codes
- 0: Task succeeds; execution continues.
- 78: Task succeeds but interrupts the current Pipeline. Can be used in custom scripts with exit 78 to interrupt the pipeline.
- Other: any other Number means the task fails and interrupts the current Pipeline.
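As an example of the exit 78 convention, a script job can stop the rest of the Pipeline without marking the build failed (the condition here is a placeholder):

```yaml
- name: skip rest if no changes
  script: |
    if [ -z "$(git diff --name-only HEAD~1)" ]; then
      echo "nothing to do, stopping pipeline"
      exit 78
    fi
```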