Pipeline Grammar
Basic Concepts
Trigger Branch: Corresponds to a branch of the code repository, specifying which branch the pipeline builds on.
Trigger Event: Specifies which event triggers the build; a single event can trigger multiple pipelines.
Pipeline: Represents a pipeline, containing one or more Stages, each Stage executed sequentially.
Stage: Represents a build stage, which can consist of one or more Jobs.
Job: The most basic task execution unit.
master: # Trigger branch
  push: # Trigger event; corresponds to a build and can contain multiple pipelines. Can be an array or an object.
    - name: pipeline-1 # Pipeline structure
      stages:
        - name: stage-1 # Stage structure
          jobs:
            - name: job-1 # Job structure
              script: echo
master: # Trigger branch
  push: # Trigger event; pipelines specified via an object
    pipeline-key:
      stages:
        - name: stage-1 # Stage structure
          jobs:
            - name: job-1 # Job structure
              script: echo
Trigger Branch
Matching Patterns
Branch names support glob pattern matching (What is glob matching?), for example:
.push_pipeline: &push_pipeline
  stages:
    - name: do something
      script: echo "do something"

# Match all branches starting with dev/
"dev/*":
  push:
    - *push_pipeline

# Match main or dev branches
"(main|dev)":
  push:
    - *push_pipeline

# Match all branches except main and dev
"**/!(main|dev)":
  push:
    - *push_pipeline

# Match all branches
"**":
  push:
    - *push_pipeline

# Fallback: match branches not matched by any glob pattern
"$":
  push:
    - *push_pipeline
  tag_push:
    - *push_pipeline
Matching Strategy
Matching proceeds in phases by priority; the next phase is attempted only if no match is found in the current one:
1. glob patterns match against branch names.
2. The $ fallback matches all branches not matched by any glob pattern.
If multiple glob rules match, all matched pipelines execute in parallel.
Trigger Event
The following lists the events supported by Cloud Native Build.
Branch Events
Events triggered by changes to remote code branches.
push
Triggered when a branch is pushed.
branch.create
Triggered when a branch is created; this also triggers the push event.
branch.delete
Triggered when a branch is deleted.
Pipeline configurations can be attached to branch names or to $.
The pipeline uses the configuration file from the default branch (reason: the branch has already been deleted by the time the pipeline runs).
Example:
dev/1:
  branch.delete:
    - stages:
        - name: echo
          # CNB_BRANCH value is the deleted branch
          script: echo $CNB_BRANCH
$:
  branch.delete:
    - stages:
        - name: echo
          # CNB_BRANCH value is the deleted branch
          script: echo $CNB_BRANCH
Pull Request Events
Events triggered by operations on a remote Pull Request (referred to as PR below).
pull_request
Triggered when a PR is created or reopened, or when its source branch is pushed. The difference from pull_request.target is explained in Configuration File.
pull_request.update
Triggered when a PR is created or reopened, its source branch is pushed, or its title or description changes.
pull_request is a subset of pull_request.update, i.e., changes to a PR's title or description trigger pull_request.update but not pull_request. PR creation, reopening, or a push to the source branch triggers both pull_request and pull_request.update.
pull_request.target
Triggered when a PR is created or reopened, or when its source branch is pushed. The difference from pull_request is explained in Configuration File.
pull_request.mergeable
Triggered when an open PR meets the following conditions:
- The target branch is a protected branch, with the following rules checked:
  - Requires reviewer approval.
  - Must pass status checks (optional).
- Mergeable:
  - No code conflicts.
  - Status checks passed (if required).
  - Review approved.
pull_request.merged
Triggered when a PR is successfully merged.
Tips
Merging branch a into branch b triggers the pull_request.merged and push events for branch b.
pull_request.approved
Triggered when a user reviews a PR and approves merging.
Tips
If the protected branch requires approval from multiple reviewers, a single user's approval does not mean the PR is in an approved state.
pull_request.changes_requested
Triggered when a user reviews a PR and requests changes.
pull_request.comment
Triggered when a comment is created on a PR.
Tag Events
Events triggered by Tag operations in the remote repository or on the page.
tag_push
Triggered when a Tag is pushed.
Example:
# For specific tags
v1.0.*:
  tag_push:
    - stages:
        - name: echo tag name
          script: echo $CNB_BRANCH
# For all tags
$:
  tag_push:
    - stages:
        - name: echo tag name
          script: echo $CNB_BRANCH
auto_tag
Automatically generated Tag events.
Trigger Method
Only triggered by clicking the Auto Generate Tag button on the repository's Tag list page.
Implementation Principle
Starts a pipeline that, by default, uses the cnbcool/git-auto-tag plugin to automatically generate a Tag.
Format Description
Generated Tags default to a format like 3.8.11. If the latest Tag starts with v, the generated Tag also carries the v prefix, e.g., v4.1.9.
Custom auto_tag Event Pipeline
Users can add a .cnb.yml file in the repository root with the following configuration to override the default template:
# User-defined .cnb.yml configuration
main: # Default branch; replace with the repository's actual default branch
  auto_tag: # Event name
    - stages:
        - name: auto tag
          image: "cnbcool/git-auto-tag:latest"
          settings:
            tagFormat: 'v\${version}'
            branch: $CNB_BRANCH
            repoUrlHttps: $CNB_REPO_URL_HTTPS
          exports:
            tag: NEW_TAG
        - name: show tag
          script: echo $NEW_TAG
tag_deploy.*
Events triggered by the Deploy button on the repository's Tag/Release page.
Refer to Deployment for details.
Issue Events
Events triggered by operations on an Issue.
Issue event pipeline configurations must be attached to $.
issue.open
Triggered when an Issue is created.
issue.close
Triggered when an Issue is closed.
issue.reopen
Triggered when an Issue is reopened.
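A minimal sketch (the script is illustrative); note that, as required above, the configuration attaches to $:
$:
  issue.open:
    - stages:
        - name: greet
          script: echo "an issue was opened"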
api_trigger Custom Events
Event names are api_trigger or start with api_trigger_, such as api_trigger_test.
API custom events can be triggered in the following three ways:
- Triggered by cnb:apply
- Triggered by cnb:trigger
- Triggered by OPENAPI
Methods 1 and 2 are wrappers around method 3.
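As a sketch (the event name api_trigger_test follows the naming rule above; the script is illustrative), a pipeline listening for a custom API event:
main:
  api_trigger_test:
    - stages:
        - name: echo
          script: echo "triggered via API"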
web_trigger Custom Events
Event names are web_trigger or start with web_trigger_, such as web_trigger_test.
Can only be triggered from the page.
Use Cases:
- Can be combined with Deployment capabilities.
- Custom buttons on the page; see Custom Buttons.
- Manually triggered builds (supports entering environment variables; only triggers web_trigger events).
Scheduled Task Events
Events triggered by scheduled tasks.
Refer to Scheduled Tasks for details.
Workspaces Events
Events triggered by clicking the Workspaces button on the page.
Refer to Workspaces for details.
Pipeline
Pipeline represents a pipeline, containing one or more Stages, each executed sequentially.
A basic Pipeline configuration looks like this:
name: Pipeline name
docker:
  image: node
  build: dev/Dockerfile
  volumes:
    - /root/.npm:copy-on-write
git:
  enable: true
  submodules: true
  lfs: true
services:
  - docker
env:
  TEST_KEY: TEST_VALUE
imports:
  - https://cnb.cool/<your-repo-slug>/-/blob/main/xxx/envs.yml
  - ./env.txt
label:
  type: MASTER
  class: MAIN
stages:
  - name: stage 1
    script: echo "stage 1"
  - name: stage 2
    script: echo "stage 2"
  - name: stage 3
    script: echo "stage 3"
failStages:
  - name: fail stage 1
    script: echo "fail stage 1"
  - name: fail stage 2
    script: echo "fail stage 2"
endStages:
  - name: end stage 1
    script: echo "end stage 1"
  - name: end stage 2
    script: echo "end stage 2"
ifModify:
  - a.txt
  - "src/**/*"
retry: 3
allowFailure: false
name
- type: String
Specifies the pipeline name; the default is pipeline. When there are multiple parallel pipelines, the default names are pipeline, pipeline-1, pipeline-2, and so on. Define name to distinguish different pipelines.
runner
- type: Object
Specifies parameters related to the build node.
tags: Optional; specifies which tags the build node should have.
cpus: Optional; specifies the number of CPU cores to use for the build.
tags
- type: String | Array<String>
- default: cnb:arch:default
Specifies which tags the build node should have. See Build Node for details.
Example:
main:
  push:
    - runner:
        tags: cnb:arch:amd64
      stages:
        - name: uname
          script: uname -a
cpus
- type: Number
Specifies the maximum number of CPU cores for the build (memory = CPU cores × 2 GB); CPU and memory cannot exceed the actual size of the runner machine.
If not configured, the maximum available CPU cores are determined by the runner machine's configuration.
Example:
# cpus = 1, memory = 2G
main:
  push:
    - runner:
        cpus: 1
      stages:
        - name: echo
          script: echo "hello world"
docker
- type: Object
Specifies docker-related parameters. See Build Environment for details.
image: The environment image for the current Pipeline. All tasks under this Pipeline execute in this image environment.
build: Specifies a Dockerfile used to build a temporary image, which is then used as the value for image.
volumes: Declares data volumes, for caching scenarios.
image
- type: Object | String
Specifies the environment image for the current Pipeline. All tasks under this Pipeline execute in this image environment. Environment variable references are supported.
image.name: String. Image name, e.g., node:20.
image.dockerUser: String. Docker username for pulling the specified image.
image.dockerPassword: String. Docker password for pulling the specified image.
If image is specified as a string, it is equivalent to specifying image.name.
Example 1, using a public image:
main:
  push:
    - docker:
        # Use the node:20 image from the official Docker registry as the build container
        image: node:20
      stages:
        - name: show version
          script: node -v
Example 2, using a private image from CNB Artifact:
main:
  push:
    - docker:
        # Use a non-public image as the build container, requiring a Docker username and password
        image:
          name: docker.cnb.cool/images/pipeline-env:1.0
          # Use environment variables injected by default during CI builds
          dockerUser: $CNB_TOKEN_USER_NAME
          dockerPassword: $CNB_TOKEN
      stages:
        - name: echo
          script: echo "hello world"
Example 3, using a private image from the official Docker repository:
main:
  push:
    - imports: https://cnb.cool/<your-repo-slug>/-/blob/main/xxx/docker.yml
      docker:
        # Use a non-public image as the build container, requiring a Docker username and password
        image:
          name: images/pipeline-env:1.0
          # Environment variables imported from docker.yml
          dockerUser: $DOCKER_USER
          dockerPassword: $DOCKER_PASSWORD
      stages:
        - name: echo
          script: echo "hello world"
docker.yml
DOCKER_USER: user
DOCKER_PASSWORD: password
build
- type: Object | String
Specifies a Dockerfile used to build a temporary image, which is then used as the value for image. Environment variable references are supported.
For a complete example of declaring a build environment using build, please refer to docker-build-with-by.
Below is an explanation of each parameter under build:
build.dockerfile:
- type: String
Path to the Dockerfile.
build.target:
- type: String
Corresponds to the --target parameter of docker build, allowing a specific stage of the Dockerfile to be built instead of the entire Dockerfile.
build.by:
- type: Array<String> | String
Declares the list of files the build process depends on, for caching. Note: files not listed in by are treated as non-existent during the image build, except for the Dockerfile. If a String, separate multiple files with commas.
build.versionBy:
- type: Array<String> | String
Used for version control. Changes to the content of the specified files produce a new version. The version is calculated as sha1(dockerfile + versionBy + buildArgs). If a String, separate multiple files with commas.
build.buildArgs:
- type: Object
Inserts additional build arguments during the build (--build-arg $key=$value). If a value is null, only the key is added (--build-arg $key).
build.ignoreBuildArgsInVersion:
- type: Boolean
Whether to ignore buildArgs in the version calculation. See versionBy.
build.sync:
- type: String
Whether to wait for docker push to complete before continuing. Default is false.
If build is specified as a string, it is equivalent to specifying build.dockerfile.
Dockerfile Usage:
main:
  push:
    - docker:
        # Specify the build environment via a Dockerfile
        build: ./image/Dockerfile
      stages:
        - stage1
        - stage2
        - stage3

main:
  push:
    - docker:
        # Specify the build environment via a Dockerfile
        build:
          dockerfile: ./image/Dockerfile
          dockerImageName: cnb.cool/project/images/pipeline-env
          dockerUser: $DOCKER_USER
          dockerPassword: $DOCKER_PASSWORD
      stages:
        - stage1
        - stage2
        - stage3

main:
  push:
    - docker:
        # Specify the build environment via a Dockerfile
        build:
          dockerfile: ./image/Dockerfile
          # Only build the builder stage, not the entire Dockerfile
          target: builder
      stages:
        - stage1
        - stage2
        - stage3
Dockerfile versionBy Usage:
Example: cache pnpm's store in the environment image to speed up subsequent pnpm i runs.
main:
  push:
    # Specify the build environment via a Dockerfile
    - docker:
        build:
          dockerfile: ./Dockerfile
          versionBy:
            - package-lock.json
      stages:
        - name: pnpm i
          script: pnpm i
        - stage1
        - stage2
        - stage3
FROM node:20
# Replace with the actual source
RUN npm config set registry https://xxx.com/npm/ \
    && npm i -g pnpm \
    && pnpm config set store-dir /lib/pnpm
WORKDIR /data/orange-ci/workspace
COPY package.json package-lock.json ./
RUN pnpm i
[pnpm i] Progress: resolved 445, reused 444, downloaded 0, added 100
[pnpm i] Progress: resolved 445, reused 444, downloaded 0, added 141
[pnpm i] Progress: resolved 445, reused 444, downloaded 0, added 272
[pnpm i] Progress: resolved 445, reused 444, downloaded 0, added 444, done
[pnpm i]
[pnpm i] dependencies:
[pnpm i] + mocha 8.4.0 (10.0.0 is available)
[pnpm i]
[pnpm i] devDependencies:
[pnpm i] + babel-eslint 9.0.0 (10.1.0 is available)
[pnpm i] + eslint 5.16.0 (8.23.0 is available)
[pnpm i] + jest 27.5.1 (29.0.2 is available)
[pnpm i]
[pnpm i]
[pnpm i] Job finished, duration: 6.8s
volumes
- type: Array<String> | String
Declares data volumes. Multiple volumes can be passed as an array or separated by commas; environment variable references are supported. Supported formats:
<group>:<path>:<type>
<path>:<type>
<path>
Meanings:
group: Optional; volume group. Different groups are isolated from each other.
path: Required; the path to mount the volume at. Supports absolute paths (starting with /) or relative paths (starting with ./), relative to the workspace.
type: Optional; volume type. Default is copy-on-write. Supported types:
- read-write or rw: Read-write. Concurrent write conflicts must be handled manually. Suitable for serial build scenarios.
- read-only or ro: Read-only. Write operations throw exceptions.
- copy-on-write or cow: Read-write. Changes (add, modify, delete) are merged after a successful build. Suitable for concurrent build scenarios.
- copy-on-write-read-only: Read-only. Changes (add, modify, delete) are discarded after the build ends.
- data: Creates a temporary data volume that is automatically cleaned up after the pipeline ends.
copy-on-write
Used for caching scenarios, supports concurrency.
copy-on-write
technology allows the system to share the same data copy until modifications are needed, enabling efficient cache replication. In concurrent environments, this method avoids read-write conflicts because private copies of data are only created when modifications are actually needed. Thus, only write operations cause data replication, while read operations can safely proceed in parallel without worrying about data consistency. This mechanism significantly improves performance, especially in read-heavy caching scenarios.
data
Used for data sharing: shares specified directories in the container with other containers.
This works by creating a data volume and mounting it into each container. Unlike mounting a directory from the build node directly into the container, if the specified directory already exists in the container, its contents are automatically copied into the data volume instead of the container directory being overwritten.
volumes Examples
Example 1: Mount directories from the build node into the container for local caching
main:
  push:
    - docker:
        image: node:20
        # Declare data volumes
        volumes:
          - /data/config:read-only
          - /data/mydata:read-write
          # Use the cache and update it simultaneously
          - /root/.npm
          # Use the main cache and update it simultaneously
          - main:/root/.gradle:copy-on-write
      stages:
        - stage1
        - stage2
        - stage3
  pull_request:
    - docker:
        image: node:20
        # Declare data volumes
        volumes:
          - /data/config:read-only
          - /data/mydata:read-write
          # Use copy-on-write caches
          - /root/.npm
          - node_modules
          # PRs use the main cache but do not update it
          - main:/root/.gradle:copy-on-write-read-only
      stages:
        - stage1
        - stage2
        - stage3
Example 2: Share files packaged in the container with other containers
# .cnb.yml
main:
  push:
    - docker:
        image: go-app-cli # Assume a Go application is at /go-app/cli in the image
        # Declare data volumes
        volumes:
          # This path exists in the go-app-cli image, so when the environment image runs, its contents are copied to a temporary data volume for sharing with other task containers
          - /go-app
      stages:
        - name: show /go-app-cli in job container
          image: alpine
          script: ls /go-app
git
- type: Object
Provides Git repository-related configuration.
git.enable
- type: Boolean
- default: true
Specifies whether to fetch the code.
For branch.delete events, the default is false; for other events, the default is true.
git.submodules
- type: Object | Boolean
- default: true
Specifies whether to fetch submodules.
An Object can be used to specify detailed parameters. If fields are omitted, the defaults are:
{
  "enable": true,
  "remote": false
}
Basic Usage:
main:
  push:
    - git:
        enable: true
        submodules: true
      stages:
        - name: echo
          script: echo "hello world"
    - git:
        enable: true
        submodules:
          enable: true
          remote: true
      stages:
        - name: echo
          script: echo "hello world"
git.submodules.enable
Specifies whether to fetch submodules.
git.submodules.remote
Whether to add the --remote parameter when executing git submodule update, so that the latest submodule code is fetched each time.
git.lfs
- type: Object | Boolean
- default: true
Specifies whether to fetch LFS files.
An Object can be used to specify detailed parameters. If fields are omitted, the defaults are:
{
  "enable": true
}
Basic Usage:
main:
  push:
    - git:
        enable: true
        lfs: true
      stages:
        - name: echo
          script: echo "hello world"
    - git:
        enable: true
        lfs:
          enable: true
      stages:
        - name: echo
          script: echo "hello world"
git.lfs.enable
Specifies whether to fetch LFS files.
services
- type: Array<String>
Declares services required during the build, in the format name:[version], where version is optional.
Currently supported services:
- docker
- vscode
service:docker
Enables the dind service.
When operations like docker build or docker login are needed during the build, declare this service to automatically inject the docker daemon and docker cli into the environment.
Example:
main:
  push:
    - services:
        - docker
      docker:
        image: alpine
      stages:
        - name: docker info
          script:
            - docker info
            - docker ps
This service automatically logs into the CNB Docker Artifact registry (docker.cnb.cool), so subsequent tasks can docker push directly to the current repository's Docker Artifact.
Example:
main:
  push:
    - services:
        - docker
      stages:
        - name: build and push
          script: |
            # A Dockerfile exists in the root directory
            docker build -t ${CNB_DOCKER_REGISTRY}/${CNB_REPO_SLUG_LOWERCASE}:latest .
            docker push ${CNB_DOCKER_REGISTRY}/${CNB_REPO_SLUG_LOWERCASE}:latest
service:vscode
Declare this service when remote development is needed.
Example:
$:
  vscode:
    - services:
        - vscode
        - docker
      docker:
        image: alpine
      stages:
        - name: uname
          script: uname -a
env
- type: Object
Specifies environment variables: defines a set of variables for use during task execution. Effective for all non-plugin tasks in the current Pipeline.
imports
- type: Array<String> | String
Specifies paths to CNB Git repository files to read as a source of environment variables. Local paths such as ./env.yml are resolved into remote file addresses before loading.
Typically used to store account credentials for services like npm or docker in a secret repository.
Example:
# env.yml
DOCKER_USER: "username"
DOCKER_TOKEN: "token"
DOCKER_REGISTRY: "https://xxx/xxx"

# .cnb.yml
main:
  push:
    - services:
        - docker
      imports:
        - https://cnb.cool/<your-repo-slug>/-/blob/main/xxx/env.yml
      stages:
        - name: docker push
          script: |
            docker login -u ${DOCKER_USER} -p "${DOCKER_TOKEN}" ${CNB_DOCKER_REGISTRY}
            docker build -t ${DOCKER_REGISTRY}/${CNB_REPO_SLUG_LOWERCASE}:latest .
            docker push ${DOCKER_REGISTRY}/${CNB_REPO_SLUG_LOWERCASE}:latest
Note: Not effective for plugin tasks.
Cloud Native Build now supports secret repositories, which are more secure and support auditing of file references.
Supported file formats:
- yaml: Parses files with .yml or .yaml extensions.
- json: Parses files with .json extensions.
- plain: Each line is in key=value format. All other extensions are parsed this way. (Not recommended)
Priority for duplicate keys:
- When imports is an array, duplicate parameters are overwritten by later configurations.
- If a parameter duplicates one in env, the env parameter overrides the one from the imports file.
Variable Assignment
Paths in imports can reference environment variables. In an array, later file paths can reference variables defined in earlier files.
// env1.json
{
  "FILE": "https://cnb.cool/<your-repo-slug>/-/blob/main/xxx/env2.yml"
}

# env2.yml
TEST_TOKEN: some token

main:
  push:
    - imports:
        - ./env1.json
        - $FILE
      stages:
        - name: echo
          script: echo $TEST_TOKEN
Referenced files can declare accessible scopes. See Configuration File Reference Authentication.
Example: team_name/project_name/*, matching all repositories under a project:
key: value
allow_slugs:
  - team_name/project_name/*

Allow references from all repositories:
key: value
allow_slugs:
  - "**"
Most configuration files are simple single-layer objects, such as:
// env.json
{
  "token": "private token",
  "password": "private password"
}
To handle complex configuration files and scenarios, imports supports nested objects. If the object parsed from the imported file contains deep properties (the first layer cannot be an array), it is flattened into a single-layer object with the following rules:
- Property names are retained, and property values are converted to strings.
- If a property value is an object (including arrays), it is recursively flattened, with property paths joined by _.
// env.json
{
  "key1": [
    "value1",
    "value2"
  ],
  "key2": {
    "subkey1": [
      "value3",
      "value4"
    ],
    "subkey2": "value5"
  },
  "key3": [
    "value6",
    {
      "subsubkey1": "value7"
    }
  ],
  "key4": "value8"
}
Will be flattened into:
{
  // Original property values converted to strings
  "key1": "value1,value2",
  // Object property values are additionally flattened recursively into extra properties
  "key1_0": "value1",
  "key1_1": "value2",
  "key2": "[object Object]",
  "key2_subkey1": "value3,value4",
  "key2_subkey1_0": "value3",
  "key2_subkey1_1": "value4",
  "key2_subkey2": "value5",
  "key3": "value6,[object Object]",
  "key3_0": "value6",
  "key3_1": "[object Object]",
  "key3_1_subsubkey1": "value7",
  "key4": "value8"
}
main:
  push:
    - imports:
        - ./env.json
      stages:
        - name: echo
          script: echo $key3_1_subsubkey1
label
- type: Object
Assigns labels to the pipeline. Each label value can be a string or an array of strings. Labels can be used later for filtering pipeline records and other functions.
Example workflow: pushes to the main branch release to the pre-release environment, and tag pushes release to the production environment.
main:
  push:
    - label:
        # Regular pipeline for the master branch
        type:
          - MASTER
          - PREVIEW
      stages:
        - name: install
          script: npm install
        - name: CCK-lint
          script: npm run lint
        - name: BVT-build
          script: npm run build
        - name: UT-test
          script: npm run test
        - name: pre release
          script: ./pre-release.sh
$:
  tag_push:
    - label:
        # Regular pipeline for the product release branch
        type: RELEASE
      stages:
        - name: install
          script: npm install
        - name: build
          script: npm run build
        - name: DELIVERY-release
          script: ./release.sh
stages
- type: Array<Stage|Job>
Defines a group of stage tasks, executed sequentially.
failStages
- type: Array<Stage|Job>
Defines a group of failure-stage tasks, executed sequentially when the normal flow fails.
endStages
- type: Array<Stage|Job>
Defines a group of tasks executed at the end of the pipeline: after stages/failStages complete, these tasks run sequentially before the pipeline ends.
If the pipeline's prepare stage succeeds, endStages executes regardless of whether stages succeed. The outcome of endStages does not affect the pipeline status (i.e., endStages can fail while the pipeline status is success).
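A minimal sketch of the three groups together (the scripts are illustrative); notify-end runs whether build succeeds or fails:
main:
  push:
    - stages:
        - name: build
          script: ./build.sh
      failStages:
        - name: report failure
          script: echo "build failed"
      endStages:
        - name: notify-end
          script: echo "pipeline finished"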
ifNewBranch
- type: Boolean
- default: false
If true, the Pipeline executes only if the current branch is new (i.e., CNB_IS_NEW_BRANCH is true).
If both ifNewBranch and ifModify are present, the Pipeline executes if either condition is met.
ifModify
- type: Array<String> | String
Specifies that the Pipeline executes only if the specified files were modified. A glob expression string or array of strings.
Example 1:
Executes if the modified file list includes a.js or b.js:
ifModify:
  - a.js
  - b.js
Example 2:
Executes if the modified file list includes files with the js extension. **/*.js matches all js files in subdirectories; *.js matches all js files in the root directory.
ifModify:
  - "**/*.js"
  - "*.js"
Example 3:
Inverse matching: excludes the legacy directory and all Markdown files, triggering on changes to any other file.
ifModify:
  - "**"
  - "!(legacy/**)"
  - "!(**/*.md)"
  - "!*.md"
Example 4:
Inverse matching: triggers on changes in the src directory except src/legacy.
ifModify:
  - "src/**"
  - "!(src/legacy/**)"
Supported Events
- push events for non-new branches, comparing before and after to compute the modified files.
- Events triggered via cnb:apply in push event pipelines for non-new branches, following the same rules for computing modified files.
- Events triggered by a PR, counting the files modified in the PR.
- Events triggered via cnb:apply in PR-triggered events, counting the files modified in the PR.
Because file changes can be numerous, the modified-file computation is limited to a maximum of 300 files.
breakIfModify
- type: Boolean
- default: false
Terminates the build if the source branch is updated before the Job executes.
skipIfModify
- type: Boolean
- default: false
Skips the current Job if the source branch is updated before it executes.
retry
- type: Number
- default: 0
Number of retries on failure; 0 means no retries.
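A minimal sketch (the script is illustrative):
main:
  push:
    - retry: 3 # re-run up to 3 times on failure
      stages:
        - name: flaky step
          script: ./flaky-test.sh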
allowFailure
- type: Boolean
- default: false
Whether the current pipeline is allowed to fail.
When set to true, the pipeline's failure status is not reported to CNB.
lock
- type: Object | Boolean
Sets a lock for the pipeline. The lock is automatically released after the pipeline completes. Locks cannot be used across repositories.
Behavior: after pipeline A acquires the lock and pipeline B requests it, B can either terminate A, or wait for A to release the lock before acquiring it and continuing.
key:
- type: String
Custom lock name. Default is <branch name>-<pipeline name>, meaning the lock's scope is the current pipeline.
expires:
- type: Number
- default: 3600 (one hour)
Lock expiration time in seconds; the lock is automatically released when it expires.
timeout:
- type: Number
- default: 3600 (one hour)
Timeout for waiting for the lock, in seconds.
cancel-in-progress:
- type: Boolean
- default: false
Whether to terminate pipelines occupying or waiting for the lock, so the current pipeline can acquire the lock and execute.
wait:
- type: Boolean
- default: false
Whether to wait if the lock is occupied (waiting consumes no pipeline resources or time). If false, an error is thrown immediately. Cannot be used together with cancel-in-progress.
cancel-in-wait:
- type: Boolean
- default: false
Whether to terminate pipelines waiting for the lock, so the current pipeline can join the lock queue. Requires the wait property.
If lock is true, key, expires, timeout, cancel-in-progress, wait, and cancel-in-wait take their default values.
Example 1: lock as a Boolean
main:
  push:
    - lock: true
      stages:
        - name: stage1
          script: echo "stage1"
Example 2: lock as an Object
main:
  push:
    - lock:
        key: key
        expires: 600 # 10 minutes
        wait: true
        timeout: 60 # Wait at most 1 minute
      stages:
        - name: stage1
          script: echo "stage1"
Example 3: Terminate the currently running pipeline under pull_request
main:
  pull_request:
    - lock:
        key: pr
        cancel-in-progress: true
      stages:
        - name: echo hello
          script: echo "stage1"
Stage
- type: Job | Object<name: Job>
Stage represents a build stage, which can consist of one or more Jobs. See Job Introduction.
Single Job
If a Stage has only one Job, the Stage wrapper can be omitted and the Job written directly.
stages:
  - name: stage1
    jobs:
      - name: job A
        script: echo hello
Can be simplified to:
stages:
  - name: job A
    script: echo hello
When a Job is a string, it is treated as a script task, with both name and script set to that string. Further simplified:
stages:
  - echo hello
Serial Jobs
When the value is an array (ordered), the Jobs in the group execute sequentially.
# Serial
stages:
  - name: install
    jobs:
      - name: job1
        script: echo "job1"
      - name: job2
        script: echo "job2"
Parallel Jobs
When the value is an object (unordered), the Jobs in the group execute in parallel.
# Parallel
stages:
  - name: install
    jobs:
      job1:
        script: echo "job1"
      job2:
        script: echo "job2"
Jobs can be organized flexibly in serial and parallel. Example of serial, then parallel, then serial:
main:
  push:
    - stages:
        - name: serial first
          script: echo "serial"
        - name: parallel
          jobs:
            parallel job 1:
              script: echo "1"
            parallel job 2:
              script: echo "2"
        - name: serial next
          script: echo "serial next"
name
- type: String
Stage name.
ifNewBranch
- type: Boolean
- default: false
If true, the Stage executes only if the current branch is new (i.e., CNB_IS_NEW_BRANCH is true).
If any of the ifNewBranch, ifModify, or if conditions is met, the Stage executes.
ifModify
- type: Array<String> | String
Specifies that the Stage executes only if the specified files were modified. A glob expression string or array of strings.
if
- type: Array<String> | String
One or more shell scripts whose exit code determines whether the Stage executes. If the exit code is 0, the Stage executes.
Example 1: Check the value of a variable
main:
  push:
    - env:
        IS_NEW: true
      stages:
        - name: is new
          if: |
            [ "$IS_NEW" = "true" ]
          script: echo is new
        - name: is not new
          if: |
            [ "$IS_NEW" != "true" ]
          script: echo not new
Example 2: Check the output of a task
main:
  push:
    - stages:
        - name: make info
          script: echo 'haha'
          exports:
            info: RESULT
        - name: run if RESULT is haha
          if: |
            [ "$RESULT" = "haha" ]
          script: echo $RESULT
env
- type: Object
Same as Pipeline env, but only effective for the current Stage.
Stage env has higher priority than Pipeline env.
imports
- type: Array<String> | String
Same as Pipeline imports, but only effective for the current Stage.
retry
- type: Number
- default: 0
Number of retries on failure; 0 means no retries.
lock
- type: Boolean | Object
Sets a lock for the Stage. The lock is automatically released after the Stage completes. Locks cannot be used across repositories.
Behavior: after task A acquires the lock, task B requesting it must wait for the lock to be released before acquiring it and continuing.
lock.key
- type: String
Custom lock name. Default is <branch name>-<pipeline name>-<stage index>.
lock.expires
- type: Number
- default: 3600 (one hour)
Lock expiration time in seconds; the lock is automatically released when it expires.
lock.wait
- type: Boolean
- default: false
Whether to wait if the lock is occupied.
lock.timeout
- type: Number
- default: 3600 (one hour)
Timeout for waiting for the lock, in seconds.
If lock is true, key, expires, timeout, cancel-in-progress, wait, and cancel-in-wait take their default values.
Example 1: lock as a Boolean
main:
  push:
    - stages:
        - name: stage1
          lock: true
          jobs:
            - name: job1
              script: echo "job1"
Example 2: lock as an Object
main:
  push:
    - stages:
        - name: stage1
          lock:
            key: key
            expires: 600 # 10 minutes
            wait: true
            timeout: 60 # Wait at most 1 minute
          jobs:
            - name: job1
              script: echo "job1"
image
- type: Object | String
Specifies the environment image for the current Stage. All tasks in this Stage execute in this image environment by default.
image.name: String. Image name, e.g., node:20.
image.dockerUser: String. Docker username for pulling the specified image.
image.dockerPassword: String. Docker password for pulling the specified image.
If image is a string, it is equivalent to specifying image.name.
jobs
- type: Array<Job> | Object<name, Job>
Defines a group of tasks executed sequentially or in parallel:
- If the value is an array (ordered), the Jobs execute sequentially.
- If the value is an object (unordered), the Jobs execute in parallel.
Job
Job is the most basic task execution unit, divided into three categories:
Built-in Tasks
type:
- type: String
Specifies the built-in task to execute.
options:
- type: Object
Specifies parameters for the built-in task.
optionsFrom:
- type: Array<String> | String
Specifies local or Git repository file paths to load as built-in task parameters. As with imports, if optionsFrom is an array, duplicate parameters are overwritten by later configurations.
options fields have higher priority than optionsFrom.
Reference file permission control: Configuration File Reference Authentication.
Example:
name: install
type: INTERNAL_JOB_NAME
optionsFrom: ./options.json
options:
  key1: value1
  key2: value2

// ./options.json
{
  "key1": "value1",
  "key2": "value2"
}
Script Tasks
- name: install
  script: npm install
script:
- type: Array<String> | String
Specifies the shell script to execute. Arrays are joined with && by default.
If the script should run in its own environment rather than the pipeline's environment, specify the runtime environment via the image property.
image:
- type: String
Specifies the runtime environment.
Example:
- name: install
  image: node:20
  script: npm install
Script tasks can be simplified to a string, where script is the string and name is its first line:
- echo hello
Equivalent to:
- name: echo hello
  script: echo hello
Plugin Tasks
Plugins are Docker images, so plugin tasks are also called image tasks.
Unlike the other two types, plugin tasks offer more flexible execution environments and are easier to share within teams, companies, or even across CI systems.
Plugin tasks pass parameters to the image's ENTRYPOINT as environment variables, hiding internal implementation details.
Note: Custom environment variables set via imports, env, etc. are not passed to plugins, but they can be used in settings or args for variable substitution. CNB system environment variables are still passed to plugins.
name:
- type: String
Specifies the Job name.
image:
- type: String
The full path of the image.
settings:
- type: Object
Specifies plugin task parameters; follow the documentation provided by the image. Environment variables can be referenced via $VAR or ${VAR}.
settingsFrom:
- type: Array<String> | String
Specifies local or Git repository file paths to load as plugin task parameters.
Priority:
- Duplicate parameters are overwritten by later configurations.
- settings fields have higher priority than settingsFrom.
Reference file permission control: Configuration File Reference Authentication.
Example:
Restricting both images and slugs:
allow_slugs:
  - a/b
allow_images:
  - a/b
Restricting only images, not slugs:
allow_images:
  - a/b
settingsFrom can also be declared in a Dockerfile:
FROM node:20
LABEL cnb.cool/settings-from="https://cnb.cool/<your-repo-slug>/-/blob/main/xxx/settings.json"
Examples
with imports:
- name: npm publish
  image: plugins/npm
  imports: https://cnb.cool/<your-repo-slug>/-/blob/main/xxx/npm.json
  settings:
    username: $NPM_USER
    password: $NPM_PASS
    email: $NPM_EMAIL
    registry: https://mirrors.xxx.com/npm/
    folder: ./
{
  "username": "xxx",
  "password": "xxx",
  "email": "xxx@email.com",
  "allow_slugs": ["cnb/**/**"],
  "allow_images": ["plugins/npm"]
}
with settingsFrom:
- name: npm publish
  image: plugins/npm
  settingsFrom: https://cnb.cool/<your-repo-slug>/-/blob/main/xxx/npm-settings.json
  settings:
    # username: $NPM_USER
    # password: $NPM_PASS
    # email: $NPM_EMAIL
    registry: https://mirrors.xxx.com/npm/
    folder: ./
{
  "username": "xxx",
  "password": "xxx",
  "email": "xxx@email.com",
  "allow_slugs": ["cnb/cnb"],
  "allow_images": ["plugins/npm"]
}
name
- type: String
Specifies the Job name.
ifModify
- type: Array<String> | String
Same as Stage ifModify. Only effective for the current Job.
ifNewBranch
- type: Boolean
- default: false
Same as Stage ifNewBranch. Only effective for the current Job.
if
- type: Array<String> | String
Same as Stage if. Only effective for the current Job.
breakIfModify
- type: Boolean
- default: false
Same as Pipeline breakIfModify. Only effective for the current Job.
skipIfModify
- type: Boolean
- default: false
Skips the current Job if the source branch is updated before it executes.
env
- type: Object
Same as Stage env, but only effective for the current Job.
Job env has higher priority than Pipeline env and Stage env.
imports
- type: Array<String> | String
Same as Stage imports, but only effective for the current Job.
exports
- type: Object
After a Job executes, a result object is generated. exports can export properties of result to environment variables, whose lifecycle is the current Pipeline.
See Environment Variables for details.
timeout
- type: Number | String
Sets a timeout for a single task. Default is 1 hour; maximum is 12 hours.
Effective for script-job and image-job.
The following units are supported:
- ms: Milliseconds (default)
- s: Seconds
- m: Minutes
- h: Hours
name: timeout job
script: sleep 1d
timeout: 100s # The task times out and exits after 100 seconds
See Timeout Strategy for details.
allowFailure
- type: Boolean | String
- default: false
If true, failure of this step affects neither subsequent execution nor the final result.
If a String, environment variables can be read from it.
lock
- type: Object | Boolean
Sets a lock for the Job. The lock is automatically released after the Job completes. Locks cannot be used across repositories.
Behavior: after task A acquires the lock, task B requesting it must wait for the lock to be released before acquiring it and continuing.
lock.key
- type: String
Custom lock name. Default is <branch name>-<pipeline name>-<stage index>-<job name>.
lock.expires
- type: Number
- default: 3600 (one hour)
Lock expiration time in seconds; the lock is automatically released when it expires.
lock.wait
- type: Boolean
- default: false
Whether to wait if the lock is occupied.
lock.timeout
- type: Number
- default: 3600 (one hour)
Timeout for waiting for the lock, in seconds.
If lock is true, key, expires, timeout, cancel-in-progress, wait, and cancel-in-wait take their default values.
Example 1: lock as a Boolean
name: Lock
lock: true
script: echo 'job lock'
Example 2: lock as an Object
name: Lock
lock:
  key: key
  expires: 10
  wait: true
script: echo 'job lock'
retry
- type: Number
- default: 0
Number of retries on failure; 0 means no retries.
type
- type: String
Specifies the built-in task to execute.
options
- type: Object
Specifies parameters for the built-in task.
optionsFrom
- type: Array<String> | String
Specifies local or Git repository file paths to load as built-in task parameters. As with imports, if optionsFrom is an array, duplicate parameters are overwritten by later configurations.
script
- type: Array<String> | String
Specifies the script to execute. Arrays are joined with &&. The script's exit code determines the Job's exit code.
Note: The default shell interpreter in the pipeline's base image is sh; other images may use different interpreters.
commands
- type: Array<String> | String
Same as script, but with higher priority. Provided mainly for compatibility with Drone CI syntax.
image
- type: Object | String
Specifies the image used as the current Job's execution environment, for the docker image as env or docker image as plugins scenarios.
image.name: String. Image name, e.g., node:20.
image.dockerUser: String. Docker username for pulling the specified image.
image.dockerPassword: String. Docker password for pulling the specified image.
If image is a string, it is equivalent to specifying image.name.
settings
- type: Object
Specifies parameters required by the plugin task. See Plugin Tasks for details.
settingsFrom
- type: Array<String> | String
Specifies local or Git repository file paths to load as plugin task parameters. As with imports, if settingsFrom is an array, duplicate parameters are overwritten by later configurations.
See Plugin Tasks for details.
args
- type: Array<String>
Specifies arguments passed to the image at runtime, appended to ENTRYPOINT. Only arrays are supported.
- name: npm publish
  image: plugins/npm
  args:
    - ls
Will execute:
docker run plugins/npm ls
Task Exit Codes
- 0: The task succeeded; execution continues.
- 78: The task succeeded but interrupts the current Pipeline. Use exit 78 in a custom script to interrupt the pipeline.
- Any other number: The task failed and interrupts the current Pipeline.
Configuration Reuse
include
With the include parameter, you can import files from the current repository or other repositories into the current file, allowing configuration to be split up for easier reuse and maintenance.
Usage Example
template.yml
# template.yml
main:
  push:
    pipeline_2:
      env:
        ENV_KEY1: xxx
        ENV_KEY3: inner
      services:
        - docker
      stages:
        - name: echo
          script: echo 222
.cnb.yml
# .cnb.yml
include:
  - https://cnb.cool/<your-repo-slug>/-/blob/main/xxx/template.yml
main:
  push:
    pipeline_1:
      stages:
        - name: echo
          script: echo 111
    pipeline_2:
      env:
        ENV_KEY2: xxx
        ENV_KEY3: outer
      stages:
        - name: echo
          script: echo 333
Merged Configuration
main:
  push:
    pipeline_1: # Key does not exist, added during merge
      stages:
        - name: echo
          script: echo 111
    pipeline_2:
      env:
        ENV_KEY1: xxx
        ENV_KEY2: xxx # Key does not exist, added during merge
        ENV_KEY3: outer # Duplicate key, overwritten during merge
      services:
        - docker
      stages: # Arrays are appended during merge
        - name: echo
          script: echo 222
        - name: echo
          script: echo 333
Syntax Explanation
include:
  # 1. Directly pass the configuration file path
  - "https://cnb.cool/<your-repo-slug>/-/blob/main/xxx/template.yml"
  - "template.yml"
  # 2. Pass an object
  # path: configuration file path
  # ignoreError: whether to suppress the error if the file is not found. true: no error; false: throw an error. Default is false.
  - path: "template1.yml"
    ignoreError: true
  # 3. Pass an object
  # config: inline YAML configuration
  - config:
      main:
        push:
          - stages:
              - name: echo
                script: echo "hello world"
Pipeline configurations from different files are merged via object merge:
- Array + Array: elements are appended.
- Object + Object: duplicate keys are overwritten.
- Array + Object: only the array is retained.
- Object + Array: only the array is retained.
Reference file permission control: Configuration File Reference Authentication
Tips
- Local .cnb.yml configurations override include configurations; later include configurations override earlier ones.
- Nested include is supported. Local file paths in include are relative to the project root.
- A maximum of 50 configuration files can be included.
- Files in a submodule cannot be referenced.
- YAML anchors cannot be used across files.
!reference
Cloud Native Build extends YAML with the custom tag !reference, which references variable values by property path and supports cross-file usage together with include.
Tips
- First-layer duplicate variables are overwritten, not merged. Local .cnb.yml variables override include variables; later include variables override earlier ones.
- !reference supports nested references, up to 10 layers deep.
Example
a.yml
.val1:
  echo1: echo hello
.val2:
  friends:
    - one:
        name: tom
      say: !reference [.val1, echo1]
.cnb.yml
include:
  - ./a.yml
.val3:
  size: 100
main:
  push:
    - stages:
        - name: echo hello
          script: !reference [.val2, friends, "0", say]
        - name: echo size
          env:
            SIZE: !reference [".val3", "size"]
          script: echo my size ${SIZE}
Equivalent to:
main:
  push:
    - stages:
        - name: echo hello
          script: echo hello
        - name: echo size
          env:
            SIZE: 100
          script: echo my size ${SIZE}
Advanced Example
Entire pipelines can be referenced as complete configurations:
.common-pipeline:
  - stages:
      - name: echo
        script: echo hello
main:
  push: !reference [.common-pipeline]
test:
  push: !reference [.common-pipeline]
Equivalent to:
main:
  push:
    - stages:
        - name: echo
          script: echo hello
test:
  push:
    - stages:
        - name: echo
          script: echo hello
VSCode Configuration
After installing the VSCode YAML plugin, configure it as follows to avoid errors when writing YAML files that use the custom !reference tag:
setting.json
{
  "yaml.customTags": ["!reference sequence"]
}
Tips
To prevent first-layer variable names from being parsed as branch names, prefix variable names used with !reference with ., e.g., .var.