Syntax Guide
Overview
CNB pipelines define automated build processes through the .cnb.yml configuration file.
Basic Structure
CNB pipelines adopt a hierarchical structure, from outer to inner:
- Trigger Branch: Specifies which branches will trigger the pipeline (e.g., main, dev)
- Trigger Event: Specifies what operations will trigger the pipeline (e.g., push, pull_request)
- Pipeline: A complete build process containing multiple stages
- Stage: A step in the pipeline that can contain one or more jobs
- Job: The smallest execution unit that executes specific commands or plugins
Execution Flow Diagram
Trigger Branch (main)
└─ Trigger Event (push)
├─ Pipeline
│ └─ Stage
│ └─ Job
└─ Pipeline
└─ Stage
└─ Job
Complete Example
Here is a complete example containing all levels:
main: # Trigger Branch: triggers on main branch
push: # Trigger Event: triggers on code push
- name: pipeline-1 # Pipeline name
stages:
- name: stage-1 # Stage name
jobs:
- name: job-1 # Job name
script: echo "in pipeline-1"
- name: pipeline-2 # Pipeline name
stages:
- name: stage-1 # Stage name
jobs:
- name: job-1 # Job name
script: echo "in pipeline-2"
You can also declare multiple pipelines using object form:
main: # Trigger Branch
push: # Trigger Event
pipeline-1: # Pipeline identifier
stages:
- name: stage-1
jobs:
- name: job-1
script: echo "Hello World"
pipeline-2: # Pipeline identifier
stages:
- name: stage-1
jobs:
- name: job-1
script: echo "Hello World"
Trigger Branch
Purpose: Specifies which branches will trigger the pipeline.
Type: String
Supported Patterns:
- Exact Match: main, dev, release
- Wildcard Match (glob syntax): feature/*, hotfix/*
- Fallback Match: Use $ to match all branches not explicitly specified
Example:
# Trigger only on main branch
main:
push:
- stages:
- echo "main branch"
# Match all branches starting with feature
feature/*:
push:
- stages:
- echo "feature branch"
# Fallback: match all other branches
$:
push:
- stages:
- echo "other branches"
More Details: Trigger Mechanism#Trigger Branch
Trigger Event
Purpose: Specifies what operations will trigger pipeline execution.
Common Events:
- push: Triggered when code is pushed to a branch
- pull_request: Triggered when a Pull Request is created or updated
- tag_push: Triggered when a tag is pushed
- branch.delete: Triggered when a branch is deleted
Example:
main:
# Run tests on code push
push:
- stages:
- npm test
# Run code checks on PR
pull_request:
- stages:
- npm run lint
More Details: Trigger Mechanism#Trigger Event
Pipeline
A Pipeline represents a complete build process and contains one or more Stages; Stages execute sequentially.
Here is a complete example containing all available configuration options (choose as needed in actual use):
name: Pipeline Name
docker:
image: node
build: dev/Dockerfile
volumes:
- /root/.npm:copy-on-write
git:
enable: true
submodules: true
lfs: true
services:
- docker
env:
TEST_KEY: TEST_VALUE
imports:
- https://cnb.cool/<your-repo-slug>/-/blob/main/xxx/envs.yml
- ./env.txt
label:
type: MASTER
class: MAIN
stages:
- name: stage 1
script: echo "stage 1"
- name: stage 2
script: echo "stage 2"
- name: stage 3
script: echo "stage 3"
failStages:
- name: fail stage 1
script: echo "fail stage 1"
- name: fail stage 2
script: echo "fail stage 2"
endStages:
- name: end stage 1
script: echo "end stage 1"
- name: end stage 2
script: echo "end stage 2"
ifModify:
- a.txt
- "src/**/*"
retry: 3
allowFailure: false
name
- type: String
Specifies the pipeline name, defaults to pipeline. When there are multiple parallel pipelines, the default pipeline names are pipeline, pipeline-1, pipeline-2, and so on. You can define name to specify a pipeline name to distinguish different pipelines.
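For example, a minimal sketch (the names lint and test are illustrative) that names two parallel pipelines so their records are easy to tell apart:
main:
  push:
    - name: lint
      stages:
        - name: run lint
          script: npm run lint
    - name: test
      stages:
        - name: run test
          script: npm test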
runner
- type: Object
Specifies build node related parameters.
- tags: Optional, specifies which tags the build node should have
- cpus: Optional, specifies the number of CPU cores required for the build
tags
- type: String|Array<String>
- default: cnb:arch:default
Specifies which tags the build node should have. See Build Node for details.
Example:
main:
push:
- runner:
tags: cnb:arch:amd64
stages:
- name: uname
script: uname -a
cpus
- type: Number
Specifies the maximum number of CPU cores required for the build (memory = CPU cores * 2 GB), where CPU and memory do not exceed the actual size of the runner machine.
If not configured, the maximum available CPU cores are specified by the allocated runner machine configuration.
Example:
# cpus = 1, memory = 2G
main:
push:
- runner:
cpus: 1
stages:
- name: echo
script: echo "hello world"
docker
- type: Object
Specifies docker related parameters. See Build Environment for details.
- image: The environment image for the current Pipeline. All tasks under the current Pipeline will execute in this image environment.
- build: Specifies a Dockerfile to build a temporary image to use as the image value.
- devcontainer: Specifies the path to the devcontainer.json file, which will be used as the pipeline container image.
- volumes: Declares data volumes for caching scenarios.
Note: When image, build, and devcontainer are specified simultaneously, the priority is build > devcontainer > image.
image
- type: Object|String
Specifies the environment image for the current Pipeline. All tasks under the current Pipeline will execute in this image environment.
This property and its sub-properties support environment variable references. See Variable Substitution.
- image.name: String. Image name, e.g., node:20.
- image.dockerUser: String. Specifies the Docker username for pulling the specified image.
- image.dockerPassword: String. Specifies the Docker password for pulling the specified image.
If image is specified as a string, it is equivalent to specifying image.name.
If using an image from the Cloud Native Build Docker registry and image.dockerPassword is not set, this parameter will be set to the value of the environment variable CNB_TOKEN.
Example 1, using a public image:
main:
push:
- docker:
# Use node:20 image from Docker official registry as build container
image: node:20
stages:
- name: show version
script: node -v
Example 2, using a CNB registry private image:
main:
push:
- docker:
# Use private image as build container, requires Docker username and password
image:
name: docker.cnb.cool/images/pipeline-env:1.0
# Use environment variables injected by default during CI build
dockerUser: $CNB_TOKEN_USER_NAME
dockerPassword: $CNB_TOKEN
stages:
- name: echo
script: echo "hello world"
Example 3, using a Docker official registry private image:
main:
push:
- imports: https://cnb.cool/<your-repo-slug>/-/blob/main/xxx/docker.yml
docker:
# Use private image as build container, requires Docker username and password
image:
name: images/pipeline-env:1.0
# Environment variables imported from docker.yml
dockerUser: $DOCKER_USER
dockerPassword: $DOCKER_PASSWORD
stages:
- name: echo
script: echo "hello world"
docker.yml
DOCKER_USER: user
DOCKER_PASSWORD: password
build
- type: Object|String
Specifies a Dockerfile to build a temporary image to use as the image value.
This property and its sub-properties support environment variable references. See Variable Substitution.
For a complete example of using build to declare the build environment, see docker-build-with-by.
Here are the descriptions of parameters under build:
- build.dockerfile: String
  Dockerfile path. This property supports environment variable references. See Variable Substitution.
- build.target: String
  Corresponds to the --target parameter in docker build, allowing selective building of specific stages in the Dockerfile instead of building the entire Dockerfile.
- build.by: Array<String>|String
  Declares the list of files that the build process depends on for caching. Note: files not in the by list, except for the Dockerfile, are treated as non-existent during the image build process. When String type, multiple files can be separated by commas.
- build.versionBy: Array<String>|String
  Used for version control. When the content of the specified files changes, it is considered a new version. The specific calculation logic is: sha1(dockerfile + versionBy + buildArgs). Folder paths can be passed directly, in which case the folder's git tree id is used for version calculation. When String type, multiple files can be separated by commas.
- build.buildArgs: Object
  Inserts additional build arguments during build (--build-arg $key=$value). When a value is null, only the key is added (--build-arg $key).
- build.ignoreBuildArgsInVersion: Boolean
  Whether to ignore buildArgs in version calculation. See versionBy.
- build.sync: String
  Whether to wait for docker push to succeed before continuing. Defaults to false.
If build is specified as a string, it is equivalent to specifying build.dockerfile.
Dockerfile Usage:
main:
push:
- docker:
# `build` as string is equivalent to specifying `build.dockerfile`
build: ./image/Dockerfile
stages:
- stage1
- stage2
- stage3
main:
push:
- docker:
# Specifying `build` type as `Object` allows more control over the image build process
build:
dockerfile: ./image/Dockerfile
# Only build builder, not the entire Dockerfile
target: builder
stages:
- stage1
- stage2
- stage3
Dockerfile versionBy Usage:
Example: Cache pnpm in the environment image to speed up subsequent pnpm i processes
FROM node:22
RUN npm config set registry https://mirrors.cloud.tencent.com/npm/ \
&& npm i -g pnpm
WORKDIR /data/cache
COPY package.json package-lock.json ./
RUN pnpm i
main:
push:
# Specify build environment via Dockerfile
- docker:
build:
dockerfile: ./Dockerfile
by:
- package.json
- package-lock.json
versionBy:
- package-lock.json
stages:
- name: cp node_modules
# Copy node_modules from container to pipeline working directory
script: cp -r /data/cache/node_modules ./
- name: check node_modules
script: |
if [ -d "node_modules" ]; then
cd node_modules
ls
else
echo "node_modules directory does not exist."
fi
devcontainer
- type: String
Specifies the path to the devcontainer.json file, which will be used as the pipeline container image.
Only supports paths relative to the current repository, such as: .devcontainer/devcontainer.json
For the specific devcontainer.json specification, see: devcontainer.json
Due to CNB platform characteristics, only limited support is currently provided. Currently supported properties:
- image
- build.dockerfile
- build.context
- build.args
- build.target
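A minimal usage sketch, assuming the repository keeps its devcontainer.json at the conventional .devcontainer/ path:
main:
  push:
    - docker:
        devcontainer: .devcontainer/devcontainer.json
      stages:
        - name: show os
          script: cat /etc/os-release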
volumes
- type: Array<String>|String
Declares data volumes. Multiple data volumes can be passed in through an array or separated by ,. Environment variables can be referenced. Supported formats:
- <group>:<path>:<type>
- <path>:<type>
- <path>
Meaning of each item:
- group: Optional, data volume group; different groups are isolated from each other
- path: Required, mount path for the data volume; supports absolute paths (starting with /) or relative paths (starting with ./, relative to the workspace)
- type: Optional, data volume type, default value is copy-on-write. Supported types:
  - read-write or rw: Read-write; concurrent write conflicts must be handled by yourself, suitable for serial build scenarios
  - read-only or ro: Read-only; write operations throw exceptions
  - copy-on-write or cow: Read-write; changes (additions, modifications, deletions) are merged after the pipeline succeeds, suitable for concurrent build scenarios
  - copy-on-write-read-only: Read-only; changes (additions, deletions, modifications) are discarded after the pipeline ends
  - data: Creates a temporary data volume that is automatically cleaned up when the pipeline ends
copy-on-write
Used for caching scenarios, supports concurrency
copy-on-write technology allows the system to share the same data copy before needing to modify data, thus achieving efficient cache replication. In concurrent environments, this approach avoids cache read-write conflicts because a private copy of the data is only created when data actually needs to be modified. This way, only write operations cause data copying, while read operations can proceed safely in parallel without worrying about data consistency issues. This mechanism significantly improves performance, especially in cache scenarios with more reads than writes.
data
Used for sharing data, shares specified directories in containers for use in other containers.
Data volumes are created and then mounted into each container. Unlike directly mounting a directory from the build node into the container: when the specified directory already exists in the container, the container's content is first automatically copied to the data volume, rather than the data volume content directly overwriting the directory in the container.
volumes Examples
Example 1: Mount directories from build node to container to achieve local caching effect
main:
push:
- docker:
image: node:20
# Declare data volumes
volumes:
- /data/config:read-only
- /data/mydata:read-write
# Use cache and update simultaneously
- /root/.npm
# Use main cache and update simultaneously
- main:/root/.gradle:copy-on-write
stages:
- stage1
- stage2
- stage3
pull_request:
- docker:
image: node:20
# Declare data volumes
volumes:
- /data/config:read-only
- /data/mydata:read-write
# Use copy-on-write cache
- /root/.npm
- node_modules
# PR uses main cache but doesn't update
- main:/root/.gradle:copy-on-write-read-only
stages:
- stage1
- stage2
- stage3
Example 2: Share files packaged in a container for use in other containers
# .cnb.yml
main:
push:
- docker:
image: go-app-cli # Assume there's a go application at /go-app/cli path in the image
# Declare data volumes
volumes:
# This path exists in go-app-cli image, so when executing environment image, this path content will be copied to temporary data volume and can be shared for use in other task containers
- /go-app
stages:
- name: show /go-app-cli in job container
image: alpine
script: ls /go-app
git
- type: Object
Provides Git repository related configuration.
git.enable
- type: Boolean
- default: true
Specifies whether to pull code.
For branch.delete event, default value is false. For other events, default value is true.
git.submodules
- type: Object|Boolean
- default: enable: true, remote: false
Specifies whether to pull submodules.
When the value is a Boolean, it is equivalent to specifying git.submodules.enable as that value, with git.submodules.remote taking its default value false.
git.submodules.enable
- type: Boolean
- default: true
Specifies whether to pull submodules.
git.submodules.remote
- type: Boolean
- default: false
Whether to add the --remote parameter when executing git submodule update, to pull the latest submodule code each time.
Basic Usage:
main:
push:
- git:
enable: true
submodules: true
stages:
- name: echo
script: echo "hello world"
- git:
enable: true
submodules:
enable: true
remote: true
stages:
- name: echo
script: echo "hello world"
git.lfs
- type: Object|Boolean
- default: true
Specifies whether to pull LFS files.
Supports Object form to specify specific parameters. When fields are omitted, default values are:
{
"enable": true
}
Basic Usage:
main:
push:
- git:
enable: true
lfs: true
stages:
- name: echo
script: echo "hello world"
- git:
enable: true
lfs:
enable: true
stages:
- name: echo
script: echo "hello world"
git.lfs.enable
Specifies whether to pull LFS files.
services
- type: Array<String>|Array<Object>
Used to declare services needed during build, format: name:[version], version is optional.
Currently supported services:
- docker
- vscode
service:docker
Used to enable dind service.
Declare this service when you need operations like docker build or docker login during the build process; the docker daemon and docker cli will be automatically injected into the environment.
Example:
main:
push:
- services:
- docker
docker:
image: alpine
stages:
- name: docker info
script:
- docker info
- docker ps
This service will automatically docker login to the CNB Docker registry (docker.cnb.cool), so in subsequent tasks you can directly docker push to the current repository's Docker registry.
Example:
main:
push:
- services:
- docker
stages:
- name: build and push
script: |
# Dockerfile exists in root directory
docker build -t ${CNB_DOCKER_REGISTRY}/${CNB_REPO_SLUG_LOWERCASE}:latest .
docker push ${CNB_DOCKER_REGISTRY}/${CNB_REPO_SLUG_LOWERCASE}:latest
If you need to use buildx to build images for multiple architectures/platforms, you can enable the rootlessBuildkitd feature. Declare it in services:
main:
push:
- docker:
image: golang:1.24
services:
- name: docker
options:
rootlessBuildkitd:
enabled: true
env:
IMAGE_TAG: ${CNB_DOCKER_REGISTRY}/${CNB_REPO_SLUG_LOWERCASE}:latest
stages:
- name: go build
script: ./build.sh
- name: docker login
script: docker login -u ${CNB_TOKEN_USER_NAME} -p "${CNB_TOKEN}" ${CNB_DOCKER_REGISTRY}
- name: docker build and push
script: docker buildx build -t ${IMAGE_TAG} --platform linux/amd64,linux/amd64/v2,linux/amd64/v3,linux/arm64,linux/riscv64,linux/ppc64le,linux/s390x,linux/386,linux/mips64le,linux/mips64,linux/loong64,linux/arm/v7,linux/arm/v6 --push .
service:vscode
Declare when remote development is needed.
Example:
$:
vscode:
- services:
- vscode
- docker
docker:
image: alpine
stages:
- name: uname
script: uname -a
If you need to enable preview-only mode, please refer to the documentation.
If you need to specify the offline keep-alive duration for the development environment (the development environment will be recycled if it exceeds the specified duration), you can use the keepAliveTimeout parameter:
# .cnb.yml
$:
vscode:
- docker:
build: .ide/Dockerfile
services:
- docker
- name: vscode
options:
# Keep-alive duration, in milliseconds. If not set, the default is 10 minutes without a heartbeat (no http/ssh connection detected in the development environment) before shutting down the development environment.
keepAliveTimeout: 10m
# Tasks to be executed after the development environment starts
stages:
- name: ls
script: ls -al
The keepAliveTimeout parameter is defined as follows:
- Type: Number|String (unit defaults to milliseconds)
- Default Value: 600000 (ms, 10 minutes)
- Note: The offline keep-alive time for the development environment. If no HTTP/SSH connections are detected within the development environment, it will automatically shut down after exceeding the set duration.
- Example: 3600000 represents 1 hour.
You can directly write a number to indicate the duration in milliseconds, or use a string format with units, such as 10m for 10 minutes. The following units are supported:
- ms: milliseconds (default)
- s: seconds
- m: minutes
- h: hours
env
- type: Object
Declares an object as environment variables: property name is the environment variable name, property value is the environment variable value.
Used during task execution, effective for all non-plugin tasks within the current Pipeline.
Example:
main:
push:
- services:
- docker
env:
some_key: some_value
stages:
- name: some job
script: echo "$some_key"
imports
- type: Array<String>|String
Specifies CNB repository file path (file relative path or page address) as environment variable source, same effect as env.
File content will be parsed as an object, property name is the environment variable name, property value is the environment variable value.
Used during task execution, effective for all non-plugin tasks within the current Pipeline.
Local relative paths such as ./env.yml are resolved to file page addresses before loading; that is, a file that exists locally but not in the remote repository will not be referenced.
Cloud Native Build now supports Secret Repository, which offers higher security and supports file reference auditing. Credentials such as npm or docker usernames and passwords should generally be stored in a secret repository.
Same Key Priority
- When imports is configured as an array, if duplicate parameters are encountered, later configurations override earlier ones.
- If duplicated with parameters in env, parameters in env override those in imports files.
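A small sketch of this override order (the file name env.yml and the variable SOME_KEY are illustrative; env.yml is assumed to declare SOME_KEY: from-file):
main:
  push:
    - imports:
        - ./env.yml
      env:
        SOME_KEY: from-env # overrides the value imported from env.yml
      stages:
        - name: echo
          script: echo $SOME_KEY # prints "from-env"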
Variable Substitution
imports file paths can reference environment variables. When configured as an array, later file paths can reference variables declared in files listed earlier.
# env1.yml
FILE: "https://cnb.cool/<your-repo-slug>/-/blob/main/xxx/env2.yml"
# env2.yml
TEST_TOKEN: some token
main:
push:
- imports:
- ./env1.yml
# FILE is an environment variable declared in env1.yml
- $FILE
stages:
- name: echo
# TEST_TOKEN is an environment variable declared in env2.yml
script: echo $TEST_TOKEN
Reference Authentication
Referenced files can declare accessible scope, refer to Configuration File Reference Authentication.
Example:
team_name/project_name/**, matches all repositories under a project:
key: value
allow_slugs:
- team_name/project_name/**
Allow to be referenced by all repositories:
key: value
allow_slugs:
- "**"
File Parsing Rules
Supports parsing multiple file formats and converting them to environment variables. File formats include:
- YAML files
Parsed as objects in YAML format, supported file formats: .yaml, .yml.
DOCKER_USER: "username"
DOCKER_TOKEN: "token"
DOCKER_REGISTRY: "https://xxx/xxx"
- JSON files
Parsed as objects in JSON format, supported file format: .json.
{
"DOCKER_USER": "username",
"DOCKER_TOKEN": "token",
"DOCKER_REGISTRY": "https://xxx/xxx"
}
- Certificate files
Certificate files are parsed as objects with the filename (. replaced with _) as the property name and file content as the property value.
Supported file formats include .crt, .cer, .key, .pem, .pub, .pk, .ppk.
For example:
-----BEGIN CERTIFICATE-----
MIIE...
-----END CERTIFICATE-----
main:
push:
- imports: https://cnb.cool/<your-repo-slug>/-/blob/main/server.crt
stages:
- echo "$server_crt" > server.crt
- cat server.crt
- Other text files
Except for the above file formats, other text files are parsed as objects in key=value format.
Example:
DB_HOST=localhost
DB_PORT=5432
Deep Properties
Most scenarios involve simple single-level property configuration files, such as:
// env.json
{
"token": "private token",
"password": "private password"
}
To handle complex configuration files and scenarios, deep properties (the first level cannot be an array) are flattened into a single-level object, with the following rules:
- Property names are preserved, property values are converted to strings
- If a property value is an object (including arrays), it is recursively flattened, with property paths connected by _
{
"key1": [
"value1",
"value2"
],
"key2": {
"subkey1": [
"value3",
"value4"
],
"subkey2": "value5"
},
"key3": [
"value6",
{
"subsubkey1": "value7"
}
],
"key4": "value8"
will be flattened into:
{
// original property value converted to a string
"key1": "value1,value2",
// if a property value is an object, it is additionally flattened recursively into extra properties
"key1_0": "value1",
"key1_1": "value2",
"key2": "[object Object]",
"key2_subkey1": "value3,value4",
"key2_subkey1_0": "value3",
"key2_subkey1_1": "value4",
"key2_subkey2": "value5",
"key3": "value6,[object Object]",
"key3_0": "value6",
"key3_1": "[object Object]",
"key3_1_subsubkey1": "value7",
"key4": "value8"
}
main:
push:
- imports:
- ./env.json
stages:
- name: echo
script: echo $key3_1_subsubkey1
label
- type: Object
Specifies labels for the pipeline. The value of each label can be a string or an array of strings. These labels can be used for subsequent pipeline record filtering and other functions.
Here's an example workflow: Merge to main branch deploys to preview environment, tagging deploys to production environment.
main:
push:
- label:
# Regular pipeline for Master branch
type:
- MASTER
- PREVIEW
stages:
- name: install
script: npm install
- name: CCK-lint
script: npm run lint
- name: BVT-build
script: npm run build
- name: UT-test
script: npm run test
- name: pre release
script: ./pre-release.sh
$:
tag_push:
- label:
# Regular pipeline for product release branch
type: RELEASE
stages:
- name: install
script: npm install
- name: build
script: npm run build
- name: DELIVERY-release
script: ./release.sh
stages
- type: Array<Stage|Job>
Defines a set of stage tasks, each stage runs serially.
failStages
- type: Array<Stage|Job>
Defines a set of failure stage tasks. When the normal process fails, these stage tasks will be executed sequentially.
endStages
- type: Array<Stage|Job>
Defines a set of tasks to execute at the end of the pipeline. When the pipeline stages/failStages finish executing, before the pipeline ends, these stage tasks will be executed sequentially.
Once the pipeline's prepare stage succeeds, endStages execute regardless of whether stages succeed, and the outcome of endStages does not affect the pipeline status (i.e., the pipeline can still be successful even if endStages fail).
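A minimal sketch: the cleanup stage below runs whether or not the build stage succeeds, and its own failure would not change the pipeline status:
main:
  push:
    - stages:
        - name: build
          script: npm run build
      endStages:
        - name: cleanup
          # runs after stages or failStages, before the pipeline ends
          script: echo "clean up"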
ifNewBranch
- type: Boolean
- default: false
When true, this Pipeline will only execute when the current branch is a new branch (i.e., CNB_IS_NEW_BRANCH is true).
When both ifNewBranch and ifModify exist, this Pipeline will execute if either condition is met.
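A minimal sketch that runs a pipeline only on the first push of a feature branch (the script is illustrative):
feature/*:
  push:
    - ifNewBranch: true
      stages:
        - name: first push only
          script: echo "new branch created"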
ifModify
- type: Array<String>|String
Specifies that this Pipeline will only execute when corresponding files change. It is a glob expression string or string array.
Supported Events
- For push events on non-new branches, compares before and after to count changed files.
- For commit.add events, counts changed files in the new commit.
- For events triggered by cnb:apply in push and commit.add event pipelines on non-new branches, changed file counting rules are the same as above.
- For events triggered by PR, counts changed files in the PR.
- For events triggered by cnb:apply from PR-triggered events, counts changed files in the PR.
Because file changes can be very numerous, changed file counting is limited to a maximum of 300 files.
Outside the above situations, it's not suitable to count file changes, and ifModify checks will be ignored.
Examples
- Example 1:
When the modified file list contains a.js or b.js, this Pipeline will execute.
ifModify:
- a.js
- b.js
- Example 2:
When the modified file list contains files with js extension, this Pipeline will execute. Where **/*.js matches all js extension files in all subdirectories, *.js matches all js extension files in the root directory.
ifModify:
- "**/*.js"
- "*.js"
- Example 3:
Reverse matching, exclude directory legacy and exclude all Markdown files, trigger when other files change
ifModify:
- "**"
- "!(legacy/**)"
- "!(**/*.md)"
- "!*.md"
- Example 4:
Reverse matching, trigger when there are changes in src directory except src/legacy directory
ifModify:
- "src/**"
- "!(src/legacy/**)"
breakIfModify
- type: Boolean
- default: false
Before Job execution, if the source branch has been updated, terminate the build.
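A minimal sketch (the deploy script is illustrative): if new commits land on the branch while this pipeline runs, the build terminates instead of deploying stale code:
main:
  push:
    - breakIfModify: true
      stages:
        - name: deploy
          script: ./deploy.sh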
retry
- type: Number
- default: 0
Number of retry attempts on failure, 0 means no retry.
Retry intervals are 1s, 2s, 4s, 8s...
allowFailure
- type: Boolean
- default: false
Whether to allow the current pipeline to fail.
When this parameter is set to true, the pipeline's failure status will not be reported to CNB.
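A minimal sketch of an experimental pipeline whose failure will not be reported as a failed status (the script is illustrative):
main:
  push:
    - name: experimental
      allowFailure: true
      stages:
        - name: canary test
          script: ./canary-test.sh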
lock
- type: Object|Boolean
Sets a lock for the pipeline. The lock is automatically released after pipeline execution; locks cannot be used across repositories.
Behavior: After pipeline A acquires the lock, when pipeline B requests the lock, it can terminate A or wait for A to finish executing and release the lock before acquiring the lock and continuing to execute tasks.
- key: String
  Custom lock name, defaults to branch-name-pipeline-name, i.e., the lock scope defaults to the current pipeline.
- expires: Number, default 3600 (one hour)
  Lock expiration time in seconds; the lock is automatically released after expiration.
- timeout: Number, default 3600 (one hour)
  Timeout duration in seconds, used when waiting for the lock.
- cancel-in-progress: Boolean, default false
  Whether to terminate pipelines occupying or waiting for the lock, allowing the current pipeline to acquire the lock and execute.
- wait: Boolean, default false
  Whether to wait when the lock is occupied (waiting does not consume pipeline resources or time); if false, an error is reported directly. Cannot be used simultaneously with cancel-in-progress.
- cancel-in-wait: Boolean, default false
  Whether to terminate pipelines waiting for the lock, allowing the current pipeline to join the lock waiting queue. Must be used with wait.
If lock is true, then key, expires, timeout, cancel-in-progress, wait, cancel-in-wait use their respective default values.
Example 1: lock is Boolean format
main:
push:
- lock: true
stages:
- name: stage1
script: echo "stage1"
Example 2: lock is Object format
main:
push:
- lock:
key: key
expires: 600 # 10 minutes
wait: true
timeout: 60 # wait at most 1 minute
stages:
- name: stage1
script: echo "stage1"
Example 3: Stop the previous running pipeline under pull_request
main:
pull_request:
- lock:
key: pr
cancel-in-progress: true
stages:
- name: echo hello
script: echo "stage1"
Stage
- type: Job|Object<name: Job>
Stage represents a stage in the pipeline, which can contain one or more Jobs. See Job Introduction for details.
Structure Description
Single Job
When Stage has only one Job, the Stage level can be omitted and Job can be declared directly.
stages:
- name: stage1
jobs:
- name: job A
script: echo hello
Can be simplified to:
- stages:
- name: job A
script: echo helloWhen Job is a string, it can be treated as a script task, where both name and script take that string, and can be further simplified to:
- stages:
- echo hello
Serial Jobs
When jobs is an array, this group of Jobs will execute serially in order.
# Serial execution
stages:
- name: install
jobs:
- name: job1
script: echo "job1"
- name: job2
script: echo "job2"
Parallel Jobs
When jobs is an object, this group of Jobs will execute in parallel.
# Parallel execution
stages:
- name: install
jobs:
job1:
script: echo "job1"
job2:
script: echo "job2"
Multiple Jobs can be flexibly organized in serial and parallel. Example of serial first, then parallel:
main:
push:
- stages:
- name: serial first
script: echo "serial"
- name: parallel
jobs:
parallel job 1:
script: echo "1"
parallel job 2:
script: echo "2"
- name: serial next
script: echo "serial next"
name
- type: String
Stage name.
ifNewBranch
- type: Boolean
- default: false
When true, this Stage will only execute when the current branch is a new branch (i.e., CNB_IS_NEW_BRANCH is true).
When ifNewBranch, ifModify, and if exist simultaneously, this Stage will execute if any one condition is met.
ifModify
- type: Array<String>|String
Specifies that this Stage will only execute when corresponding files change. Uses glob matching expressions.
if
- type: Array<String>|String
Specifies a shell script, when the script exit code is 0, this Stage will execute.
Example 1: Check variable value
main:
push:
- env:
IS_NEW: true
stages:
- name: is new
if: |
[ "$IS_NEW" = "true" ]
script: echo is new
- name: is not new
if: |
[ "$IS_NEW" != "true" ]
script: echo not new
Example 2: Check task output
main:
push:
- stages:
- name: make info
script: echo 'haha'
exports:
info: RESULT
- name: run if RESULT is haha
if: |
[ "$RESULT" = "haha" ]
script: echo $RESULT
env
- type: Object
Declares environment variables, only effective for the current Stage.
Stage env has higher priority than Pipeline env.
imports
- type: Array<String>|String
Imports environment variables from files, only effective for the current Stage. Usage same as Pipeline imports.
retry
- type: Number
- default: 0
Number of retry attempts on failure, 0 means no retry.
Retry intervals are 1s, 2s, 4s, 8s...
lock
- type: Boolean|Object
Sets a lock for Stage, the lock is automatically released after Stage execution.
Behavior: After task A acquires the lock, task B must wait for the lock to be released before continuing execution.
Parameter description:
- lock.key: String - Custom lock name, defaults to branch-name-pipeline-name-stage-index
- lock.expires: Number - Lock expiration time (seconds), default 3600
- lock.wait: Boolean - Whether to wait when lock is occupied, default false
- lock.timeout: Number - Timeout for waiting for lock (seconds), default 3600
If lock is true, all parameters use default values.
Example 1: lock is Boolean format
main:
push:
- stages:
- name: stage1
lock: true
jobs:
- name: job1
script: echo "job1"
Example 2: lock is Object format
main:
push:
- stages:
- name: stage1
lock:
key: key
expires: 600 # 10 minutes
wait: true
timeout: 60 # wait at most 1 minute
jobs:
- name: job1
script: echo "job1"
image
- type: Object|String
Specifies the environment image for the current Stage. All tasks under the current Stage will execute in this image environment by default.
This property supports environment variable references, see Variable Substitution.
- image.name: String. Image name, e.g., node:20.
- image.dockerUser: String. Specifies the Docker username for pulling the specified image.
- image.dockerPassword: String. Specifies the Docker password for pulling the specified image.
If image is specified as a string, it is equivalent to specifying image.name.
If using an image from the Cloud Native Build Docker registry and image.dockerPassword is not set, this parameter will be set to the value of the environment variable CNB_TOKEN.
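A minimal sketch: the stage-level image below overrides the pipeline-level image, so both jobs in that stage run in node:20:
main:
  push:
    - docker:
        image: alpine
      stages:
        - name: node stage
          image: node:20
          jobs:
            - name: version
              script: node -v
            - name: install
              script: npm install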
jobs
- type: Array<Job>|Object<name, Job>
Defines a set of tasks. Array form executes serially, object form executes in parallel.
Job
Job is the smallest task execution unit; it can be a script task, a plugin task, or a built-in task.
Task Types
Script Task
Executes Shell script commands.
Parameter description:
- script: Array<String>|String - Shell script to execute; arrays will be joined with &&
- image: String - Optional, specifies the script runtime environment
Example:
- name: install
image: node:20
script: npm install
Simplified syntax: Script tasks can be simplified to strings, where both script and name take that string:
- echo hello
Equivalent to:
- name: echo hello
script: echo hello
Plugin Task
Plugins are Docker images, also called image tasks.
Features:
- Flexible execution environment
- Easy to share and reuse
- Can be used across CI platforms
Working principle: Configures plugin behavior by passing environment variables to ENTRYPOINT.
Note: Custom environment variables set through imports and env are not passed to plugins, but variable substitution can be used in settings and args. CNB system environment variables are passed to plugins.
Parameter description:
- name: String - Task name
- image: Object|String - Plugin image, see job.image for details
- settings: Object - Plugin parameters, filled in according to plugin documentation; supports $VAR or ${VAR} for environment variable references
- settingsFrom: Array<String>|String - Specifies local or Git repository file paths, loaded as plugin task parameters. See job.settingsFrom for details
Examples
with imports:
- name: npm publish
image: plugins/npm
imports: https://cnb.cool/<your-repo-slug>/-/blob/main/xxx/npm.json
settings:
username: $NPM_USER
password: $NPM_PASS
email: $NPM_EMAIL
registry: https://mirrors.xxx.com/npm/
folder: ./
{
"username": "xxx",
"password": "xxx",
"email": "xxx@emai.com",
"allow_slugs": ["cnb/**/**"],
"allow_images": ["plugins/npm"]
}
with settingsFrom:
- name: npm publish
image: plugins/npm
settingsFrom: https://cnb.cool/<your-repo-slug>/-/blob/main/xxx/npm-settings.json
settings:
# username: $NPM_USER
# password: $NPM_PASS
# email: $NPM_EMAIL
registry: https://mirrors.xxx.com/npm/
folder: ./
{
"username": "xxx",
"password": "xxx",
"email": "xxx@emai.com",
"allow_slugs": ["cnb/cnb"],
"allow_images": ["plugins/npm"]
}
Built-in Tasks
Uses built-in functionality provided by CNB.
Parameter description:
- type: String - Specifies the built-in task type to execute
- options: Object - Parameters for the built-in task
- optionsFrom: Array<String>|String - Loads parameters from file
Fields in options have higher priority than optionsFrom.
For configuration file reference permission control, see Configuration File Reference Authentication.
Example:
name: install
type: INTERNAL_JOB_NAME
optionsFrom: ./options.json
options:
key1: value1
key2: value2
// ./options.json
{
"key1": "value1",
"key2": "value2"
}
name
- type: String
Task name.
ifModify
- type: Array<String>|String
Specifies that this task will only execute when corresponding files change. Usage same as Stage ifModify.
ifNewBranch
- type: Boolean
- default: false
When true, this task will only execute when the current branch is a new branch (i.e., CNB_IS_NEW_BRANCH is true). Usage same as Stage ifNewBranch.
if
- type: Array<String>|String
Specifies a shell script, when the script exit code is 0, this task will execute. Usage same as Stage if.
breakIfModify
- type: Boolean
- default: false
Before Job execution, if the source branch has been updated, terminate the current task.
skipIfModify
- type: Boolean
- default: false
Before Job execution, if the source branch has been updated, skip the current task.
env
- type: Object
Declares environment variables, only effective for the current Job.
Job env has the highest priority.
imports
- type: Array<String>|String
Imports environment variables from files, only effective for the current Job. Usage same as Stage imports.
exports
- type: Object
Exports task execution results as environment variables for use by subsequent tasks. Lifecycle is the current Pipeline.
For details see Environment Variables
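A minimal sketch following the same pattern as the Stage if example above: the first job exports its output as $VERSION for the next job to read (the names are illustrative):
main:
  push:
    - stages:
        - name: make version
          script: echo "1.2.3"
          exports:
            info: VERSION
        - name: use version
          script: echo "building $VERSION"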
timeout
- type: Number|String
Sets timeout and no-output timeout for a single task. Default timeout for a single task is 1 hour, maximum cannot exceed 12 hours, no-output timeout is 10 minutes.
Effective for script tasks and plugin tasks.
Supported time units:
- ms: milliseconds (default)
- s: seconds
- m: minutes
- h: hours
Example:
name: timeout job
script: sleep 1d
timeout: 100s # Task will timeout and exit after 100 seconds
See Timeout Strategy for details.
allowFailure
- type: Boolean|String
- default: false
Whether to allow the current task to fail.
When this parameter is set to true, task failure will not affect subsequent process execution and final results.
When the value is a String, it can reference environment variables.
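A minimal sketch reading the flag from an environment variable (ALLOW_FLAKY and the script are illustrative):
main:
  push:
    - env:
        ALLOW_FLAKY: true
      stages:
        - name: flaky check
          allowFailure: $ALLOW_FLAKY
          script: ./flaky-check.sh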
lock
- type: Object|Boolean
Sets a lock for Job, the lock is automatically released after Job execution.
Behavior: After task A acquires the lock, task B must wait for the lock to be released before continuing execution.
Parameter description:
- lock.key: String - Custom lock name, defaults to branch-name-pipeline-name-stage-index-job-name
- lock.expires: Number - Lock expiration time (seconds), default 3600
- lock.wait: Boolean - Whether to wait when lock is occupied, default false
- lock.timeout: Number - Timeout for waiting for lock (seconds), default 3600
If lock is true, all parameters use default values.
Example 1: Simplified syntax
name: lock
lock: true
script: echo 'job lock'
Example 2: Detailed configuration
name: lock
lock:
key: key
expires: 10
wait: true
script: echo 'job lock'
retry
- type: Number
- default: 0
Number of retry attempts on failure, 0 means no retry.
Retry intervals are 1s, 2s, 4s, 8s...
type
- type: String
Specifies the built-in task type to execute.
options
- type: Object
Parameter configuration for built-in tasks.
optionsFrom
- type: Array<String>|String
Loads built-in task parameters from file. Similar to imports parameter, when configured as array, later configurations override earlier ones.
Fields in options have higher priority than optionsFrom.
script
- type: Array<String>|String
Shell script to execute. When array, will automatically concatenate with &&. The script's exit code is used as the task's exit code.
Note: Pipelines use sh as the command-line interpreter by default; different images may use different interpreters.
commands
- type: Array<String>|String
Same function as script parameter, higher priority than script. Mainly for compatibility with Drone CI syntax.
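A minimal sketch: the task below is equivalent to the same task written with script:
- name: build
  commands:
    - npm install
    - npm run build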
image
- type: Object|String
Specifies the runtime environment image for the task. Can be used for script tasks or plugin tasks.
This property supports environment variable references, see Variable Substitution.
- image.name: String. Image name, e.g., node:20.
- image.dockerUser: String. Specifies the Docker username for pulling the specified image.
- image.dockerPassword: String. Specifies the Docker password for pulling the specified image.
If image is specified as a string, it is equivalent to specifying image.name.
If using an image from the Cloud Native Build Docker registry and image.dockerPassword is not set, this parameter will be set to the value of the environment variable CNB_TOKEN.
settings
- type: Object
Parameter configuration for plugin tasks. Fill according to plugin documentation, supports referencing environment variables via $VAR or ${VAR}.
settingsFrom
- type: Array<String>|String
Loads plugin task parameters from file. Similar to imports parameter.
Priority:
- When configured as an array, later configurations override earlier ones
- Fields in settings have higher priority than settingsFrom
For configuration file reference permission control, see Configuration File Reference Authentication.
args
- type: Array<String>
Parameters passed when executing plugin image, content will be appended to ENTRYPOINT, only supports array.
Example:
- name: npm publish
image: plugins/npm
args:
- ls
Equivalent to executing:
docker run plugins/npm ls
Task Exit Code Description
After task execution completes, an exit code is returned. Different exit codes have different meanings:
- 0: Task succeeded, continue executing subsequent tasks
- 78: Task succeeded, but interrupt current pipeline execution (you can actively execute exit 78 in a script to interrupt the pipeline)
- Other numbers: Task failed, and interrupt current pipeline execution
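A minimal sketch (the paths and deploy script are illustrative): the first job exits with 78 to stop the pipeline gracefully, so the deploy job is skipped while the pipeline still reports success:
main:
  push:
    - stages:
        - name: check if deploy needed
          script: |
            if [ ! -f dist/app.js ]; then
              echo "nothing to deploy, stopping pipeline"
              exit 78
            fi
        - name: deploy
          script: ./deploy.sh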
Configuration Reuse
include
Using the include keyword, you can import YAML files from the current repository or other repositories into the current configuration file. This helps split large configurations, improving maintainability and reusability.
Usage Examples
template.yml (referenced file)
main:
push:
pipeline_2:
env:
ENV_KEY1: xxx
ENV_KEY3: inner
services:
- docker
stages:
- name: echo
script: echo 222
.cnb.yml (main configuration file)
include:
- https://cnb.cool/<your-repo-slug>/-/blob/main/xxx/template.yml
main:
push:
pipeline_1:
stages:
- name: echo
script: echo 111
pipeline_2:
env:
ENV_KEY2: xxx # Add new environment variable
ENV_KEY3: outer # Override ENV_KEY3 from template.yml
stages:
- name: echo
script: echo 333 # Add new step
Equivalent configuration after merging:
main:
push:
pipeline_1: # Key doesn't exist, add when merging
stages:
- name: echo
script: echo 111
pipeline_2: # Key exists, merge content
env: # Object merge: same-name keys override, new keys add
ENV_KEY1: xxx # From template.yml
ENV_KEY2: xxx # From .cnb.yml (new)
ENV_KEY3: outer # From .cnb.yml (override)
services: # Array merge: append elements
- docker # From template.yml
stages: # Array merge: append elements
- name: echo # From template.yml
script: echo 222
- name: echo # From .cnb.yml
script: echo 333
Syntax Description
include supports three import methods:
include:
# 1. Directly pass configuration file path (string)
- "https://cnb.cool/<your-repo-slug>/-/blob/main/xxx/template.yml"
- "template.yml" # Relative path (relative to repository root)
# 2. Pass object for more control
# path: Configuration file path
# ignoreError: Whether to report error when file not found. true-ignore error; false-report error (default)
- path: "template1.yml"
ignoreError: true
# 3. Directly pass inline YAML configuration object
- config:
main:
push:
- stages:
- name: echo
script: echo "hello world"
Merge Rules
Configurations from different files are merged according to the following rules:
- Array + Array: Merge all elements (append).
- Object + Object: Merge keys, values of same-name keys will be overridden.
- Array + Object or Object + Array: Final result is only array (object is ignored).
- Merge order: Local .cnb.yml configuration overrides configurations imported by include; within the include array, later configurations override earlier ones.
Permission Note: For security reasons and consistent with sensitive information protection principles, include cannot reference files stored in secret repositories, because the complete merged configuration will be displayed on the build details page.
Notes
- Supports nested include, but cannot exceed 50 configuration files.
- Local file paths in include are relative to the project root directory.
- Does not support referencing files in git submodules.
- Does not support using YAML anchors (&, *) across files.