Grammar Manual
CNB Pipeline defines automated build processes through the .cnb.yml configuration file.
Basic Structure
CNB Pipeline adopts a hierarchical structure, from outer to inner:
| Level | Description | Example |
|---|---|---|
| Trigger Branch | Specifies which branches trigger the pipeline | main, dev |
| Trigger Event | Specifies what operations trigger the pipeline | push, pull_request |
| Pipeline | A complete build process containing multiple stages | Build process |
| Stage | A step in the pipeline that can contain one or more jobs | Install dependencies |
| Job | The smallest execution unit that executes specific commands or plugins | Execute commands |
Execution Flow Diagram
Trigger Branch (main)
└─ Trigger Event (push)
├─ Pipeline
│ └─ Stage
│ └─ Job
└─ Pipeline
└─ Stage
└─ Job
Complete Example
Here is a complete example containing all levels:
main: # Trigger Branch: triggered on main branch
push: # Trigger Event: triggered on code push
- name: pipeline-1 # Pipeline name
stages:
- name: stage-1 # Stage name
jobs:
- name: job-1 # Job name
script: echo "in pipeline-1"
- name: pipeline-2 # Pipeline name
stages:
- name: stage-1 # Stage name
jobs:
- name: job-1 # Job name
script: echo "in pipeline-2"
Multiple pipelines can also be declared in object form:
main: # Trigger Branch
push: # Trigger Event
pipeline-1: # Pipeline identifier
stages:
- name: stage-1
jobs:
- name: job-1
script: echo "Hello World"
pipeline-2: # Pipeline identifier
stages:
- name: stage-1
jobs:
- name: job-1
script: echo "Hello World"
Trigger Branch
Purpose: Specifies which branches trigger the pipeline.
Type: String
Supported Patterns
| Pattern Type | Description | Example |
|---|---|---|
| Exact Match | Exact branch name match | main, dev, release |
| Wildcard Match | Match using glob syntax | feature/*, hotfix/* |
| Fallback Match | Match all unspecified branches | $ |
Branch Matching Examples
# Only trigger on main branch
main:
push:
- stages:
- echo "main branch"
# Match all branches starting with feature
feature/*:
push:
- stages:
- echo "feature branch"
# Fallback: match all other branches
$:
push:
- stages:
- echo "other branches"
For More Details
Refer to Trigger Mechanism#Trigger Branch
Trigger Event
Purpose: Specifies what operations trigger pipeline execution.
Common Events
| Event Name | Trigger Timing |
|---|---|
| push | Triggered when code is pushed to a branch |
| pull_request | Triggered when creating or updating a Pull Request |
| tag_push | Triggered when pushing a tag |
| branch.delete | Triggered when deleting a branch |
Trigger Event Examples
main:
# Execute tests on code push
push:
- stages:
- npm test
# Execute lint on PR
pull_request:
- stages:
- npm run lint
For More Details
Refer to Trigger Mechanism#Trigger Event
Pipeline
A Pipeline is a complete build process containing one or more Stages, with each Stage executing sequentially.
Configuration Overview
Pipeline supports the following configuration items:
| Configuration | Type | Description |
|---|---|---|
| name | String | Pipeline name |
| runner | Object | Build node configuration (tags, cpus) |
| docker | Object | Docker environment configuration (image, build, devcontainer, volumes) |
| git | Object | Git repository configuration (enable, submodules, lfs) |
| services | Array | Build services (docker, vscode) |
| env | Object | Environment variables |
| imports | Array<String> | Import environment variables from files |
| label | Object | Pipeline labels |
| stages | Array | Stage task list |
| failStages | Array | Tasks executed on failure |
| endStages | Array | Tasks executed at the end |
| ifNewBranch | Boolean | Execute only on new branches |
| ifModify | Array<String> | Execute when files change |
| breakIfModify | Boolean | Terminate when source branch updates |
| retry | Number | Failure retry count |
| allowFailure | Boolean | Allow failure |
| lock | Object | Pipeline lock configuration |
| sandbox | Boolean | Sandbox mode |
Detailed Configuration
name
- Type:
String
Specifies the pipeline name; the default is pipeline. When there are multiple parallel pipelines, the default names are pipeline, pipeline-1, pipeline-2, and so on. Set name to give each pipeline a distinguishing name.
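For example, a minimal sketch naming two parallel pipelines (the names lint and test are illustrative):

```yaml
main:
  push:
    - name: lint      # instead of the default "pipeline"
      stages:
        - name: run lint
          script: npm run lint
    - name: test      # instead of the default "pipeline-1"
      stages:
        - name: run tests
          script: npm test
```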
runner
- Type:
Object
Specifies build node related parameters.
Sub-parameters:
| Parameter | Type | Description |
|---|---|---|
| tags | String | Array<String> | Specifies which tagged build nodes to use |
| cpus | Number | Specifies the number of CPU cores required for the build |
runner.tags
- Type:
String|Array<String> - Default:
cnb:arch:default
Specifies which tagged build nodes to use. See Build Node for details.
Example:
main:
push:
- runner:
tags: cnb:arch:amd64
stages:
- name: uname
script: uname -a
runner.cpus
- Type:
Number
Specifies the maximum number of CPU cores required for the build (memory = CPU cores × 2G), where CPU and memory do not exceed the actual size of the runner machine.
If not configured, the maximum available CPU cores are determined by the allocated runner machine configuration.
Example:
# cpus = 1, memory = 2G
main:
push:
- runner:
cpus: 1
stages:
- name: echo
script: echo "hello world"
docker
- Type:
Object
Specifies Docker-related parameters. See Build Environment for details.
Sub-parameters:
| Parameter | Description |
|---|---|
| image | Environment image for the current Pipeline; all tasks under this Pipeline will execute in this image environment |
| build | Specifies a Dockerfile to build a temporary image used as the image value |
| devcontainer | Specifies the path to the devcontainer.json file; uses the devcontainer.json content as the pipeline container image |
| volumes | Declares data volumes for caching scenarios |
Priority
When image, build, and devcontainer are specified simultaneously, the priority is build > devcontainer > image.
docker.image
- Type:
Object|String
Specifies the environment image for the current Pipeline; all tasks under this Pipeline will execute in this image environment.
This property and its sub-properties support referencing environment variables. See Variable Substitution.
Properties:
| Property | Type | Description |
|---|---|---|
| image.name | String | Image name, such as node:20 |
| image.dockerUser | String | Specifies Docker username for pulling the image |
| image.dockerPassword | String | Specifies Docker password for pulling the image |
If image is specified as a string, it is equivalent to specifying image.name.
If using images from the Cloud Native Build Docker artifact repository and image.dockerPassword is not set, this parameter will be set to the value of the environment variable CNB_TOKEN.
Example 1: Using DockerHub Public Image
main:
push:
- docker:
# Use node:20 image from Docker official repository as build container
image: node:20
stages:
- name: show version
script: node -v
Example 2: Using CNB Public or Private Images
main:
push:
- docker:
# Use non-public image as build container, need to pass docker username and password
image:
name: docker.cnb.cool/images/pipeline-env:1.0
# When using CNB images, CNB_TOKEN is injected as dockerPassword by default during build
stages:
- name: echo
script: echo "hello world"
Example 3: Using Other Private Images
Declare the Docker username and password required for pulling other private images in a secret repository file.
DOCKER_USER: <user>
DOCKER_PASSWORD: <password>
main:
push:
# Import the secret repository file created above as environment variables
- imports: https://cnb.cool/<your-repo-slug>/-/blob/main/xxx/docker.yml
docker:
image:
name: other-private-images/pipeline-env:1.0
# Reference environment variables declared in secret repository file docker.yml
dockerUser: $DOCKER_USER
dockerPassword: $DOCKER_PASSWORD
stages:
- name: echo
script: echo "hello world"
docker.build
- Type:
Object|String
Specifies a Dockerfile to build a temporary image used as the image value.
This property and its sub-properties support referencing environment variables. See Variable Substitution.
For a complete example of using build to declare the build environment, refer to docker-build-with-by.
Parameter Description:
| Parameter | Type | Description |
|---|---|---|
| build.dockerfile | String | Dockerfile path, supports referencing environment variables |
| build.target | String | Corresponds to the --target parameter in docker build, allowing selective building of specific stages in the Dockerfile |
| build.by | Array<String> | String | Declares a list of files that the build process depends on for caching |
| build.versionBy | Array<String> | String | Used for version control, supports directly passing folder paths |
| build.buildArgs | Object | Insert additional build arguments during build |
| build.ignoreBuildArgsInVersion | Boolean | Whether to ignore buildArgs in version calculation |
| build.sync | String | Whether to wait for docker push to succeed before continuing, default is false |
Note
Files not appearing in the by list, except for the Dockerfile, are treated as non-existent during the image build process.
If build is specified as a string, it is equivalent to specifying build.dockerfile.
Dockerfile Usage:
main:
push:
- docker:
# `build` as string is equivalent to specifying `build.dockerfile`
build: ./image/Dockerfile
stages:
- stage1
- stage2
main:
push:
- docker:
# Specifying `build` as `Object` type allows more control over the image build process
build:
dockerfile: ./image/Dockerfile
# Only build builder, not the entire Dockerfile
target: builder
stages:
- stage1
- stage2
Dockerfile versionBy Usage:
Example: Cache pnpm in the environment image to speed up subsequent pnpm i processes
FROM node:22
RUN npm config set registry https://mirrors.cloud.tencent.com/npm/ \
&& npm i -g pnpm
WORKDIR /data/cache
COPY package.json package-lock.json ./
RUN pnpm i
main:
push:
# Specify build environment through Dockerfile
- docker:
build:
dockerfile: ./Dockerfile
by:
- package.json
- package-lock.json
versionBy:
- package-lock.json
stages:
- name: cp node_modules
# Copy node_modules from container to pipeline working directory
script: cp -r /data/cache/node_modules ./
- name: check node_modules
script: |
if [ -d "node_modules" ]; then
cd node_modules
ls
else
echo "node_modules directory does not exist."
fi
docker.devcontainer
- Type:
String
Specifies the path to the devcontainer.json file; uses the devcontainer.json content as the pipeline container image.
Only supports paths relative to the current repository, such as: .devcontainer/devcontainer.json
For the devcontainer.json specification, see: devcontainer.json
Limited Support
Due to CNB platform characteristics, currently only limited support is provided. Currently supported properties:
- image
- build.dockerfile
- build.context
- build.args
- build.target
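A minimal sketch, assuming the repository contains a .devcontainer/devcontainer.json whose image field points at a usable build image (the path and commands below are illustrative):

```yaml
main:
  push:
    - docker:
        # Use the devcontainer definition as the pipeline container image
        devcontainer: .devcontainer/devcontainer.json
      stages:
        - name: node version
          script: node -v
```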
docker.volumes
- Type:
Array<String>|String
Declares data volumes; multiple volumes can be passed as an array or as a comma-separated string, and can reference environment variables.
Note
This cache is only valid on the current build node and does not support cross-node caching. For details, see Pipeline Cache
Supported Formats:
<group>:<path>:<type>
<path>:<type>
<path>
Parameter Meanings:
| Parameter | Description |
|---|---|
| group | Optional, data volume group; different groups are isolated from each other |
| path | Required, mount path for the data volume; supports absolute paths (starting with /) or relative paths (starting with ./, relative to the workspace) |
| type | Optional, data volume type; default is copy-on-write |
Data Volume Types:
| Type | Abbreviation | Description | Use Case |
|---|---|---|---|
| read-write | rw | Read-write; concurrent write conflicts need to be handled manually | Serial build scenarios |
| read-only | ro | Read-only; write operations throw exceptions | Read-only access |
| copy-on-write | cow | Read-write; changes are merged after pipeline success | Concurrent build scenarios (default) |
| copy-on-write-read-only | - | Read-only; changes are discarded after pipeline ends | PR scenarios |
| data | - | Create temporary data volume; automatically cleaned up when pipeline ends | Shared data |
copy-on-write
Used for caching scenarios; supports concurrency.
copy-on-write technology allows the system to share identical data copies before modification is needed, enabling efficient cache replication. In concurrent environments, this approach avoids cache read-write conflicts because only when actual data modification is needed will a private copy of the data be created. This way, only write operations cause data copying, while read operations can safely proceed in parallel without data consistency concerns. This mechanism significantly improves performance, especially in read-heavy, write-light caching scenarios.
data
Used for sharing data; shares a specified directory from the container for use by other containers.
It works by creating a data volume and mounting it into multiple containers. Unlike directly mounting a directory from the build node into the container, when the specified directory already exists in the container, the container's contents are first copied into the data volume, rather than the data volume contents overwriting the container directory.
volumes Examples
Example 1: Mounting a directory from the build node into the container for local caching
main:
push:
- docker:
image: node:20
# Declare data volumes
volumes:
- /data/config:read-only
- /data/mydata:read-write
# Use cache with updates
- /root/.npm
# Use main cache with updates
- main:/root/.gradle:copy-on-write
stages:
- stage1
- stage2
pull_request:
- docker:
image: node:20
# Declare data volumes
volumes:
- /data/config:read-only
- /data/mydata:read-write
# Use copy-on-write cache
- /root/.npm
- node_modules
# PR uses main cache but does not update
- main:/root/.gradle:copy-on-write-read-only
stages:
- stage1
- stage2
Example 2: Sharing files packaged in a container to other containers
main:
push:
- docker:
image: go-app-cli # Assume there's a go app at /go-app/cli in the image
# Declare data volumes
volumes:
# This path exists in go-app-cli image, so when executing the environment image, this path's contents will be copied to a temporary data volume, which can be shared with other task containers
- /go-app
stages:
- name: show /go-app-cli in job container
image: alpine
script: ls /go-app
git
- Type:
Object
Provides Git repository related configuration.
git.enable
- Type:
Boolean - Default:
true (false for the branch.delete event)
Specifies whether to pull code.
git.submodules
- Type:
Object|Boolean - Default:
{ enable: true, remote: false }
Specifies whether to pull submodules.
When the value is Boolean, it is equivalent to setting git.submodules.enable to that value, with git.submodules.remote taking its default value of false.
git.submodules.enable
- Type:
Boolean - Default:
true
Specifies whether to pull submodules.
git.submodules.remote
- Type:
Boolean - Default:
false
Whether to add the --remote parameter when executing git submodule update, used to pull the latest code from the submodule each time.
Basic Usage:
main:
push:
- git:
enable: true
submodules: true
stages:
- name: echo
script: echo "hello world"
- git:
enable: true
submodules:
enable: true
remote: true
stages:
- name: echo
script: echo "hello world"
git.lfs
- Type:
Object|Boolean - Default:
true
Specifies whether to pull LFS files.
Supports Object form to specify specific parameters; when fields are omitted, defaults are:
{
"enable": true
}
Basic Usage:
main:
push:
- git:
enable: true
lfs: true
stages:
- name: echo
script: echo "hello world"
- git:
enable: true
lfs:
enable: true
stages:
- name: echo
script: echo "hello world"
git.lfs.enable
Specifies whether to pull LFS files.
services
- Type:
Array<String>|Array<Object>
Used to declare services needed during the build, format: name:[version], version is optional.
Currently Supported Services:
- docker
- vscode
service:docker
Used to enable the dind service.
Declare when docker build, docker login, etc. operations are needed during the build; automatically injects docker daemon and docker cli into the environment.
Example:
main:
push:
- services:
- docker
docker:
image: alpine
stages:
- name: docker info
script:
- docker info
- docker ps
This service will automatically docker login to the CNB Docker artifact repository mirror source (docker.cnb.cool), allowing direct docker push to the current repository's Docker artifact repository in subsequent tasks.
Example:
main:
push:
- services:
- docker
stages:
- name: build and push
script: |
# Dockerfile exists in root directory
docker build -t ${CNB_DOCKER_REGISTRY}/${CNB_REPO_SLUG_LOWERCASE}:latest .
docker push ${CNB_DOCKER_REGISTRY}/${CNB_REPO_SLUG_LOWERCASE}:latest
If you need to use buildx to build multi-architecture/platform images, you can enable the rootlessBuildkitd feature:
main:
push:
- docker:
image: golang:1.24
services:
- name: docker
options:
rootlessBuildkitd:
enabled: true
env:
IMAGE_TAG: ${CNB_DOCKER_REGISTRY}/${CNB_REPO_SLUG_LOWERCASE}:latest
stages:
- name: go build
script: ./build.sh
- name: docker login
script: docker login -u ${CNB_TOKEN_USER_NAME} -p "${CNB_TOKEN}" ${CNB_DOCKER_REGISTRY}
- name: docker build and push
script: docker buildx build -t ${IMAGE_TAG} --platform linux/amd64,linux/amd64/v2,linux/amd64/v3,linux/arm64,linux/riscv64,linux/ppc64le,linux/s390x,linux/386,linux/mips64le,linux/mips64,linux/loong64,linux/arm/v7,linux/arm/v6 --push .
service:vscode
Declare when Workspaces is needed.
Example:
$:
vscode:
- services:
- vscode
- docker
docker:
image: alpine
stages:
- name: uname
script: uname -a
To enable preview-only mode, refer to Documentation.
To specify the development environment offline keep-alive time (the development environment will be reclaimed after the specified duration), use the keepAliveTimeout parameter:
$:
vscode:
- docker:
build: .ide/Dockerfile
services:
- docker
- name: vscode
options:
# Keep-alive time in milliseconds; if not set, defaults to 10 minutes without heartbeat (no http/ssh connections detected in the development environment) before closing the environment
keepAliveTimeout: 10m
# Tasks executed after the development environment starts
stages:
- name: ls
script: ls -al
keepAliveTimeout Parameter Description:
| Property | Value |
|---|---|
| Type | Number | String (default unit is milliseconds) |
| Default | 600000 (ms, 10 minutes) |
| Description | Development environment offline keep-alive time |
Supported time units:
| Unit | Description |
|---|---|
| ms | Milliseconds (default) |
| s | Seconds |
| m | Minutes |
| h | Hours |
env
- Type:
Object
Declares an object as environment variables: property names are environment variable names, property values are environment variable values.
Used during task execution; valid for all non-plugin tasks within the current Pipeline.
Example:
main:
push:
- services:
- docker
env:
some_key: some_value
stages:
- name: some job
script: echo "$some_key"
imports
- Type:
Array<String>|String
Specifies CNB repository file paths (file relative paths or page URLs) as the source of environment variables; functions the same as env.
File content is parsed as an object, with property names as environment variable names and property values as environment variable values.
Used during task execution; valid for all non-plugin tasks within the current Pipeline.
Local relative paths like ./env.yml will be concatenated into file page URLs for loading. That is: files that exist locally but not remotely will not be referenced.
Security
Cloud Native Build now supports Secret Repository, which is more secure and supports file reference auditing. Generally, a secret repository is used to store credentials such as npm, docker, etc.
Same Name Key Priority
- When configuring imports as an array, if there are duplicate parameters, later configurations will override earlier ones
- If a key also appears in env, the env parameter overrides the value from the imports file
Variable Substitution
imports file paths can reference environment variables. When configured as an array, later file paths can reference variables declared in earlier files.
FILE: "https://cnb.cool/<your-repo-slug>/-/blob/main/xxx/env2.yml"
TEST_TOKEN: some token
main:
push:
- imports:
- ./env1.yml
# FILE is an environment variable declared in env1.yml
- $FILE
stages:
- name: echo
# TEST_TOKEN is an environment variable declared in env2.yml
script: echo $TEST_TOKEN
Reference Authorization
Referenced files can declare accessible scopes. See Configuration File Reference Authorization.
Example:
Match all repositories under a project:
key: value
allow_slugs:
- team_name/project_name/**
Allow to be referenced by all repositories:
key: value
allow_slugs:
- "**"
File Parsing Rules
Supports parsing multiple file formats and converting them to environment variables.
YAML Files
Parsed as YAML format objects; supported file formats are: .yaml, .yml.
DOCKER_USER: "username"
DOCKER_TOKEN: "token"
DOCKER_REGISTRY: "https://xxx/xxx"
JSON Files
Parsed as JSON format objects; supported file formats are: .json.
{
"DOCKER_USER": "username",
"DOCKER_TOKEN": "token",
"DOCKER_REGISTRY": "https://xxx/xxx"
}
Certificate Files
Certificate files are parsed as objects with the filename (. replaced with _) as the property name and the file content as the property value.
Supported file formats include: .crt, .cer, .key, .pem, .pub, .pk, .ppk.
-----BEGIN CERTIFICATE-----
MIIE...
-----END CERTIFICATE-----
main:
push:
- imports: https://cnb.cool/<your-repo-slug>/-/blob/main/server.crt
stages:
- echo "$server_crt" > server.crt
- cat server.crt
Other Text Files
Besides the above file formats, other text files are parsed as objects in key=value format.
DB_HOST=localhost
DB_PORT=5432Deep Properties
In most scenarios, configuration files are simple single-level properties, such as:
// env.json
{
"token": "private token",
"password": "private password"
}
To handle complex configuration files and scenarios, deep properties (the first level cannot be an array) are flattened into single-level objects according to the following rules:
- Property names are retained; property values are converted to strings
- If property values are objects (including arrays), they are recursively flattened, with property paths connected by
_
{
"key1": [
"value1",
"value2"
],
"key2": {
"subkey1": [
"value3",
"value4"
],
"subkey2": "value5"
},
"key3": [
"value6",
{
"subsubkey1": "value7"
}
],
"key4": "value8"
}
Will be flattened to:
{
// Original property values converted to strings
"key1": "value1,value2",
// If property values are objects, recursively flatten to add additional properties
"key1_0": "value1",
"key1_1": "value2",
"key2": "[object Object]",
"key2_subkey1": "value3,value4",
"key2_subkey1_0": "value3",
"key2_subkey1_1": "value4",
"key2_subkey2": "value5",
"key3": "value6,[object Object]",
"key3_0": "value6",
"key3_1": "[object Object]",
"key3_1_subsubkey1": "value7",
"key4": "value8"
}
main:
push:
- imports:
- ./env.json
stages:
- name: echo
script: echo $key3_1_subsubkey1
label
- Type:
Object
Assigns labels to the pipeline. Each label value can be a string or an array of strings. These labels can be used for subsequent pipeline record filtering and other functions.
Here's an example of a workflow: main branch merge triggers pre-release environment, tag triggers production environment.
main:
push:
- label:
# Regular pipeline for Master branch
type:
- MASTER
- PREVIEW
stages:
- name: install
script: npm install
- name: CCK-lint
script: npm run lint
- name: BVT-build
script: npm run build
- name: UT-test
script: npm run test
- name: pre release
script: ./pre-release.sh
$:
tag_push:
- label:
# Regular pipeline for product release branch
type: RELEASE
stages:
- name: install
script: npm install
- name: build
script: npm run build
- name: DELIVERY-release
script: ./release.sh
stages
- Type:
Array<Stage|Job>
Defines a set of stage tasks; each stage runs serially.
failStages
- Type:
Array<Stage|Job>
Defines a set of failure stage tasks. When the normal process fails, these tasks will be executed sequentially.
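A minimal sketch, assuming a hypothetical ./notify-failure.sh script in the repository:

```yaml
main:
  push:
    - stages:
        - name: build
          script: npm run build
      failStages:
        # Runs only when the normal stages fail
        - name: notify on failure
          script: ./notify-failure.sh
```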
endStages
- Type:
Array<Stage|Job>
Defines a set of tasks to be executed at the end of the pipeline. After the pipeline stages/failStages complete, before the pipeline ends, these tasks will be executed sequentially.
When the pipeline prepare phase succeeds, endStages will be executed regardless of whether stages succeed. Moreover, endStages success or failure does not affect the pipeline status (i.e., endStages may fail while the pipeline status is success).
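For example, a cleanup task that runs whether the build succeeds or fails (the directory is illustrative):

```yaml
main:
  push:
    - stages:
        - name: build
          script: npm run build
      endStages:
        # Runs after stages/failStages, regardless of build outcome
        - name: cleanup
          script: rm -rf ./tmp
```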
ifNewBranch
- Type:
Boolean - Default:
false
When true, this Pipeline will only be executed when the current branch is a new branch (i.e., CNB_IS_NEW_BRANCH is true).
Condition Combination
When ifNewBranch / ifModify exist simultaneously, if any condition is met, this Pipeline will be executed.
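A minimal sketch: this pipeline runs only on the first push of a branch:

```yaml
feature/*:
  push:
    - ifNewBranch: true
      stages:
        - name: init
          script: echo "first build on this branch"
```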
ifModify
- Type:
Array<String>|String
Specifies that this Pipeline executes only when matching files change; accepts a glob expression string or an array of such strings.
Supported Events
- Non-new-branch push events: changed files are determined by comparing before and after
- commit.add events: changed files are counted from the new commits
- Non-new-branch push and commit.add pipelines triggered via cnb:apply follow the same counting rules
- PR-triggered events: changed files are counted from the PR
- PR-triggered events via cnb:apply: changed files are counted from the PR
Limitation
Because there may be many file changes, changed-file counting is capped at 300 files. Outside the cases listed above, changed files cannot be counted, and ifModify checks are ignored.
Examples
Example 1: When the modified file list contains a.js or b.js, this Pipeline will be executed.
ifModify:
- a.js
- b.js
Example 2: When the modified file list contains files with the js extension, this Pipeline will be executed.
ifModify:
- "**/*.js"
- "*.js"
Example 3: Negative matching, excluding the legacy directory and all Markdown files; triggered when other files change.
ifModify:
- "**"
- "!(legacy/**)"
- "!(**/*.md)"
- "!*.md"
Example 4: Negative matching, triggered when there are changes in the src directory except for the src/legacy directory.
ifModify:
- "src/**"
- "!(src/legacy/**)"
breakIfModify
- Type:
Boolean - Default:
false
Before Job execution, if the source branch has been updated, terminate the build.
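A minimal sketch: if the source branch receives new commits while this pipeline is running, the build is terminated before the next Job starts:

```yaml
main:
  pull_request:
    - breakIfModify: true
      stages:
        - name: build
          script: npm run build
```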
retry
- Type:
Number - Default:
0
Failure retry count; 0 means no retries.
Retry intervals are 1s, 2s, 4s, 8s...
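For example, retrying a flaky step up to 3 times (./flaky-test.sh is hypothetical):

```yaml
main:
  push:
    - retry: 3
      stages:
        - name: flaky tests
          script: ./flaky-test.sh
```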
allowFailure
- Type:
Boolean - Default:
false
Whether to allow the current pipeline to fail.
When this parameter is set to true, the pipeline's failure status will not be reported to CNB.
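A minimal sketch of an optional check whose failure should not fail the overall build (the script name is illustrative):

```yaml
main:
  push:
    - allowFailure: true
      stages:
        - name: experimental lint
          script: npm run experimental-lint
```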
lock
- Type:
Object|Boolean
Sets a lock for the pipeline; the lock is automatically released after the pipeline completes. Locks cannot be used across repositories.
Behavior: After Pipeline A acquires the lock, when Pipeline B requests the lock, it can either terminate A or wait for A to complete and release the lock before acquiring the lock and continuing execution.
Parameter Description:
| Parameter | Type | Default | Description |
|---|---|---|---|
| key | String | Branch name-Pipeline name | Custom lock name |
| expires | Number | 3600 (1 hour) | Lock expiration time (seconds) |
| timeout | Number | 3600 (1 hour) | Timeout (seconds) |
| cancel-in-progress | Boolean | false | Whether to terminate the pipeline occupying or waiting for the lock |
| wait | Boolean | false | Whether to wait when the lock is occupied |
| cancel-in-wait | Boolean | false | Whether to terminate pipelines waiting for the lock |
If lock is true, all parameters use default values.
Example 1: lock is Boolean format
main:
push:
- lock: true
stages:
- name: stage1
script: echo "stage1"
Example 2: lock is Object format
main:
push:
- lock:
key: key
expires: 600 # 10 minutes
wait: true
timeout: 60 # Wait up to 1 minute
stages:
- name: stage1
script: echo "stage1"
Example 3: Stop the previous in-progress pipeline under pull_request
main:
pull_request:
- lock:
key: pr
cancel-in-progress: true
stages:
- name: echo hello
script: echo "stage1"
sandbox
- Type:
Boolean
Whether to enable sandbox mode.
When set to true, the following environment variables in the pipeline become invalid:
Stage
- Type:
Job|Object<name: Job>
Stage represents a stage in the pipeline that can contain one or more Jobs. See Job Introduction for details.
Configuration Overview
Stage supports the following configuration items:
| Configuration | Type | Description |
|---|---|---|
| name | String | Stage name |
| ifNewBranch | Boolean | Execute only on new branches |
| ifModify | Array<String> | Execute when files change |
| if | String | Conditional execution script |
| env | Object | Environment variables |
| imports | Array<String> | Import environment variables from files |
| retry | Number | Failure retry count |
| lock | Object | Stage lock configuration |
| image | Object | Environment image |
| jobs | Array | Task list |
Structure Description
Single Job
When Stage has only one Job, the Stage level can be omitted and the Job can be declared directly.
stages:
- name: stage1
jobs:
- name: job A
script: echo hello
Can be simplified to:
- stages:
- name: job A
script: echo hello
When Job is a string, it can be treated as a script task, with name and script both taking the string value, and can be further simplified to:
- stages:
- echo hello
Serial Jobs
When jobs is an array, this set of Jobs will execute sequentially.
# Serial execution
stages:
- name: install
jobs:
- name: job1
script: echo "job1"
- name: job2
script: echo "job2"
Parallel Jobs
When jobs is an object, this set of Jobs will execute in parallel.
# Parallel execution
stages:
- name: install
jobs:
job1:
script: echo "job1"
job2:
script: echo "job2"
Multiple Jobs can be flexibly organized in serial and parallel. Example of serial first, then parallel:
main:
push:
- stages:
- name: serial first
script: echo "serial"
- name: parallel
jobs:
parallel job 1:
script: echo "1"
parallel job 2:
script: echo "2"
- name: serial next
script: echo "serial next"
Configuration Description
name
- Type:
String
Stage name.
ifNewBranch
- Type:
Boolean - Default:
false
When true, this Stage will only be executed when the current branch is a new branch (i.e., CNB_IS_NEW_BRANCH is true).
Condition Combination
When ifNewBranch / ifModify / if exist simultaneously, if any condition is met, this Stage will be executed.
ifModify
- Type:
Array<String>|String
Specifies that this Stage will only be executed when corresponding files change. Uses glob matching expressions.
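For example, a sketch that runs a docs build only when files under docs/ change (the script names are illustrative):

```yaml
main:
  push:
    - stages:
        - name: build docs
          ifModify:
            - "docs/**"
          script: npm run build:docs
        - name: build app
          script: npm run build
```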
if
- Type:
Array<String>|String
Specifies a shell script; when the script exit code is 0, this Stage is executed.
Example 1: Judging variable value
main:
push:
- env:
IS_NEW: true
stages:
- name: is new
if: |
[ "$IS_NEW" = "true" ]
script: echo is new
- name: is not new
if: |
[ "$IS_NEW" != "true" ]
script: echo not new
Example 2: Judging task output
main:
push:
- stages:
- name: make info
script: echo 'haha'
exports:
info: RESULT
- name: run if RESULT is haha
if: |
[ "$RESULT" = "haha" ]
script: echo $RESULT
env
- Type:
Object
Declares environment variables; only valid for the current Stage.
Stage env has higher priority than Pipeline env.
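A minimal sketch of the override order (values are illustrative):

```yaml
main:
  push:
    - env:
        TARGET: production       # Pipeline-level value
      stages:
        - name: default target
          script: echo "$TARGET" # prints "production"
        - name: staging target
          env:
            TARGET: staging      # Stage env overrides Pipeline env
          jobs:
            - name: show
              script: echo "$TARGET" # prints "staging"
```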
imports
- Type:
Array<String>|String
Imports environment variables from files; only valid for the current Stage. Same usage as Pipeline imports.
retry
- Type:
Number - Default:
0
Failure retry count; 0 means no retries.
Retry intervals are 1s, 2s, 4s, 8s...
lock
- Type:
Boolean|Object
Sets a lock for the Stage; the lock is automatically released after the Stage completes.
Behavior: After Task A acquires the lock, Task B needs to wait for the lock to be released before continuing execution.
Parameter Description:
| Parameter | Type | Description |
|---|---|---|
| lock.key | String | Custom lock name; default is branch name-pipeline name-stage index |
| lock.expires | Number | Lock expiration time (seconds); default 3600 |
| lock.wait | Boolean | Whether to wait when the lock is occupied; default false |
| lock.timeout | Number | Lock wait timeout (seconds); default 3600 |
If lock is true, all parameters use default values.
Example 1: lock is Boolean format
main:
push:
- stages:
- name: stage1
lock: true
jobs:
- name: job1
script: echo "job1"

Example 2: lock is Object format
main:
push:
- stages:
- name: stage1
lock:
key: key
expires: 600 # 10 minutes
wait: true
timeout: 60 # Wait up to 1 minute
jobs:
- name: job1
script: echo "job1"

Stage.image
- Type: Object|String
Specifies the environment image for the current Stage; all tasks under this Stage will execute in this image environment by default.
This property supports referencing environment variables. See Variable Substitution.
Properties:
| Property | Type | Description |
|---|---|---|
| image.name | String | Image name, such as node:20 |
| image.dockerUser | String | Specifies Docker username for pulling the image |
| image.dockerPassword | String | Specifies Docker password for pulling the image |
If image is specified as a string, it is equivalent to specifying image.name.
If using images from the Cloud Native Build Docker artifact repository and image.dockerPassword is not set, this parameter will be set to the value of the environment variable CNB_TOKEN.
jobs
- Type: Array<Job>|Object<name,Job>
Defines a set of tasks. Array format is serial execution; object format is parallel execution.
Job
Job (task) is the smallest execution unit and comes in three types: script tasks, plugin tasks, and builtin tasks. The first two both accept an image parameter that defines the runtime environment, which newcomers often confuse. Refer to Script Task vs Plugin Task to understand the difference between the two.
Configuration Overview
Job supports the following configuration items:
| Configuration | Type | Description |
|---|---|---|
| name | String | Task name |
| ifModify | Array<String> | Execute when files change |
| ifNewBranch | Boolean | Execute only on new branches |
| if | String | Conditional execution script |
| breakIfModify | Boolean | Terminate when source branch updates |
| skipIfModify | Boolean | Skip when source branch updates |
| env | Object | Environment variables |
| imports | Array<String> | Import environment variables from files |
| exports | Object | Export environment variables |
| timeout | Number | Timeout duration |
| allowFailure | Boolean | Allow failure |
| lock | Object | Task lock configuration |
| retry | Number | Failure retry count |
| type | String | Builtin task type |
| options | Object | Builtin task parameters |
| optionsFrom | Array<String> | Load parameters from files |
| script | String | Shell script |
| commands | String | Shell script (Drone CI compatible) |
| image | Object | Runtime environment image |
| settings | Object | Plugin parameters |
| settingsFrom | Array<String> | Load plugin parameters from files |
| args | Array<String> | Plugin parameters |
Task Types
Script Task
Executes Shell script commands.
Parameter Description:
| Parameter | Type | Description |
|---|---|---|
| script | Array<String>\|String | Shell script to execute; array items are joined with && |
| image | String | Optional; specifies script runtime environment |
Example:
- name: install
image: node:20
script: npm install

Simplified Syntax: Script tasks can be simplified to strings, where both script and name take the string value:
- echo hello

Equivalent to:
- name: echo hello
script: echo hello

Plugin Task
Plugins are Docker images, also known as image tasks.
Features:
- Flexible execution environment
- Easy to share and reuse
- Can be used across CI platforms
Working Principle: the plugin's behavior is configured by passing parameters as environment variables to the image's ENTRYPOINT.
Note
Custom environment variables set through imports or env are not passed to plugins, but they can be used for variable substitution in settings and args. CNB system environment variables are passed to plugins.
Parameter Description:
| Parameter | Type | Description |
|---|---|---|
| name | String | Task name |
| image | Object\|String | Plugin image; see job.image for details |
| settings | Object | Plugin parameters; fill according to plugin documentation; supports $VAR or ${VAR} to reference environment variables |
| settingsFrom | Array<String>\|String | Specifies local or Git repository file paths to load as plugin task parameters |
Example
with imports:
- name: npm publish
image: plugins/npm
imports: https://cnb.cool/<your-repo-slug>/-/blob/main/xxx/npm.json
settings:
username: $NPM_USER
password: $NPM_PASS
email: $NPM_EMAIL
registry: https://mirrors.xxx.com/npm/
folder: ./

The referenced npm.json file:
{
"username": "xxx",
"password": "xxx",
"email": "xxx@email.com",
"allow_slugs": ["cnb/**/**"],
"allow_images": ["plugins/npm"]
}

with settingsFrom:
- name: npm publish
image: plugins/npm
settingsFrom: https://cnb.cool/<your-repo-slug>/-/blob/main/xxx/npm-settings.json
settings:
registry: https://mirrors.xxx.com/npm/
folder: ./

The referenced npm-settings.json file:
{
"username": "xxx",
"password": "xxx",
"email": "xxx@email.com",
"allow_slugs": ["cnb/cnb"],
"allow_images": ["plugins/npm"]
}

Builtin Task
Uses CNB-provided builtin functions.
Parameter Description:
| Parameter | Type | Description |
|---|---|---|
| type | String | Specifies the builtin task type to execute |
| options | Object | Builtin task parameters |
| optionsFrom | Array<String>\|String | Load parameters from files |
Fields in options have higher priority than optionsFrom.
References to configuration files are subject to permission control. See Configuration File Reference Authorization.
Example:
name: install
type: INTERNAL_JOB_NAME
optionsFrom: ./options.json
options:
key1: value1
key2: value2

// ./options.json
{
"key1": "value1",
"key2": "value2"
}

Configuration Description
name
- Type: String
Task name.
ifModify
- Type: Array<String>|String
Specifies that this task will only be executed when corresponding files change. Same usage as Stage ifModify.
ifNewBranch
- Type: Boolean
- Default: false
When true, this task will only be executed when the current branch is a new branch (i.e., CNB_IS_NEW_BRANCH is true). Same usage as Stage ifNewBranch.
if
- Type: Array<String>|String
Specifies a shell script; when the script exit code is 0, this task is executed. Same usage as Stage if.
breakIfModify
- Type: Boolean
- Default: false
Before Job execution, if the source branch has been updated, terminate the current task.
skipIfModify
- Type: Boolean
- Default: false
Before Job execution, if the source branch has been updated, skip the current task.
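A minimal sketch contrasting the two flags on jobs that may race with new pushes to the source branch (job names and scripts are illustrative):

```yaml
main:
  push:
    - stages:
        - name: expensive checks
          jobs:
            - name: full test
              breakIfModify: true   # Terminate this task if the source branch was updated
              script: npm test
            - name: report
              skipIfModify: true    # Skip this task instead of terminating
              script: echo "report"
```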
env
- Type: Object
Declares environment variables; only valid for the current Job.
Job env has the highest priority.
imports
- Type: Array<String>|String
Imports environment variables from files; only valid for the current Job. Same usage as Stage imports.
exports
- Type: Object
Exports task execution results as environment variables for use by subsequent tasks. Lifecycle is the current Pipeline.
See Environment Variables for details.
timeout
- Type: Number|String
Sets the timeout duration and no-output timeout duration for a single task. Default timeout for a single task is 1 hour, maximum cannot exceed 12 hours; no-output timeout is 10 minutes.
Valid for script tasks and plugin tasks.
Supported Time Units:
| Unit | Description |
|---|---|
| ms | Milliseconds (default) |
| s | Seconds |
| m | Minutes |
| h | Hours |
Example:
name: timeout job
script: sleep 1d
timeout: 100s # Task will time out and exit after 100 seconds

See Timeout Policy for details.
allowFailure
- Type: Boolean|String
- Default: false
Whether to allow the current task to fail.
When set to true, a task failure will not affect subsequent execution or the final result.
When the value is a String, it can reference environment variables.
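For example, an optional lint step that must not block the build; the second job sketches the String form resolved from an environment variable (the variable name is illustrative):

```yaml
main:
  push:
    - env:
        CAN_FAIL: "true"
      stages:
        - name: checks
          jobs:
            - name: eslint
              allowFailure: true       # Failure here does not fail the pipeline
              script: npm run lint
            - name: optional check
              allowFailure: $CAN_FAIL  # String form: value read from the environment
              script: ./check.sh
```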
lock
- Type: Object|Boolean
Sets a lock for the Job; the lock is automatically released after the Job completes.
Behavior: After Task A acquires the lock, Task B needs to wait for the lock to be released before continuing execution.
Parameter Description:
| Parameter | Type | Description |
|---|---|---|
| lock.key | String | Custom lock name; default is branch name-pipeline name-stage index-job name |
| lock.expires | Number | Lock expiration time (seconds); default 3600 |
| lock.wait | Boolean | Whether to wait when the lock is occupied; default false |
| lock.timeout | Number | Lock wait timeout (seconds); default 3600 |
If lock is true, all parameters use default values.
Example 1: Simplified syntax
name: lock
lock: true
script: echo 'job lock'

Example 2: Detailed configuration
name: lock
lock:
key: key
expires: 10
wait: true
script: echo 'job lock'

retry
- Type: Number
- Default: 0
Failure retry count; 0 means no retries.
Retry intervals are 1s, 2s, 4s, 8s...
type
- Type: String
Specifies the builtin task type to execute.
options
- Type: Object
Builtin task parameter configuration.
optionsFrom
- Type: Array<String>|String
Loads builtin task parameters from files. Similar to the imports parameter; when configured as an array, later configurations override earlier ones.
Fields in options have higher priority than optionsFrom.
script
- Type: Array<String>|String
Shell script to execute. When an array, automatically concatenated with &&. Script exit code is used as the task exit code.
Note
The pipeline uses sh as the default command interpreter; different images may provide different interpreters.
commands
- Type: Array<String>|String
Same function as the script parameter; higher priority than script. Mainly for Drone CI syntax compatibility.
Job.image
- Type: Object|String
Specifies the runtime environment image for the task. Can be used for script tasks or plugin tasks.
This property supports referencing environment variables. See Variable Substitution.
Properties:
| Property | Type | Description |
|---|---|---|
| image.name | String | Image name, such as node:20 |
| image.dockerUser | String | Specifies Docker username for pulling the image |
| image.dockerPassword | String | Specifies Docker password for pulling the image |
If image is specified as a string, it is equivalent to specifying image.name.
If using images from the Cloud Native Build Docker artifact repository and image.dockerPassword is not set, this parameter will be set to the value of the environment variable CNB_TOKEN.
settings
- Type: Object
Plugin task parameter configuration. Fill according to plugin documentation; supports $VAR or ${VAR} to reference environment variables.
settingsFrom
- Type: Array<String>|String
Loads plugin task parameters from files. Similar to the imports parameter.
Priority:
- When configured as an array, later configurations override earlier ones
- Fields in settings have higher priority than settingsFrom
References to configuration files are subject to permission control. See Configuration File Reference Authorization.
args
- Type: Array<String>
Parameters passed when executing the plugin image; content is appended to ENTRYPOINT; only arrays are supported.
Example:
- name: npm publish
image: plugins/npm
args:
- ls

Equivalent to executing:
docker run plugins/npm ls

Task Exit Code Description
After task execution completes, an exit code is returned; different exit codes have different meanings:
| Exit Code | Description |
|---|---|
| 0 | Task succeeded; continue executing subsequent tasks |
| 78 | Task succeeded but interrupts current pipeline execution (can actively execute exit 78 in scripts to interrupt the pipeline) |
| Other numbers | Task failed; also interrupts current pipeline execution |
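As a sketch of the exit-78 behavior described above, a guard job can stop the remaining pipeline gracefully when there is nothing to do (the directory check is illustrative):

```yaml
main:
  push:
    - stages:
        - name: guard
          jobs:
            - name: skip if no dist
              script: |
                if [ ! -d dist ]; then
                  echo "nothing to publish, stopping pipeline"
                  exit 78  # Succeeds, but interrupts the rest of the pipeline
                fi
        - name: publish
          jobs:
            - echo publishing
```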
Configuration Reuse
include
Using the include keyword, you can import YAML files from the current repository or other repositories into the current configuration file. This helps split large configurations for better maintainability and reusability.
Usage Examples
template.yml (referenced file):
main:
push:
pipeline_2:
env:
ENV_KEY1: xxx
ENV_KEY3: inner
services:
- docker
stages:
- name: echo
script: echo 222

.cnb.yml (main configuration file):
include:
- https://cnb.cool/<your-repo-slug>/-/blob/main/xxx/template.yml
main:
push:
pipeline_1:
stages:
- name: echo
script: echo 111
pipeline_2:
env:
ENV_KEY2: xxx # Add environment variable
ENV_KEY3: outer # Override ENV_KEY3 from template.yml
stages:
- name: echo
script: echo 333 # Add step

Merged Equivalent Configuration:
main:
push:
pipeline_1: # Key doesn't exist, added during merge
stages:
- name: echo
script: echo 111
pipeline_2: # Key exists, content merged
env: # Object merge: same keys overwritten, new keys added
ENV_KEY1: xxx # From template.yml
ENV_KEY2: xxx # From .cnb.yml (added)
ENV_KEY3: outer # From .cnb.yml (overridden)
services: # Array merge: elements appended
- docker # From template.yml
stages: # Array merge: elements appended
- name: echo # From template.yml
script: echo 222
- name: echo # From .cnb.yml
script: echo 333

Syntax Description
include supports three import methods:
include:
# 1. Directly pass configuration file path (string)
- "https://cnb.cool/<your-repo-slug>/-/blob/main/xxx/template.yml"
- "template.yml" # Relative path (relative to repository root)
# 2. Pass object for more control
# path: Configuration file path
# ignoreError: Whether to ignore errors when file not found. true-ignore error; false-report error (default)
- path: "template1.yml"
ignoreError: true
# 3. Directly pass inline YAML configuration object
- config:
main:
push:
- stages:
- name: echo
script: echo "hello world"

Merge Rules
Configurations between different files are merged according to the following rules:
| Scenario | Merge Result |
|---|---|
| Array + Array | Merge all elements (append) |
| Object + Object | Merge keys; same key values are overwritten |
| Array + Object or Object + Array | Final result is only array (object is ignored) |
Merge Order:
- Local .cnb.yml configuration overrides include-imported configuration
- Within the include array, later entries override earlier ones
Permission Note
For security reasons and consistent with sensitive-information protection, include cannot reference files stored in secret repositories, because the merged complete configuration is displayed on the build details page.
Notes
- Nested includes are supported, but no more than 50 configuration files
- Include local file paths are relative to the project root directory
- Files in git submodules cannot be referenced
- Cross-file YAML anchors (&, *) are not supported