Environment Variables
Cloud Native Build not only supports declaring the build environment but also allows defining environment variables used within the build environment.
Cloud Native Build comes with some default environment variables for direct use.
Declaring Environment Variables
Declare environment variables via env
- Environment variables declared in Pipeline are effective for the entire pipeline.
- Environment variables declared in Job are only effective for the current task.
main:
  push:
    - docker:
        # Declare build environment
        image: node:22
      # Environment variables declared at the pipeline level are available to all tasks in the pipeline
      env:
        PIPELINE_ENV_1: pipeline environment variable 1
        PIPELINE_ENV_2: pipeline environment variable 2
      stages:
        - name: Output build environment information
          script: node -v
        - name: Output pipeline environment variables
          script:
            - echo $PIPELINE_ENV_1
            - echo $PIPELINE_ENV_2
        - name: Output job environment variables
          # Environment variables declared in the job are only effective for the current task
          env:
            JOB_ENV: job environment variable
          script: echo $JOB_ENV
Importing Environment Variables
- Use imports to import a secrets file, injecting sensitive information into environment variables for subsequent tasks
- When there's a conflict between env and imports keys, env takes precedence
main:
  push:
    - services:
        - docker
      # Import secrets file as environment variables
      imports: https://cnb.cool/<your-repo-slug>/-/blob/main/xxx/envs.yml
      stages:
        - name: docker info
          script: docker info
        - name: docker login
          # Where TEST_DOCKER_DOMAIN, TEST_DOCKER_USER, and TEST_DOCKER_PWD are variables defined in the secret repository file.
          script: docker login $TEST_DOCKER_DOMAIN -u $TEST_DOCKER_USER -p $TEST_DOCKER_PWD
Example content of https://cnb.cool/<your-repo-slug>/-/blob/main/xxx/envs.yml:
# Docker registry domain
TEST_DOCKER_DOMAIN: registry.example.com
# Docker username
TEST_DOCKER_USER: your_docker_username
# Docker password
TEST_DOCKER_PWD: your_docker_password
Variable Naming Restrictions
In the shell, environment variable names are restricted. According to the POSIX standard, names should follow these rules:
- Only letters (upper or lower case), digits, and the underscore (_) are allowed
- The first character cannot be a digit
Variables whose names don't follow these rules will be ignored.
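For example, a minimal sketch (the variable names below are illustrative) showing which names CI accepts:
main:
  push:
    - env:
        BUILD_TARGET: linux        # valid: letters, digits and underscores only
        _private_flag: "1"         # valid: may start with an underscore
        2nd_stage: ignored         # invalid: starts with a digit
        api-key: ignored           # invalid: contains a hyphen
      stages:
        - name: echo env
          script: echo $BUILD_TARGET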
Exporting Environment Variables
After a Job completes execution, there is a result object. You can use exports to export properties from result to environment variables, with a lifecycle limited to the current Pipeline.
Syntax format:
exports:
  from-key: to-key
- from-key: The property name from the Job result object to export; supports environment variables and deep property access (similar to lodash.get)
- to-key: The environment variable name to map to
There are three ways to set result:
- Script task execution results
- Parsing custom variables from output
- Built-in task results
Script Task Execution Results
After a script task executes, the Job result has these properties:
- code: Return code
- stdout: Standard output
- stderr: Standard error
- info: Mixed output of stdout and stderr in chronological order
Note: Use printf "%s" "hello\nworld" to output variable values; it adds no trailing newline and leaves escape sequences such as \n uninterpreted.
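For instance, a minimal sketch (the variable name is illustrative) of exporting a value written with printf so that no trailing newline is added:
main:
  push:
    - stages:
        - name: set env with printf
          # printf adds no trailing newline and keeps \n as literal characters
          script: printf "%s" "hello\nworld"
          exports:
            stdout: CUSTOM_ENV_PRINTF
        - name: echo env
          script: echo $CUSTOM_ENV_PRINTF
The example that follows exports code and info from a script task in the same way.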
main:
  push:
    - stages:
        - name: set env
          script: echo -n $(date "+%Y-%m-%d %H:%M")
          exports:
            code: CUSTOM_ENV_DATE_CODE
            info: CUSTOM_ENV_DATE_INFO
        - name: echo env
          script:
            - echo $CUSTOM_ENV_DATE_CODE
            - echo $CUSTOM_ENV_DATE_INFO
For tasks with if, ifModify, ifNewBranch etc. logic, you can set:
- skip: If the task is skipped due to these conditions, contains the skip reason, otherwise empty string
- name: use if
  if: exit -1
  exports:
    skip: REASON
- name: tell last
  # $REASON value is the string "if"
  script: echo $REASON
Parsing Custom Variables from Output
CI will parse lines in stdout matching ##[set-output key=value] format and automatically add them to the result object.
If variable values contain newlines \n, you can encode them with base64 or escape.
If a variable value starts with the prefix base64,, Cloud Native Build will base64-decode the content after that prefix. Otherwise it will unescape the variable value.
Example Node.js code:
// test.js
const value = 'Test string\ntest string';
// Output base64 encoded variable
console.log(`##[set-output redline_msg_base64=base64,${Buffer.from(value, 'utf-8').toString('base64')}]`);
// Output escape encoded variable
console.log(`##[set-output redline_msg_escape=${escape(value)}]`);
main:
  push:
    - docker:
        image: node:20-alpine
      stages:
        - name: set output env
          script: node test.js
          # Export variables from test.js as environment variables
          exports:
            redline_msg_base64: BASE64_KEY
            redline_msg_escape: ESCAPE_KEY
        - name: echo env
          script:
            - echo "BASE64_KEY $BASE64_KEY"
            - echo "ESCAPE_KEY $ESCAPE_KEY"
Example using echo:
main:
  push:
    - stages:
        - name: set output env
          script: echo "##[set-output redline_msg_base64=base64,$(echo -e "Test string\ntest string" | base64 -w 0)]"
          exports:
            redline_msg_base64: BASE64_KEY
        - name: echo env
          script:
            - echo -e "BASE64_KEY $BASE64_KEY"
Note: On Unix-like systems, the base64 command adds newlines every 76 characters by default. Use -w 0 to disable line wrapping to ensure CI can parse variables correctly.
For values without \n, you can output directly:
echo "##[set-output redline_msg=some value]"Tips
Due to system environment variable length limits, excessively large variable values are invalid.
CI will ignore variable values >= 100KB. For large values, write to files and parse them yourself.
For sensitive information, consider using the read-file built-in task.
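As a rough sketch (assuming tasks in the same pipeline share the working directory; the file name is illustrative), a large value can be passed through a file instead of an environment variable:
main:
  push:
    - stages:
        - name: produce large value
          # Write the large output to a file in the workspace instead of exporting it
          script: head -c 200k /dev/urandom | base64 -w 0 > large_value.txt
        - name: consume large value
          # Read and parse the file yourself in a later task
          script: wc -c large_value.txt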
Exporting Environment Variables from Built-in Tasks
Some built-in tasks have output results that can be exported as environment variables via exports.
main:
  push:
    - stages:
        - name: xxxx
          type: xxx:xxx
          options:
            product: public
            name: cnb
            dist: release/
          exports:
            version: CUSTOM_ENV_VERSION
            url: CUSTOM_ENV_URL
            # Supports deep object property access
            nextRelease.gitTag: CUSTOM_ENV_GIT_TAG
        - name: echo env
          script:
            - echo $CUSTOM_ENV_VERSION
            - echo $CUSTOM_ENV_URL
Refer to each built-in task's documentation for result content.
Managing Environment Variables
You can override existing environment variables. Setting to empty string or null removes them.
main:
  push:
    - env:
        CUSTOM_ENV_DATE_INFO: default
        CUSTOM_ENV_FOR_DELETE: default
      stages:
        - name: set env
          script: echo -n $(date "+%Y-%m-%d %H:%M")
          exports:
            # Add new
            code: CUSTOM_ENV_DATE_CODE
            # Modify
            info: CUSTOM_ENV_DATE_INFO
            # Delete
            CUSTOM_ENV_FOR_DELETE: null
            # Alternative delete syntax
            # CUSTOM_ENV_FOR_DELETE:
        - name: echo env
          script:
            - echo $CUSTOM_ENV_DATE_CODE
            - echo $CUSTOM_ENV_DATE_INFO
            - echo $CUSTOM_ENV_DATE_STDOUT
            - echo $CUSTOM_ENV_FOR_DELETE
            - echo $CUSTOM_ENV_GIT_TAG
Using Environment Variables
In Script Tasks
When executing script tasks, pipeline environment variables are available as task execution environment variables
main:
  push:
    - stages:
        - name: test internal env
          # CNB_BRANCH is a built-in environment variable
          script: echo $CNB_BRANCH
        - name: test self defined env
          env:
            cat_name: tomcat
          script: echo $cat_name
Variable Substitution
Some property values in configuration files undergo variable substitution.
If there's an environment variable env_name=env_value, then $env_name in property values will be replaced with env_value. If env_name has no value, it is replaced with an empty string.
The following properties support variable substitution:
- Built-in tasks
Properties in built-in task options and optionsFrom undergo substitution.
# options.yml
name: Nightly
main:
  push:
    - env:
        address: options.yml
        description: publish for xx task
      stages:
        - name: git release
          type: git:release
          # $address is replaced with "options.yml" from env
          optionsFrom: $address
          # name from options.yml is merged into options
          options:
            # $description is replaced with "publish for xx task" from env
            description: $description
Final options content:
name: Nightly
description: publish for xx task
- Plugin tasks
Properties in plugin task settings and settingsFrom undergo substitution.
# settings.yml
robot: https://qyapi.weixin.qq.com/cgi-bin/webhook/send?key=xxx
main:
  push:
    - env:
        address: settings.yml
        message: pr check
      stages:
        - name: notify
          image: tencentcom/wecom-message
          # $address is replaced with "settings.yml" from env
          settingsFrom: $address
          # robot from settings.yml is merged into settings
          settings:
            # $message is replaced with "pr check" from env
            content: $message
Final settings content:
robot: https://qyapi.weixin.qq.com/cgi-bin/webhook/send?key=xxx
content: pr check
Additionally, settingsFrom specified in a Dockerfile LABEL also supports substitution
FROM node:20
LABEL cnb.cool/settings-from="$address"
- env
Property values under env can reference variables from parent env for substitution
main:
  push:
    - env:
        cat_name: tomcat
      stages:
        - name: echo env
          env:
            # Use cat_name from parent env
            name: "cat $cat_name"
          # Outputs "cat tomcat"
          script: echo $name
- imports
Property values in imports and in imported files support substitution.
If imports is an array, variables declared in earlier files affect later array elements.
# env1.yml
address: env2.yml
platform: amd64
# env2.yml
# Reads platform from env1.yml for substitution
action: build for $platform
main:
  push:
    - imports:
        - env1.yml
        # env1.yml declares address, $address is replaced with env2.yml
        - $address
      stages:
        - name: echo action
          # Reads action from env2.yml
          script: echo $action
- pipeline.runner.tags
# Build images for different architectures
.build: &build
  runner:
    tags: cnb:arch:$CNB_PIPELINE_NAME
  services:
    - docker
  stages:
    - name: docker build
      script: echo "docker build for $CNB_PIPELINE_NAME"
main:
  push:
    # "amd64" and "arm64:v8" are set as CNB_PIPELINE_NAME values
    amd64: *build
    "arm64:v8": *build
- volumes, image, and build under pipeline.docker
.docker-volume: &docker-volume
  docker:
    # Choose either image or build
    image: $image
    build: $build
    volumes:
      - $volume_path
main:
  push:
    install:
      env:
        volume_path: node_modules
        # If image is empty, a build will be used to create an image
        # image: node:22-alpine
        build: Dockerfile
      <<: *docker-volume
      stages:
        - name: install
          script: npm install axios
        # Notify other pipelines to execute
        - name: resolve
          type: cnb:resolve
          options:
            key: install
    build:
      env:
        volume_path: node_modules
        image: node:22-alpine
        # If build is empty, image will be used
        # build: Dockerfile
      <<: *docker-volume
      stages:
        # Wait for the install pipeline
        - name: await
          type: cnb:await
          options:
            key: install
        - name: ls
          script: ls node_modules
- stage.image and job.image
Refer to the configuration example of the built-in task docker:cache in Internal Steps README.
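As a rough illustration (a minimal sketch reusing the plugin task from the earlier example; the settings values are placeholders), job.image can take its value from env:
main:
  push:
    - env:
        notify_image: tencentcom/wecom-message
      stages:
        - name: notify
          # $notify_image is replaced with "tencentcom/wecom-message" from env
          image: $notify_image
          settings:
            robot: https://qyapi.weixin.qq.com/cgi-bin/webhook/send?key=xxx
            content: build finished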
- ifModify
# Only build modules with code changes
.build: &build
  ifModify: $CNB_PIPELINE_NAME/*
  stages:
    - name: build $CNB_PIPELINE_NAME
      script: echo "build $CNB_PIPELINE_NAME"
main:
  push:
    module-a: *build
    module-b: *build
- name
pipeline.name, stage.name and job.name support substitution.
main:
  push:
    - name: build in $CNB_REPO_SLUG
      env:
        platform: amd64
      imports:
        - env1.yml
        - env2.yml
      stages:
        - name: stage_$SOME_ENV
          script: echo "hello world"
- lock.key
# env.yml
build_key: build key
.build: &build
  imports: env.yml
  lock:
    key: $build_key
  stages:
    - name: echo
      script: echo "hello world"
main:
  push:
    # Of these two pipelines, one will acquire the lock and execute, the other will fail
    - *build
    - *build
- allowFailure
main:
  push:
    - env:
        allow_fail: true
      stages:
        - name: echo
          allowFailure: $allow_fail
          # Script will error but allowFailure is true, so task is considered successful
          script: echo1 1
Preventing Variable Substitution
To prevent $env_name from being substituted, escape it with \$
main:
  push:
    - stages:
        - name: git release
          type: git:release
          options:
            name: Development
            # Property value is "some code update $description"
            description: some code update \$description
Limitations
Environment variable names must consist of letters, numbers or _, and cannot start with a number.
Variable values cannot exceed 100KiB in length.