# Job Introduction

A Job is the basic unit of task execution. Jobs fall into three categories: built-in tasks, script tasks, and plugin tasks.

# Built-in Tasks

  • type:

    • type: String

    Specifies the built-in task to be executed in this step.

  • options:

    • type: Object

    Specifies the corresponding parameters for the built-in task.

  • optionsFrom:

    • type: Array<String> | String

    Specifies the local or Git repository file path to be loaded as parameters for the built-in task. Similar to the imports parameter, when configuring optionsFrom as an array, if there are duplicate parameters, the later configurations will override the earlier ones.

    The fields with the same name in options take precedence over optionsFrom.

    Refer to Authorization for Config File References for permission control when referencing configuration files.

Example:

```yaml
name: install
type: INTERNAL_JOB_NAME
optionsFrom: ./options.json
options:
  key1: value1
  key2: value2
```

Contents of ./options.json:

```json
{
  "key1": "value1",
  "key2": "value2"
}
```

# Script Tasks

A minimal script task:

```yaml
- name: install
  script: npm install
```
  • script:

    • type: Array<String> | String

    Specifies the shell script to be executed in this step. Arrays are concatenated using && by default.

    If you want the script to have its own execution environment instead of executing in the pipeline environment, you can specify the execution environment using the image property.

  • image:

    • type: String

    Specifies the execution environment for the script.

Example:

```yaml
- name: install
  image: node:20
  script: npm install
```
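Since array entries are joined with &&, the following sketch (the build script name is hypothetical) runs two commands in one step:

```yaml
- name: install and build
  script:           # executed as: npm install && npm run build
    - npm install
    - npm run build
```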

# Plugin Tasks

Plugins are Docker images, which can also be referred to as image tasks.

Unlike the previous two types of tasks, Plugin Tasks have the advantage of having more flexible execution environments. They are also easier to share within teams, companies, and even across CI systems.

Plugin Tasks hide their internal implementation details by receiving parameters as environment variables passed to the ENTRYPOINT.

Note: Custom environment variables set through parameters such as imports, env, etc., will not be passed to the plugin, but can be used in variable replacements in settings, args. CNB system environment variables will still be passed to the plugin.
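As a sketch (REGISTRY is a hypothetical variable name): a custom env value is not injected into the plugin container, but it can still be substituted into settings:

```yaml
- name: npm publish
  image: plugins/npm
  env:
    REGISTRY: https://mirrors.xxx.com/npm/  # not passed into the plugin container
  settings:
    registry: $REGISTRY                     # but available for substitution here
```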

  • name:

    • type: String

    Specifies the name of the Job.

  • image:

    • type: String

    The full path of the image.

  • settings:

    • type: Object

    Specifies the parameters for the image task. Fill in according to the documentation provided by the image provider. Environment variables can be accessed using $VAR or ${VAR}.

  • settingsFrom:

    • type: Array<String> | String

    Specifies the local or Git repository file path to be loaded as parameters for the image task.

    Priority:

    • If there are duplicate parameters, the later configurations will override the earlier ones.
    • Fields with the same name in settings take precedence over settingsFrom.

Refer to Authorization for Config File References for permission control when referencing configuration files.

Example:

Limit both the images and the repositories (slugs) allowed to use the settings file:

```yaml
allow_slugs:
  - a/b
allow_images:
  - a/b
```

Limit only the images, not the slugs:

```yaml
allow_images:
  - a/b
```

settingsFrom can also be declared in the plugin image's Dockerfile:

```dockerfile
FROM node:20

LABEL cnb.cool/settings-from="https://xxx/settings.json"
```

# Example

With imports:

```yaml
- name: npm publish
  image: plugins/npm
  imports: https://xxx/npm.json
  settings:
    username: $NPM_USER
    password: $NPM_PASS
    email: $NPM_EMAIL
    registry: https://mirrors.xxx.com/npm/
    folder: ./
```

Contents of https://xxx/npm.json:

```json
{
  "username": "xxx",
  "password": "xxx",
  "email": "xxx@emai.com",
  "allow_slugs": ["cnb/**/**"],
  "allow_images": ["plugins/npm"]
}
```

With settingsFrom:

```yaml
- name: npm publish
  image: plugins/npm
  settingsFrom: https://xxx/npm-settings.json
  settings:
    # username: $NPM_USER
    # password: $NPM_PASS
    # email: $NPM_EMAIL
    registry: https://mirrors.xxx.com/npm/
    folder: ./
```

Contents of https://xxx/npm-settings.json:

```json
{
  "username": "xxx",
  "password": "xxx",
  "email": "xxx@emai.com",
  "allow_slugs": ["cnb/cnb"],
  "allow_images": ["plugins/npm"]
}
```

# name

  • type: String

Specifies the name of the Job.

# ifModify

  • type: Array<String> | String

Same as Stage ifModify. Only applies to the current Job.
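For instance (the paths and script name are hypothetical), to run a Job only when files under docs/ change:

```yaml
- name: build docs
  ifModify:
    - docs/**
  script: npm run build:docs
```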

# ifNewBranch

  • type: Boolean
  • default: false

Same as Stage ifNewBranch. Only applies to the current Job.

# if

  • type: Array<String> | String

Same as Stage if. Only applies to the current Job.

# breakIfModify

  • type: Boolean
  • default: false

Same as Pipeline breakIfModify. The difference is that it only applies to the current Job.

# skipIfModify

  • type: Boolean
  • default: false

If the source branch has been updated before the execution of the Job, the current Job will be skipped.

# env

  • type: Object

Same as Stage env. Only applies to the current Job.

The priority of Job env is higher than Pipeline env and Stage env.
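The priority rule can be sketched as follows (TARGET is a hypothetical variable; the Job-level value wins):

```yaml
main:
  push:
    - env:
        TARGET: from-pipeline
      stages:
        - name: echo target
          env:
            TARGET: from-job   # Job env overrides Pipeline and Stage env
          script: echo $TARGET # expected to print "from-job"
```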

# imports

  • type: Array<String> | String

Same as Stage imports. Only applies to the current Job.

# exports

  • type: Object

Sets environment variables that are valid for the current Pipeline. See Modifying Environment Variables for details.

Note: After each task execution, there is a result object, which can be exported to environment variables through exports.

For example, to export the system date to an environment variable:

```yaml
name: export env
script: date
exports:
  info: CURR_DATE
```

By default, after a task is executed, the following variables can be referenced:

```js
{
  code,   // task exit code
  info,   // stdout and stderr interleaved in chronological order
  stdout, // contents of the standard output stream
  stderr  // contents of the standard error stream
}
```

How do you add custom variables to the result object — for example, passing a custom variable from a script into an environment variable so that it affects subsequent build tasks?

There are two ways:

  • Using set-output command to output custom variables to the result object:

CI recognizes content in the format ##[set-output key=value] on the standard output stream and automatically puts it into the result object; you can then export it as an environment variable using exports. If the variable value contains newline characters (\n), encode the value with base64 or escape.

If the variable value starts with `base64,`, Cloud Native Build decodes the content after the `base64,` prefix as Base64. Otherwise, it decodes the variable value with unescape.

If you are using Node.js, the corresponding example code is as follows:

```js
// test.js
const value = '测试字符串\ntest string';
// Output the Base64-encoded variable value
console.log(`##[set-output redline_msg_base64=base64,${Buffer.from(value, 'utf-8').toString('base64')}]`);

// Output the escape-encoded variable value
console.log(`##[set-output redline_msg_escape=${escape(value)}]`);
```

```yaml
main:
  push:
    - docker:
        image: node:20-alpine
      stages:
        - name: set output env
          script: node test.js
          # Export the variables output by test.js as environment variables
          exports:
            redline_msg_base64: BASE64_KEY
            redline_msg_escape: ESCAPE_KEY
        - name: echo env
          script:
            - echo "BASE64_KEY $BASE64_KEY"
            - echo "ESCAPE_KEY $ESCAPE_KEY"
```

Using echo directly in the shell:

```yaml
main:
  push:
    - stages:
        - name: set output env
          script: echo "##[set-output redline_msg_base64=base64,$(echo -n "测试字符串\ntest string" | base64)]"
          exports:
            redline_msg_base64: BASE64_KEY
        - name: echo env
          script:
            - echo -e "BASE64_KEY $BASE64_KEY"
```

For variable values that do not contain \n, you can output them directly:

```shell
echo "##[set-output redline_msg=some value]"
```

TIP

Note that there is a limit on the length of system environment variable values: CI ignores variable values of 100KB or more.

In such cases, you can write the value to a file and parse it manually.

For sensitive information, it is recommended to use the read-file built-in task.
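The file-based workaround can be sketched like this (file and script names are hypothetical):

```yaml
- name: produce large value
  script:
    - ./generate-report.sh > report.txt  # too large to export as an env var
- name: consume large value
  script:
    - cat report.txt
```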

  • Result of built-in tasks

You can see the description of the output for each built-in task in the documentation.

# timeout

  • type: Number | String

Sets the timeout for a single task. The default timeout is 1 hour, and the maximum timeout is 12 hours.

It is applicable to script-job and image-job.

The following units are supported:

  • ms: milliseconds (default)
  • s: seconds
  • m: minutes
  • h: hours
```yaml
name: timeout job
script: sleep 1d
timeout: 100s # the task times out and exits after 100 seconds
```

See Timeout Strategy for more details.

# allowFailure

  • type: Boolean | String
  • default: false

If set to true, a failure of this step will not affect the execution of the subsequent workflow or the final result.

When the value is of type String, it can read environment variables.
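allowFailure can be a literal Boolean or read from an environment variable (the variable name here is hypothetical):

```yaml
- name: optional lint
  allowFailure: true
  script: npm run lint

- name: optional audit
  allowFailure: $ALLOW_AUDIT_FAILURE  # resolved from an environment variable
  script: npm audit
```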

# lock

  • type: Object | Boolean

Sets a lock for the Job, and the lock will be automatically released after the Job is executed. The lock cannot be used across repositories.

Behavior: if Task A holds the lock and Task B requests it, Task B waits until the lock is released, then acquires it and continues execution.

  • lock.key

    • type: String

Custom lock name. The default is branchName-pipelineName-stageIndex-jobName.

  • lock.expires

    • type: Number
    • default: 3600 (1 hour)

Lock expiration time in seconds. The lock will be automatically released after expiration.

  • lock.wait

    • type: Boolean
    • default: false

Whether to wait if the lock is occupied.

  • lock.timeout

    • type: Number
    • default: 3600 (1 hour)

Specifies the timeout for waiting for the lock in seconds.

Example 1: lock as a Boolean

```yaml
name: lock
lock: true
script: echo 'job lock'
```

Example 2: lock as an Object

```yaml
name: lock
lock:
  key: key
  expires: 10
  wait: true
script: echo 'job lock'
```

# retry

  • type: Number
  • default: 0

Number of retry attempts for failure. 0 means no retry.
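For example (the URL is a placeholder), a flaky step can be retried up to two more times after a failure:

```yaml
- name: flaky download
  retry: 2  # up to 2 retry attempts after the first failure
  script: curl -fsSL https://xxx/artifact.tgz -o artifact.tgz
```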

# type

  • type: String

Specifies the built-in task to be executed in this step.

# options

  • type: Object

Specifies the parameters for the built-in task.

# optionsFrom

  • type: Array<String> | String

Specifies the local or Git repository file path to be loaded as parameters for the built-in task. Similar to the imports parameter, when configuring optionsFrom as an array, if there are duplicate parameters, the later configuration will override the earlier one.

Fields with the same name in options take precedence over those in optionsFrom.

# script

  • type: Array<String> | String

Specifies the script to be executed by the task. When it is an array, it will be automatically concatenated with &&. The exit code of the script execution process will be used as the exit code of this Job.

# commands

  • type: Array<String> | String

Same as the script parameter, but with higher priority than script. Mainly used for compatibility with Drone CI syntax.
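When both are present, commands is used and script is ignored — a sketch:

```yaml
- name: install
  commands:            # takes effect, since commands has higher priority
    - npm ci
  script: npm install  # ignored when commands is set
```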

# image

  • type: Object | String

Specifies the image to use as the Job's execution environment, either as the environment for a script task or as a plugin (image task).

  • image.name:

    • type: String

    The image name, such as node:20.

  • image.dockerUser:

    • type: String

    The Docker username used to pull the specified image.

  • image.dockerPassword:

    • type: String

    The Docker password used to pull the specified image.

If image is specified as a string, it is equivalent to specifying image.name.
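A sketch with a hypothetical private registry, supplying pull credentials through environment variables:

```yaml
- name: build
  image:
    name: registry.example.com/team/builder:1.0
    dockerUser: $DOCKER_USER
    dockerPassword: $DOCKER_PASSWORD
  script: make build
```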

# settings

  • type: Object

Specifies the parameters required for executing the image task. See Plugin Task Introduction for details.

# settingsFrom

  • type: Array<String> | String

Specifies the local or Git repository file path to be loaded as parameters for the image task. Similar to the imports parameter, when configuring settingsFrom as an array, if there are duplicate parameters, the later configuration will override the earlier one.

See Plugin Task Introduction for details.

# args

  • type: Array<String>

Specifies the parameters passed when executing the image. The content will be appended to ENTRYPOINT, and only supports arrays.

```yaml
- name: npm publish
  image: plugins/npm
  args:
    - ls
```

This will execute:

```shell
docker run plugins/npm ls
```

# Task Exit Codes

  • 0: Task succeeded, continue execution.
  • 78: Task succeeded, but interrupts the current Pipeline execution. You can manually execute exit 78 in a custom script to interrupt the pipeline.
  • other: any other non-zero exit code means the task failed; it interrupts the current Pipeline execution.