FAQ
Before submitting feedback, please check the relevant documentation first and see whether your question is already covered in the FAQs below or in existing feedback.
If you cannot find an answer, please submit an issue via feedback.
Pipeline
Why is the pipeline not triggered?
To diagnose why a pipeline is not triggered, you first need to understand the process from pipeline triggering to execution.
Taking the push event as an example:
The skip detection includes `ifNewBranch` and `ifModify`. Please refer to syntax.
You can then trace through this process step by step:
- Is the code pushed to the remote repository?
- Does the corresponding branch have `.cnb.yml`?
- Do the included files have permission?
- Are there any format or syntax issues in the configuration content?
- Does `.cnb.yml` declare the `push` event pipeline for the current branch?
- Did you hit the skip detection?
The default branch in CNB is usually `main`. Some repositories migrated from other platforms may use `master` as the default branch. Please note the difference.
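For reference, here is a minimal `.cnb.yml` sketch that declares a `push` pipeline for the `main` branch (the stage name and echo script are placeholders):

```yaml
main:            # branch the pipeline applies to
  push:          # event that triggers the pipeline
    - stages:
        - name: hello
          script: echo "triggered by a push to main"
```

If your push targets a different branch, the branch key at the top level must match it, otherwise no pipeline is triggered.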
Works locally but fails in CI?
To locate this issue, you first need to understand the differences between the local environment and the CI environment:
| | Local | CI Environment |
|---|---|---|
| Network | Local network (e.g., office intranet) | CI machine's network |
| Files | All files in the local directory | Code of the corresponding branch in the GIT repository |
| Environment | Native | Specified Docker container environment |
| Shell | Local shell | sh |
With these differences in mind, we can troubleshoot in sequence:
- Are there any uncommitted files?
- Are files required for the build excluded by .gitignore?
- Are you dependent on local-only resources (such as local APIs, credentials, etc.)?
- Does the cpus declared in CI meet the requirements?
- Run the same image locally to get the same build environment as CI, then debug.
The default image for Cloud Native Build is cnbcool/default-build-env.
You can execute the following command to enter the default CI environment for debugging:
```shell
docker run --rm -it -v $(pwd):$(pwd) -w $(pwd) cnbcool/default-build-env sh
```

If you have declared another image as the pipeline build environment, replace cnbcool/default-build-env in the command above with the corresponding image address.
How to log in to the pipeline container for debugging?
Please refer to Login Debug
Pipeline execution script results differ from login debug execution script results?
The default shell in the pipeline is sh, while login debugging uses bash.
If you confirm that the specified pipeline container supports bash (the default pipeline container does, but custom containers may not), you can change the execution script to bash xxx.sh or bash -c '{original statement}'.
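To illustrate the difference, here is a minimal sketch: the `[[ ]]` test below is bash-only syntax that plain sh rejects, so wrapping the original statement in `bash -c` makes it behave the same as in login debugging (assuming the container has bash installed):

```shell
# [[ ]] pattern matching is a bash feature; plain sh errors on it.
# Wrapping the statement in bash -c runs it under bash explicitly:
bash -c '[[ "hello" == h* ]] && echo "matched under bash"'
```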
Pipeline times out with no output?
If a job produces no log output for more than 10 minutes, it will be terminated. Consider emitting more logs; for example, add the verbose flag to npm install.
Note that this limit is different from the timeout declared in the job's timeout setting and cannot be modified through configuration.
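As a sketch, a stage that keeps logs flowing during a slow install (the stage name is a placeholder):

```yaml
stages:
  - name: install dependencies
    # --verbose makes npm print continuous progress, so the job is not
    # killed by the 10-minute no-output limit during a long install
    script: npm install --verbose
```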
Why does the pipeline fail when I haven't changed any code?
You can check if dependent resources have changed, for example:
- Does a plugin task declare its image version as `latest`, and has that image changed?
- Does the CI configuration file reference files from other repositories, and have the referenced files changed?
- It could be network fluctuation. Try rebuilding.
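To rule out the first case, you can pin plugin images to a fixed tag instead of `latest`. A sketch (the image name and tag are placeholders):

```yaml
stages:
  - name: some plugin task
    # Pinning a specific tag instead of latest makes the build reproducible,
    # so an upstream image update cannot silently break the pipeline
    image: example/plugin:1.2.3
```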
How to view the complete pipeline logs?
The CI service sends plugin tasks and script tasks to the Runner for execution. If the task logs are too long, they will be truncated and returned to the CI service.
A stage will summarize the logs of all tasks under it. If the stage logs are too long, they will also be truncated for better display on the log page.
After the build is complete, you can click the log download button in the top right corner of the pipeline on the log page to view the complete logs.
Why doesn't CNB provide a fixed exit IP?
CNB does not provide a fixed exit IP address. The exit IP address you use when accessing external services in the Cloud Native Build or Cloud Native Development service may change over time.
This is because:
- Using dynamic IP addresses reduces the risk of targeted attacks. When an exit IP is attacked or congested, it can be switched immediately, improving security and service quality.
- We do not recommend applying any whitelist logic to CNB's exit IP because CNB is an open platform that could be exploited by malicious attackers. Applying whitelist logic to CNB's exit IP is equivalent to whitelisting the entire public internet.
For cases where you genuinely need to whitelist a specific IP address range (e.g., to connect to internal services), you can consider the following methods:
- Use a private proxy: Configure a private proxy in your workflow to route your requests through it. The proxy can have a fixed IP address you specify.
- Use a jump server: Use a jump server to connect to the target machine and execute commands via ssh. You can use the cnbcool/ssh plugin to achieve this. Reference example
- For Tencent Cloud machines, it is recommended to use the tencentcom/tcloud-cmd plugin to implement remote command execution on CVM and Lighthouse servers. Using this plugin does not require whitelist configuration.
How to build multi-architecture/multi-platform images?
If you need to use buildx to build images for multiple architectures/platforms, you can enable the rootlessBuildkitd feature. Declare it in the `services` section:
```yaml
main:
  push:
    - docker:
        image: golang:1.24
      services:
        - name: docker
          options:
            rootlessBuildkitd:
              enabled: true
      env:
        IMAGE_TAG: ${CNB_DOCKER_REGISTRY}/${CNB_REPO_SLUG_LOWERCASE}:latest
      stages:
        - name: go build
          script: ./build.sh
        - name: docker login
          script: docker login -u ${CNB_TOKEN_USER_NAME} -p "${CNB_TOKEN}" ${CNB_DOCKER_REGISTRY}
        - name: docker build and push
          script: docker buildx build -t ${IMAGE_TAG} --platform linux/amd64,linux/amd64/v2,linux/amd64/v3,linux/arm64,linux/riscv64,linux/ppc64le,linux/s390x,linux/386,linux/mips64le,linux/mips64,linux/loong64,linux/arm/v7,linux/arm/v6 --push .
```

Why is the repository storage usage in the pipeline inconsistent with what's shown on the page?
Generally, the repository size seen in the pipeline should be close to what is displayed on the page.
However, for public forked repositories, to speed up builds and optimize storage space, all forks on the build nodes reuse the .git directory of their ancestor repository, which is then split into per-repository copies using OverlayFS. Therefore, the repository size seen in the pipeline actually includes the size of the ancestor repository and all descendant repositories.
Additionally, for regular repositories, after triggering repository GC on the page, the repository size in the pipeline will not change immediately. You need to wait for the cache gc logic of the build machine itself to run before it matches what is displayed on the page.
Code Repository
How to resolve code merge conflicts?
If you encounter code conflict issues when creating a PR or in builds related to PR events, you can resolve them with the following commands:
```shell
git fetch origin              # Get updates from the remote repository
git rebase -i origin/main     # Replace main with the actual target branch name
# Resolve conflicts locally
git commit                    # Commit to the local repository as appropriate
git push -f                   # Force-push to the remote repository
```

How to completely delete files from a GIT repository and free up space?
Since GIT repositories support recovery of any historical commits, the .git directory stores all files that were ever committed. Simply deleting files from the working directory does not free up space.
Therefore, to completely delete files from a GIT repository and free up space, we recommend the following two methods:
Method 1 (simplest): Create a new repository on the remote, copy the files you need to it, then rename it to the original repository name.
Method 2 (more complex): Use the git filter-branch command or other third-party tools (such as BFG Repo-Cleaner). After completely deleting the corresponding files locally, force push to the remote, and then trigger the remote repository GC process.
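As a sketch of Method 2 using `git filter-branch` (the path `secret.bin` is a placeholder for the file to purge; run the commands inside the repository):

```shell
# Rewrite all branches, dropping secret.bin from every historical commit
git filter-branch --force --index-filter \
  'git rm --cached --ignore-unmatch secret.bin' \
  --prune-empty -- --all
# Expire reflogs and repack so the purged objects can actually be reclaimed
git reflog expire --expire=now --all
git gc --prune=now --aggressive
# Then force-push the rewritten history and trigger the remote repository GC:
# git push --force --all
```

BFG Repo-Cleaner achieves the same result faster for large repositories; the filter-branch form above is shown because it ships with git itself.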