juni build fails when executed as part of a github action #53
Description
Hi everyone!
We've been trying to run juni build as part of our CI pipeline on GitHub Actions.
The action itself can be found here: https://github.com/YouPrice/juni-build-github-action
The integration with our pipeline is as follows (the file has been redacted for brevity):
name: ...
on:
  push:
    branches: [main]

jobs:
  generate-pkg:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: YouPrice/juni-build-github-action@main
      - name: stage changed files
        run: |
          git config --local user.email "action@github.com"
          git config --local user.name "GitHub Action"
          git add ./dist/router.zip
      - name: commit changed files
        run: git commit -m "Auto updating lambda package"
      - name: fetch from main
        run: git fetch origin main
      - name: push code to main
        run: git push origin main
As you can see, we want to run juni build from inside the CI, and then commit & push the generated artifact.
However, the command fails at the following point (GitHub Actions log):
Run YouPrice/juni-build-github-action@main
[...]
Removing network workspace_default
Network workspace_default not found.
Creating network "workspace_default" with the default driver
Pulling router-lambda (lambci/lambda:build-python3.6)...
build-python3.6: Pulling from lambci/lambda
Creating workspace_router-lambda_1 ...
Digest: sha256:9b1cea555bfed62d1fc9e9130efa9842ee144ef02e2a6a266f1c9e6adeb0866f
Status: Downloaded newer image for lambci/lambda:build-python3.6
Creating workspace_router-lambda_1 ... done
Attaching to workspace_router-lambda_1
router-lambda_1 | sh: /var/task/bin/package.sh: No such file or directory
workspace_router-lambda_1 exited with code 127
From what we can see, juniper manages to build the "inner" container just fine, but the container then fails to find the package.sh script that juniper creates inside the temporary .juni directory. Running ls -a inside the action container a few seconds after juni build shows that the .juni folder is indeed there.
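Our working theory (and it is only a guess) is a Docker-in-Docker path issue: the action runs inside its own container, while juniper's docker-compose call talks to the host Docker daemon through the mounted socket, so the relative bind mount is resolved against the host filesystem, where the freshly created .juni directory does not exist. Below is a rough sketch of the kind of compose service we imagine juni build spins up; the service name, volume mapping and command are our own reconstruction, not taken from juniper's source:

version: "3"
services:
  router-lambda:
    image: lambci/lambda:build-python3.6
    volumes:
      # guessed mapping: the scripts juniper writes into ./.juni would be exposed
      # under /var/task/bin inside the build container
      - ./.juni:/var/task/bin
    # if ./.juni only exists inside the action container and not on the host,
    # the mount comes up empty and this command exits with code 127
    command: sh /var/task/bin/package.sh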
How can we ensure that package.sh is mounted correctly and is accessible from within the innermost Docker container?
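One workaround we are considering is to drop the containerized action for this step and run juniper directly on the runner, along the lines of the step below. This is untested on our side and assumes the tool is installable from PyPI under the name juniper:

      - name: build lambda package on the runner
        run: |
          python3 -m pip install juniper   # package name assumed, not verified
          juni build

That said, we would prefer to keep using the shared action if there is a way to get the mount right.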
Apologies if this is a tricky use-case, and thanks in advance! :)