Nx Run Commands - Docker

I'm currently trying to configure my workspace.json file to include the commands to build a Docker image and push it. These commands work when run through the terminal (bash or Windows), but I would like to set them up in the CLI so I can pass in the appName.
Project Structure
---/
Dockerfile.api
workspace.json
....
Configuration
{
  "version": 1,
  "projects": {
    "api": {
      "heroku-deploy": {
        "builder": "@nrwl/workspace:run-commands",
        "options": {
          "commands": [
            {
              "command": "docker build -t registry.heroku.com/{args.appName}/web -f ./Dockerfile.api ."
            },
            {
              "command": "docker push registry.heroku.com/{args.appName}/web"
            },
            {
              "command": "heroku container:release web -a {args.appName}"
            }
          ]
        }
      }
    }
  }
}
Running the command
nx run api:heroku-deploy2 --args="--appName=nx-api-leopard" --verbose
Error
#1 [internal] load build definition from Dockerfile.web
#1 sha256:8f9045a5ed51569cdcff9c6e9f9052e7e724435b6f2eeec087ea2770af2a3b0d
#1 transferring dockerfile: 2B done
#1 DONE 0.0s
failed to solve with frontend dockerfile.v0: failed to read dockerfile: open /var/lib/docker/tmp/buildkit-mount178807147/Dockerfile.web: no such file or directory
Warning: @nrwl/run-commands command "docker build -t registry.heroku.com/my-nx-app/web -f ./Dockerfile.web ." exited with non-zero status code
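For reference, two mismatches stand out: the config defines a target named heroku-deploy while the command runs heroku-deploy2 (whose options evidently reference Dockerfile.web, matching the error), and in a version 1 workspace.json, run-commands targets normally sit under an architect key for each project. A minimal sketch of the expected nesting, assuming workspace.json v1 (the "root" value is a made-up illustration):

```json
{
  "version": 1,
  "projects": {
    "api": {
      "root": "apps/api",
      "architect": {
        "heroku-deploy": {
          "builder": "@nrwl/workspace:run-commands",
          "options": {
            "commands": [
              { "command": "docker build -t registry.heroku.com/{args.appName}/web -f ./Dockerfile.api ." },
              { "command": "docker push registry.heroku.com/{args.appName}/web" },
              { "command": "heroku container:release web -a {args.appName}" }
            ]
          }
        }
      }
    }
  }
}
```

With this shape, nx run api:heroku-deploy --args="--appName=nx-api-leopard" should interpolate {args.appName} into each command.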
Resources
NX Run Commands

Related

VS-Code multistage build fails with "No such image: scratch"

When testing a minimal dev container with a multistage build, it fails when using "scratch".
My system:
Windows 11
Docker desktop 4.3.2
WSL Ubuntu 20.04
VS-Code v1.74.3
Docker extension v1.23.3
Dev Containers extension v0.266.1
In .devcontainer.json I just reference the Dockerfile:
{
  "build": {
    "dockerfile": "Dockerfile"
  }
}
Dockerfile:
FROM mcr.microsoft.com/vscode/devcontainers/base:bionic as base
FROM scratch as final
COPY --from=base / /
CMD [ "/bin/sh" ]
In VS Code -> F1 -> Reopen in container
Log output:
[1170076 ms] Start: Run: docker version --format {{.Server.APIVersion}}
[1170321 ms] 1.41
[1170410 ms] Start: Run: C:\Program Files\Microsoft VS Code\Code.exe --ms-enable-electron-run-as-node c:\Users\thorgrim\.vscode\extensions\ms-vscode-remote.remote-containers-0.266.1\dist\spec-node\devContainersSpecCLI.js read-configuration --workspace-folder c:\Users\thorgrim\Wago Norge Dropbox\Thorgrim Jansrud\AT_WORK\GitHub\dev-container-docker-desktop-howto --log-level debug --log-format json --config c:\Users\thorgrim\Wago Norge Dropbox\Thorgrim Jansrud\AT_WORK\GitHub\dev-container-docker-desktop-howto\.devcontainer\devcontainer.json --include-merged-configuration --mount-workspace-git-root true
[1170689 ms] (node:15500) [DEP0005] DeprecationWarning: Buffer() is deprecated due to security and usability issues. Please use the Buffer.alloc(), Buffer.allocUnsafe(), or Buffer.from() methods instead.
[1170689 ms] (Use `Code --trace-deprecation ...` to show where the warning was created)
[1170693 ms] #devcontainers/cli 0.25.2. Node.js v16.14.2. win32 10.0.22000 x64.
[1170693 ms] Start: Run: git rev-parse --show-cdup
[1170747 ms] Start: Run: docker ps -q -a --filter label=devcontainer.local_folder=c:\Users\thorgrim\Wago Norge Dropbox\Thorgrim Jansrud\AT_WORK\GitHub\dev-container-docker-desktop-howto
[1171060 ms] Start: Run: docker inspect --type image scratch
[1178824 ms] Error fetching image details: No manifest found for docker.io/library/scratch.
[1178824 ms] Start: Run: docker pull scratch
Using default tag: latest
Error response from daemon: 'scratch' is a reserved name
[1179391 ms] []
[1179391 ms] Error: No such image: scratch
[1179391 ms] Command failed: docker inspect --type image scratch
[1179402 ms] Exit code 1
What works as a workaround is:
building with the VS Code Docker extension -> right-click Dockerfile -> build image.
manually via CLI -> docker build .
I can't figure out why the Dev Containers extension does not build this image correctly when using "FROM scratch" as the second stage. Other images work (e.g. FROM base), but I know "scratch" is not a real image that can be pulled.
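Judging from the log, the extension runs docker inspect --type image scratch and then docker pull scratch before building, and both necessarily fail: scratch is a reserved pseudo-image that plain docker build handles internally, which is why the manual builds succeed. One hedged workaround in the same spirit: prebuild the image yourself in initializeCommand and point the devcontainer at the resulting tag, so the extension never has to resolve scratch itself (the tag name scratch-dev is made up; adjust the build context to wherever your Dockerfile lives):

```json
{
  "initializeCommand": "docker build -t scratch-dev .devcontainer",
  "image": "scratch-dev"
}
```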

Building devcontainer with --ssh key for GitHub repository in build process fails on VS Code for ARM Mac

We are trying to run a Python application using a devcontainer.json with VS Code.
The Dockerfile includes the installation of GitHub repositories with pip that require an ssh key. To build the images, we usually use the --ssh flag to pass the required key. We then use this key to run pip inside the Dockerfile as follows:
RUN --mount=type=ssh,id=ssh_key python3.9 -m pip install --no-cache-dir -r pip-requirements.txt
We now want to run a devcontainer.json inside VS Code. We have been trying many different ways.
1. Passing the --ssh key using the build arg variable:
Since you cannot directly pass the --ssh key, we tried a workaround:
"args": {"kek":"kek --platform=linux/amd64 --ssh ssh_key=/Users/user/.ssh/id_rsa"}
This produces an OK looking build command that works in a normal terminal, but inside VS Code the key is not being passed and the build fails. (Both on Windows & Mac)
2. Putting an initial build command into the initializeCommand parameter and then a simple build command that should use the cached results:
We run a first build inside the initializeCommand parameter:
"initializeCommand": "docker buildx build --platform=linux/amd64 --ssh ssh_key=/Users/user/.ssh/id_rsa ."
and then we have a second build in the regular parameter:
"build": {
"dockerfile": "../Dockerfile",
"context": "..",
"args": {"kek":"kek --platform=linux/amd64"}
}
This solution is a nice workaround and works flawlessly on Windows. On the ARM Mac, however, only the initializeCommand build stage runs well; the actual build fails, as it does not use the cached versions of the images. The crucial step where the --ssh key is used fails just as described before.
We have no idea why VS Code on the Mac ignores the already created images. In a regular terminal, again, the second build command generated by VS Code works flawlessly.
The problem is reproducible on different ARM Macs, and on different repositories.
Here is the entire devcontainer:
{
  "name": "Dockername",
  "build": {
    "dockerfile": "../Dockerfile",
    "context": "..",
    "args": {"kek": "kek --platform=linux/amd64"}
  },
  "initializeCommand": "docker buildx build --platform=linux/amd64 --ssh ssh_key=/Users/user/.ssh/id_rsa .",
  "runArgs": ["--env-file", "configuration.env", "-t"],
  "customizations": {
    "vscode": {
      "extensions": [
        "ms-python.python"
      ]
    }
  }
}
So, we finally found a workaround:
We add a target to the initialize command:
"initializeCommand": "docker buildx build --platform=linux/amd64 --ssh ssh_key=/Users/user/.ssh/id_rsa -t dev-image ."
We create a new Dockerfile, Dockerfile-devcontainer, that contains only one line:
FROM --platform=linux/amd64 docker.io/library/dev-image:latest
In the build section of the devcontainer we use that Dockerfile:
{
  "name": "Docker",
  "initializeCommand": "docker buildx build --platform=linux/amd64 --ssh ssh_key=/Users/user/.ssh/id_rsa -t dev-image:latest .",
  "build": {
    "dockerfile": "Dockerfile-devcontainer",
    "context": "..",
    "args": {"kek": "kek --platform=linux/amd64"}
  },
  "runArgs": ["--env-file", "configuration.env"],
  "customizations": {
    "vscode": {
      "extensions": [
        "ms-python.python"
      ]
    }
  }
}
In this way we can use the SSH key and the Docker image created in the initializeCommand (tested on macOS and Windows).

VS Code -- How to run `docker` in a task? -- Docker build task does not work

Situation and Problem
I am running macOS Mojave 10.14.5, upgraded bash as described here, and have a TeXlive Docker container (basically that one) that I want to call to typeset LaTeX files. This works very well, and execution via the following tasks.json also worked flawlessly until some recent update (which I cannot pin down, as I am not using this daily).
tasks.json
{
  // See https://go.microsoft.com/fwlink/?LinkId=733558
  // for the documentation about the tasks.json format
  "version": "2.0.0",
  "tasks": [
    {
      "type": "shell",
      "label": "runit",
      "group": {
        "kind": "build",
        "isDefault": true
      },
      "command": "docker",
      "args": [
        "run",
        "-v",
        "${fileDirname}:/doc/",
        "-t",
        "-i",
        "mytexlive",
        "pdflatex",
        "${fileBasename}"
      ],
      "problemMatcher": []
    },
    {
      "type": "shell",
      "label": "test",
      "command": "echo",
      "args": [
        "run",
        "-v",
        "${fileDirname}:/doc/",
        "-t",
        "-i",
        "mytexlive",
        "pdflatex",
        "${fileBasename}"
      ]
    }
  ]
}
Trying to run docker yields a "command not found":
> Executing task: docker run -v /path/to/file:/doc/ -t -i mytexlive pdflatex file_name.tex <
/usr/local/bin/bash: docker: command not found
The terminal process command '/usr/local/bin/bash -c 'docker run -v /path/to/file:/doc/ -t -i mytexlive pdflatex file_name.tex'' failed to launch (exit code: 127)
... while the echo task works just fine:
> Executing task: echo run -v /path/to/file:/doc/ -t -i mytexlive pdflatex file_name.tex <
run -v /path/to/file:/doc/ -t -i mytexlive pdflatex file_name.tex
Even though it once worked just as described above, and the very same command works in the terminal, it now fails when executed as a build task. Hence, my
Question
How do I use docker in a build task,
or fix the problem in the setup above?
Additional notes
Trying the following yielded the same "command not found"
{
"type": "shell", "label": "test",
"command": "which", "args": ["docker"]
}
... even though this works:
bash$ /usr/local/bin/bash -c 'which docker'
/usr/local/bin/docker
bash$ echo $PATH
/usr/local/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin
edit: One more note:
I am using a context entry to start vscode with an automator script that runs the following bash command with the element 'right-clicked' passed as the variable:
#!/bin/sh
/usr/local/bin/code -n "$1"
Since there hasn't been any progress here and I got help on GitHub, I will just answer myself so that others who land here searching for a solution won't be let down.
Please give all the acknowledgement to joaomoreno for his answer here
It turns out that when starting VS Code via a context entry, there is an issue with an environment variable. Starting it like this has fixed the problem thus far:
#!/bin/sh
VSCODE_FORCE_USER_ENV=1 /usr/local/bin/code -n "$1"
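If forcing the user environment is not an option, a cruder workaround is to bypass the PATH lookup entirely and call docker by the absolute path that `which docker` reported above (untested sketch; adjust the path for your machine):

```json
{
  "type": "shell",
  "label": "runit",
  "group": { "kind": "build", "isDefault": true },
  "command": "/usr/local/bin/docker",
  "args": ["run", "-v", "${fileDirname}:/doc/", "-t", "-i", "mytexlive", "pdflatex", "${fileBasename}"],
  "problemMatcher": []
}
```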

Ansible not executing main.yml

I am using Ansible local inside a Packer template to configure a Docker image. I have a role test that has a main.yml file that's supposed to output some information and create a directory to see that the script actually ran. However, the main.yml doesn't seem to get run.
Here is my playbook.yml:
---
- name: apply configuration
  hosts: all
  remote_user: root
  roles:
    - test
test/tasks/main.yml:
---
- name: Test output
  shell: echo 'testing output from test'

- name: Make test directory
  file: path=/test state=directory owner=root
When running this via packer build packer.json I get the following output from the portion related to Ansible:
docker: Executing Ansible: cd /tmp/packer-provisioner-ansible-local/59a33ccb-bd9f-3b49-65b0-4cc20783f193 && ANSIBLE_FORCE_COLOR=1 PYTHONUNBUFFERED=1 ansible-playbook /tmp/packer-provisioner-ansible-local/59a33ccb-bd9f-3b49-65b0-4cc20783f193/playbook.yml --extra-vars "packer_build_name=docker packer_builder_type=docker packer_http_addr=" -c local -i /tmp/packer-provisioner-ansible-local/59a33ccb-bd9f-3b49-65b0-4cc20783f193/packer-provisioner-ansible-local037775056
docker:
docker: PLAY [apply configuration] *****************************************************
docker:
docker: TASK [setup] *******************************************************************
docker: ok: [127.0.0.1]
docker:
docker: PLAY RECAP *********************************************************************
docker: 127.0.0.1 : ok=1 changed=0 unreachable=0 failed=0
I used to run a different, more useful role this way and it worked fine. I hadn't run it for a few months, and now it has stopped working. Any ideas what I am doing wrong? Thank you!
EDIT:
here is my packer.json:
{
  "builders": [
    {
      "type": "docker",
      "image": "ubuntu:latest",
      "commit": true,
      "run_command": [ "-d", "-i", "-t", "--name", "{{user `ansible_host`}}", "{{.Image}}", "/bin/bash" ]
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": [
        "apt-get -y update",
        "apt-get -y install ansible"
      ]
    },
    {
      "type": "ansible-local",
      "playbook_file": "ansible/playbook.yml",
      "playbook_dir": "ansible",
      "role_paths": [
        "ansible/roles/test"
      ]
    }
  ]
}
This seems to be due to a bug in Packer. Everything works as expected with any Packer version other than 1.0.4. I recommend either downgrading to 1.0.3 or installing the yet-to-be-released 1.1.0 version.
My best guess is that this is being caused by a known and fixed issue about how directories get copied by the docker builder when using Ansible local provisioner.
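When narrowing down issues like this, it can also help to take Packer out of the equation and run the playbook directly against localhost (assuming Ansible is installed locally; the trailing comma makes the -i argument an inline inventory rather than an inventory file):

```shell
# run from the directory containing playbook.yml so roles/test resolves
cd ansible
ansible-playbook -i "localhost," -c local playbook.yml
```

If the Test output and Make test directory tasks show up here but not under Packer, the role itself is fine and the problem is in how Packer copies the role directories.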

Packer shell provisioning hangs when building Docker container

I'm trying to build and provision a Docker container, but it hangs when running the provisioning script.
I'm running on OS X using:
Boot2Docker-cli version: v1.3.1
Packer v0.7.2
docker version:
Client version: 1.3.1
Client API version: 1.15
Server version: 1.3.1
Server API version: 1.15
Running this:
packer build ./packer-build-templates/docker/testsite/testsite.json
packer-build-templates/docker/testsite/testsite.json
{
  "builders": [
    {
      "type": "docker",
      "image": "centos:centos6",
      "commit": "true"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "script": "script.sh"
    }
  ],
  "post-processors": [
    [
      {
        "type": "docker-tag",
        "repository": "test/stuff",
        "tag": "latest"
      }
    ]
  ]
}
```
script.sh
#!/bin/sh -x
echo foo
Output:
==> docker: Creating a temporary directory for sharing data...
==> docker: Pulling Docker image: centos:centos6
docker: centos:centos6: The image you are pulling has been verified
docker: Status: Image is up to date for centos:centos6
==> docker: Starting docker container...
docker: Run command: docker run -v /var/folders/z2/nm_4_yyx2ss9z8wn4h0bfd1jw_pj8j/T/packer-docker208208953:/packer-files -d -i -t centos:centos6 /bin/bash
docker: Container ID: 3ab21c7c21bc4af84e0f0c7bdbac91ee600d1ea0a469bfa51a959faba73fa7e4
==> docker: Provisioning with shell script: script.sh
This is as far as it gets. Then it just sits there. Any idea what's going on here?
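One hedged hypothesis for this symptom with Boot2Docker: the VirtualBox VM only shares /Users by default, so the temporary directory Packer mounts into the container (/var/folders/... in the run command above) is empty inside the VM and the uploaded script never arrives, leaving the provisioner waiting forever. Since Packer creates that directory via the standard Go temp-dir lookup, which honors TMPDIR, one thing to try is pointing the temp dir somewhere under /Users before building (the path is illustrative):

```shell
# /var/folders is not shared with the Boot2Docker VM by default;
# use a directory under /Users so the volume mount actually works
export TMPDIR="$HOME/packer-tmp"
mkdir -p "$TMPDIR"
packer build ./packer-build-templates/docker/testsite/testsite.json
```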
