vscode -- How to run `docker` in a task? -- Docker build task does not work

Situation and Problem
I am running macOS Mojave 10.14.5, upgraded bash as described here, and have a TeXlive docker container (basically that one) that I want to call to typeset LaTeX files. This works very well, and execution with the following tasks.json also worked flawlessly up until some recent update (which I cannot pin down, as I am not using this daily).
tasks.json
{
    // See https://go.microsoft.com/fwlink/?LinkId=733558
    // for the documentation about the tasks.json format
    "version": "2.0.0",
    "tasks": [
        {
            "type": "shell",
            "label": "runit",
            "group": {
                "kind": "build",
                "isDefault": true
            },
            "command": "docker",
            "args": [
                "run",
                "-v",
                "${fileDirname}:/doc/",
                "-t",
                "-i",
                "mytexlive",
                "pdflatex",
                "${fileBasename}"
            ],
            "problemMatcher": []
        },
        {
            "type": "shell",
            "label": "test",
            "command": "echo",
            "args": [
                "run",
                "-v",
                "${fileDirname}:/doc/",
                "-t",
                "-i",
                "mytexlive",
                "pdflatex",
                "${fileBasename}"
            ]
        }
    ]
}
Trying to run docker yields a "command not found":
> Executing task: docker run -v /path/to/file:/doc/ -t -i mytexlive pdflatex file_name.tex <
/usr/local/bin/bash: docker: command not found
The terminal process command '/usr/local/bin/bash -c 'docker run -v /path/to/file:/doc/ -t -i mytexlive pdflatex file_name.tex'' failed to launch (exit code: 127)
... while the echo test task works just fine:
> Executing task: echo run -v /path/to/file:/doc/ -t -i mytexlive pdflatex file_name.tex <
run -v /path/to/file:/doc/ -t -i mytexlive pdflatex file_name.tex
Even though it once worked just as described above, and the very same command works in the terminal, it now fails when I execute it as a build task. Hence, my
Question
How do I use docker in a build task, or how do I fix the problem in the setup above?
Additional notes
Trying the following yielded the same "command not found":
{
    "type": "shell", "label": "test",
    "command": "which", "args": ["docker"]
}
... even though this works:
bash$ /usr/local/bin/bash -c 'which docker'
/usr/local/bin/docker
bash$ echo $PATH
/usr/local/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin
Edit: One more note:
I am starting vscode from a context-menu entry, using an Automator script that runs the following bash command with the right-clicked element passed as the variable:
#!/bin/sh
/usr/local/bin/code -n "$1"

Since there hasn't been any progress here and I got help on GitHub, I will just answer myself, so that others who land here searching for a solution won't be let down.
Please give all the acknowledgement to joaomoreno for his answer here.
It turns out that when starting vscode via a context entry, there is an issue with an environment variable. Starting it like this has fixed the problem thus far:
#!/bin/sh
VSCODE_FORCE_USER_ENV=1 /usr/local/bin/code -n "$1"
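As an alternative workaround, the task could sidestep the PATH issue by calling the docker binary through its absolute path, which which docker above reported as /usr/local/bin/docker. This is an untested sketch of the first task from the tasks.json above:
{
    "type": "shell",
    "label": "runit",
    "command": "/usr/local/bin/docker",
    "args": ["run", "-v", "${fileDirname}:/doc/", "-t", "-i", "mytexlive", "pdflatex", "${fileBasename}"]
}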

Related

How do I get a Command to run from a Dockerfile.aws.json on Elastic Beanstalk?

I have a Dockerfile and a Dockerfile.aws.json:
{
    "AWSEBDockerrunVersion": "1",
    "Ports": [{
        "ContainerPort": "5000",
        "HostPort": "5000"
    }],
    "Volumes": [{
        "HostDirectory": "/tmp/download/models",
        "ContainerDirectory": "/models"
    }],
    "Logging": "/var/log/nginx",
    "Command": "mkdir -p /tmp && axel https://example.com/models.zip -o /tmp/models.zip"
}
But when I deploy, it doesn't run the Command that I specified. What am I doing wrong?
If you have an ENTRYPOINT in your Dockerfile, then the Command gets appended as its arguments:
Specify a command to execute in the container. If you specify an Entrypoint, then Command is added as an argument to Entrypoint. For more information, see CMD in the Docker documentation.
Thus your Command mkdir -p /tmp ... will be used as an argument to python3 -m flask run --host=0.0.0.0, resulting in an error. This could explain the issue you are experiencing.
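Concretely, the container ends up trying to execute something roughly like the following (a sketch based on the entrypoint mentioned above):
# With ENTRYPOINT ["python3", "-m", "flask", "run", "--host=0.0.0.0"],
# the Command string is appended as extra arguments, so the container
# effectively runs:
#
#   python3 -m flask run --host=0.0.0.0 mkdir -p /tmp && axel https://example.com/models.zip -o /tmp/models.zip
#
# flask cannot make sense of those extra arguments, hence the failure.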
I initially tried to recreate the issue using your Command structure, but had some problems. What worked was using Command in the following way:
"Command": "/bin/bash -c \"mkdir -p /tmp && axel https://example.com/models.zip -o /tmp/models.zip\""
My Dockerfile did not have an ENTRYPOINT. Thus, to run your python app you could maybe do the following (assuming everything else is correct):
"Command": "/bin/bash -c \"mkdir -p /tmp && axel https://example.com/models.zip -o /tmp/models.zip && python3 -m flask run --host=0.0.0.0\""
Do you have the Dockerfile content?
Most likely your ENTRYPOINT script does not receive parameters, or it is ignoring them.
What you can do is something similar to this:
use an entrypoint script that receives the command passed in aws.json as a parameter, executes it, and then calls your real python command, as in the sketch below.
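A minimal sketch of such a wrapper script (the file name and the final python command are assumptions for illustration):
#!/bin/sh
# entrypoint.sh (hypothetical): run whatever Command was passed in first
# -- it may contain shell operators such as && -- then start the real app.
if [ "$#" -gt 0 ]; then
    sh -c "$*"
fi
exec python3 -m flask run --host=0.0.0.0
Wire it up in the Dockerfile with ENTRYPOINT ["/entrypoint.sh"].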
Or you can replace your ENTRYPOINT with something like this:
ENTRYPOINT ["/bin/bash"]
and make your default command:
CMD ["python3 ..."]
This way, when running locally, you only run the python3 command.
When running in AWS, you can change your Command and append the python call to the end, as mentioned by Marcin. Both cases work.

Packer fails my docker build with error "sudo: not found" despite sudo being present

I'm trying to build a packer image with docker on it and I want docker to create a docker image with a custom script. The relevant portion of my code is (note that the top builder double-checks that sudo is installed):
{
    "type": "shell",
    "inline": [
        "apt-get install sudo"
    ]
},
{
    "type": "docker",
    "image": "python:3",
    "commit": true,
    "changes": [
        "RUN pip install Flask",
        "CMD [\"python\", \"echo.py\"]"
    ]
}
The relevant portion of my screen output is:
==> docker: Provisioning with shell script: /var/folders/s8/g1_gobbldygook/T/packer-shell23453453245
docker: /tmp/script_1234.sh: 3: /tmp/script_1234.sh: sudo: not found
==> docker: Killing the container: 34234hashvomit234234
Build 'docker' errored: Script exited with non-zero exit status: 127
The script in question is not one of mine. It's some randomly generated script that has a different series of four numbers every time I build. I'm new to both packer and docker, so maybe it's obvious what the problem is, but it's not to me.
There seem to be a few problems with your packer input. Since you haven't included the complete input file it's hard to tell, but I notice a couple of things:
You probably need to run apt-get update before calling apt-get install sudo. Without that, any package metadata cached in the image is probably stale. If I try to build an image using your input, it fails with:
E: Unable to locate package sudo
While not a problem in this context, it's good to explicitly include -y on the apt-get command line when you're running it non-interactively:
apt-get -y install sudo
In situations where apt-get is attached to a terminal, this will prevent it from prompting for confirmation. This is not a necessary change to your input, but I figure it's good to be explicit.
Based on the docs and on my testing, you can't include a RUN statement in the changes block of a docker builder. That fails with:
Stderr: Error response from daemon: run is not a valid change command
Fortunately, we can move that pip install command into a shell provisioner.
With those changes, the following input successfully builds an image:
{
    "builders": [{
        "type": "docker",
        "image": "python:3",
        "commit": true
    }],
    "provisioners": [{
        "type": "shell",
        "inline": [
            "apt-get update",
            "apt-get -y install sudo",
            "pip install Flask"
        ]
    }],
    "post-processors": [[{
        "type": "docker-tag",
        "repository": "packer-test",
        "tag": "latest"
    }]]
}
(NB: Tested using Packer v1.3.5)

Ansible not executing main.yml

I am using Ansible local inside a Packer script to configure a Docker image. I have a role test with a main.yml file that's supposed to output some information and create a directory to show that the script actually ran. However, main.yml doesn't seem to get run.
Here is my playbook.yml:
---
- name: apply configuration
  hosts: all
  remote_user: root
  roles:
    - test
test/tasks/main.yml:
---
- name: Test output
  shell: echo 'testing output from test'
- name: Make test directory
  file: path=/test state=directory owner=root
When running this via packer build packer.json I get the following output from the portion related to Ansible:
docker: Executing Ansible: cd /tmp/packer-provisioner-ansible-local/59a33ccb-bd9f-3b49-65b0-4cc20783f193 && ANSIBLE_FORCE_COLOR=1 PYTHONUNBUFFERED=1 ansible-playbook /tmp/packer-provisioner-ansible-local/59a33ccb-bd9f-3b49-65b0-4cc20783f193/playbook.yml --extra-vars "packer_build_name=docker packer_builder_type=docker packer_http_addr=" -c local -i /tmp/packer-provisioner-ansible-local/59a33ccb-bd9f-3b49-65b0-4cc20783f193/packer-provisioner-ansible-local037775056
docker:
docker: PLAY [apply configuration] *****************************************************
docker:
docker: TASK [setup] *******************************************************************
docker: ok: [127.0.0.1]
docker:
docker: PLAY RECAP *********************************************************************
docker: 127.0.0.1 : ok=1 changed=0 unreachable=0 failed=0
I used to run a different, more useful role this way and it worked fine. I hadn't run this for a few months, and now it has stopped working. Any ideas what I am doing wrong? Thank you!
EDIT:
here is my packer.json:
{
    "builders": [
        {
            "type": "docker",
            "image": "ubuntu:latest",
            "commit": true,
            "run_command": [ "-d", "-i", "-t", "--name", "{{user `ansible_host`}}", "{{.Image}}", "/bin/bash" ]
        }
    ],
    "provisioners": [
        {
            "type": "shell",
            "inline": [
                "apt-get -y update",
                "apt-get -y install ansible"
            ]
        },
        {
            "type": "ansible-local",
            "playbook_file": "ansible/playbook.yml",
            "playbook_dir": "ansible",
            "role_paths": [
                "ansible/roles/test"
            ]
        }
    ]
}
This seems to be due to a bug in Packer. Everything works as expected with any Packer version other than 1.0.4. I recommend either downgrading to 1.0.3 or installing the yet-to-be-released 1.1.0 version.
My best guess is that this is being caused by a known and fixed issue about how directories get copied by the docker builder when using Ansible local provisioner.
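A quick way to check and pin the version (a sketch; the download URL pattern and install location are assumptions, adjust for your platform):
# Check which Packer version is in use:
packer version
# If it reports 1.0.4, install 1.0.3 instead, e.g. on macOS:
curl -LO https://releases.hashicorp.com/packer/1.0.3/packer_1.0.3_darwin_amd64.zip
unzip -o packer_1.0.3_darwin_amd64.zip -d /usr/local/bin
packer version   # should now report 1.0.3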

How to configure rabbitmq.config inside Docker containers?

I'm using the official RabbitMQ Docker image (https://hub.docker.com/_/rabbitmq/)
I've tried editing the rabbitmq.config file inside the container after running
docker exec -it <container-id> /bin/bash
However, this seems to have no effect on the rabbitmq server running in the container. Restarting the container obviously didn't help either since Docker starts a completely new instance.
So I assumed that the only way to configure rabbitmq.config for a Docker container was to set it up before the container starts running, which I was able to partly do using the image's supported environment variables.
Unfortunately, not all configuration options are supported by environment variables. For instance, I want to set {auth_mechanisms, ['PLAIN', 'AMQPLAIN', 'EXTERNAL']} in rabbitmq.config.
I then found the RABBITMQ_CONFIG_FILE environment variable, which should allow me to point at the file I want to use as my config file. However, I've tried the following with no luck:
docker service create --name rabbitmq --network rabbitnet \
-e RABBITMQ_ERLANG_COOKIE='mycookie' --hostname = "{{Service.Name}}{{.Task.Slot}}" \
--mount type=bind,source=/root/mounted,destination=/root \
-e RABBITMQ_CONFIG_FILE=/root/rabbitmq.config rabbitmq
The default rabbitmq.config file, containing:
[ { rabbit, [ { loopback_users, [ ] } ] } ]
is what's in the container once it starts.
What's the best way to configure rabbitmq.config inside Docker containers?
The config file lives in /etc/rabbitmq/rabbitmq.config, so if you mount your own config file with something like this (I'm using docker-compose here to set up the image):
volumes:
  - ./conf/myrabbit.conf:/etc/rabbitmq/rabbitmq.config
that should do it.
In case you run into the issue of the configuration file getting created as a directory, try absolute paths.
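For example, a minimal docker run sketch using an absolute host path (the host path is an assumption; the image tag matches the script below):
docker run -d --name rabbitmq \
  -v /absolute/path/to/conf/myrabbit.conf:/etc/rabbitmq/rabbitmq.config:ro \
  -p 5672:5672 -p 15672:15672 \
  rabbitmq:3-management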
I'm able to run RabbitMQ with a mounted config using the following bash script:
#RabbitMQ props
env=dev
rabbitmq_name=dev_rabbitmq
rabbitmq_port=5672
#RabbitMQ container
if [ "$(docker ps -aq -f name=${rabbitmq_name})" ]; then
echo Cleanup the existed ${rabbitmq_name} container
docker stop ${rabbitmq_name} && docker rm ${rabbitmq_name}
echo Create and start new ${rabbitmq_name} container
docker run --name ${rabbitmq_name} -d -p ${rabbitmq_port}:15672 -v $PWD/rabbitmq/${env}/data:/var/lib/rabbitmq:rw -v $PWD/rabbitmq/${env}/definitions.json:/opt/definitions.json:ro -v $PWD/rabbitmq/${env}/rabbitmq.config:/etc/rabbitmq/rabbitmq.config:ro rabbitmq:3-management
else
echo Create and start new ${rabbitmq_name} container
docker run --name ${rabbitmq_name} -d -p ${rabbitmq_port}:15672 -v $PWD/rabbitmq/${env}/data:/var/lib/rabbitmq:rw -v $PWD/rabbitmq/${env}/definitions.json:/opt/definitions.json:ro -v $PWD/rabbitmq/${env}/rabbitmq.config:/etc/rabbitmq/rabbitmq.config:ro rabbitmq:3-management
fi
I also have the following config files in my rabbitmq/dev dir
definitions.json
{
    "rabbit_version": "3.7.3",
    "users": [{
        "name": "welib",
        "password_hash": "su55YoHBYdenGuMVUvMERIyUAqJoBKeknxYsGcixXf/C4rMp",
        "hashing_algorithm": "rabbit_password_hashing_sha256",
        "tags": ""
    }, {
        "name": "admin",
        "password_hash": "x5RW/n1lq35QfY7jbJaUI+lgJsZp2Ioh6P8CGkPgW3sM2/86",
        "hashing_algorithm": "rabbit_password_hashing_sha256",
        "tags": "administrator"
    }],
    "vhosts": [{
        "name": "/"
    }, {
        "name": "dev"
    }],
    "permissions": [{
        "user": "welib",
        "vhost": "dev",
        "configure": ".*",
        "write": ".*",
        "read": ".*"
    }, {
        "user": "admin",
        "vhost": "/",
        "configure": ".*",
        "write": ".*",
        "read": ".*"
    }],
    "topic_permissions": [],
    "parameters": [],
    "global_parameters": [{
        "name": "cluster_name",
        "value": "rabbit#98c821300e49"
    }],
    "policies": [],
    "queues": [],
    "exchanges": [],
    "bindings": []
}
rabbitmq.config
[
    {rabbit, [
        {loopback_users, []},
        {vm_memory_high_watermark, 0.7},
        {vm_memory_high_watermark_paging_ratio, 0.8},
        {log_levels, [{channel, warning}, {connection, warning}, {federation, warning}, {mirroring, info}]},
        {heartbeat, 10}
    ]},
    {rabbitmq_management, [
        {load_definitions, "/opt/definitions.json"}
    ]}
].

Packer shell provisioning hangs when building Docker container

I'm trying to build and provision a docker container, but it hangs when it reaches the provisioning script.
I'm running on OS X using:
Boot2Docker-cli version: v1.3.1
Packer v0.7.2
docker version:
Client version: 1.3.1
Client API version: 1.15
Server version: 1.3.1
Server API version: 1.15
Running this:
packer build ./packer-build-templates/docker/testsite/testsite.json
packer-build-templates/docker/testsite/testsite.json
{
    "builders": [
        {
            "type": "docker",
            "image": "centos:centos6",
            "commit": "true"
        }
    ],
    "provisioners": [
        {
            "type": "shell",
            "script": "script.sh"
        }
    ],
    "post-processors": [
        [
            {
                "type": "docker-tag",
                "repository": "test/stuff",
                "tag": "latest"
            }
        ]
    ]
}
script.sh
#!/bin/sh -x
echo foo
Output:
==> docker: Creating a temporary directory for sharing data...
==> docker: Pulling Docker image: centos:centos6
docker: centos:centos6: The image you are pulling has been verified
docker: Status: Image is up to date for centos:centos6
==> docker: Starting docker container...
docker: Run command: docker run -v /var/folders/z2/nm_4_yyx2ss9z8wn4h0bfd1jw_pj8j/T/packer-docker208208953:/packer-files -d -i -t centos:centos6 /bin/bash
docker: Container ID: 3ab21c7c21bc4af84e0f0c7bdbac91ee600d1ea0a469bfa51a959faba73fa7e4
==> docker: Provisioning with shell script: script.sh
This is as far as it gets. Then it just sits there. Any idea what's going on here?
