Packer shell provisioning hangs when building Docker container

I'm trying to build and provision a Docker container, but the build hangs when it reaches the provisioning script.
I'm running on OS X with:
Boot2Docker-cli version: v1.3.1
Packer v0.7.2
docker version output:
Client version: 1.3.1
Client API version: 1.15
Server version: 1.3.1
Server API version: 1.15
Running this:
packer build ./packer-build-templates/docker/testsite/testsite.json
packer-build-templates/docker/testsite/testsite.json
{
  "builders": [
    {
      "type": "docker",
      "image": "centos:centos6",
      "commit": "true"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "script": "script.sh"
    }
  ],
  "post-processors": [
    [
      {
        "type": "docker-tag",
        "repository": "test/stuff",
        "tag": "latest"
      }
    ]
  ]
}
script.sh
#!/bin/sh -x
echo foo
Output:
==> docker: Creating a temporary directory for sharing data...
==> docker: Pulling Docker image: centos:centos6
docker: centos:centos6: The image you are pulling has been verified
docker: Status: Image is up to date for centos:centos6
==> docker: Starting docker container...
docker: Run command: docker run -v /var/folders/z2/nm_4_yyx2ss9z8wn4h0bfd1jw_pj8j/T/packer-docker208208953:/packer-files -d -i -t centos:centos6 /bin/bash
docker: Container ID: 3ab21c7c21bc4af84e0f0c7bdbac91ee600d1ea0a469bfa51a959faba73fa7e4
==> docker: Provisioning with shell script: script.sh
This is as far as it gets. Then it just sits there. Any idea what's going on here?
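Since the output stops right where the shell provisioner starts, two generic diagnostics can help narrow it down (a sketch, not a fix; PACKER_LOG is Packer's debug-logging switch, and the container ID comes from the log above):
# Re-run with Packer's debug logging to see the last thing it attempts:
PACKER_LOG=1 packer build ./packer-build-templates/docker/testsite/testsite.json
# While the build hangs, poke at the container Packer started:
docker exec -it 3ab21c7c21bc /bin/bash
ls /packer-files   # is the shared temp directory actually populated?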

Related

Nx - Run Commands - Docker

I'm currently trying to configure my workspace.json file to include the commands to build and push a Docker image. These commands work when run directly in the terminal (bash or Windows), but I would like to set them up in the CLI so I can pass in the appName.
Project Structure
---/
Dockerfile.api
workspace.json
....
Configuration
{
  "version": 1,
  "projects": {
    "api": {
      "heroku-deploy": {
        "builder": "@nrwl/workspace:run-commands",
        "options": {
          "commands": [
            {
              "command": "docker build -t registry.heroku.com/{args.appName}/web -f ./Dockerfile.api ."
            },
            {
              "command": "docker push registry.heroku.com/{args.appName}/web"
            },
            {
              "command": "heroku container:release web -a {args.appName}"
            }
          ]
        }
      }
    }
  }
}
Running the command
nx run api:heroku-deploy2 --args="--appName=nx-api-leopard" --verbose
Error
#1 [internal] load build definition from Dockerfile.web
#1 sha256:8f9045a5ed51569cdcff9c6e9f9052e7e724435b6f2eeec087ea2770af2a3b0d
#1 transferring dockerfile: 2B done
#1 DONE 0.0s
failed to solve with frontend dockerfile.v0: failed to read dockerfile: open /var/lib/docker/tmp/buildkit-mount178807147/Dockerfile.web: no such file or directory
Warning: @nrwl/run-commands command "docker build -t registry.heroku.com/my-nx-app/web -f ./Dockerfile.web ." exited with non-zero status code
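Since the same commands succeed in a plain terminal, one way to isolate the issue is to run the failing build by hand from the workspace root with the argument already substituted (a diagnostic sketch; tag and Dockerfile taken from the config above). Note that the error output references ./Dockerfile.web while the configured command passes ./Dockerfile.api, which is worth double-checking:
# If this succeeds, the problem is in how the target resolves
# {args.appName} or its working directory, not in Docker itself:
docker build -t registry.heroku.com/nx-api-leopard/web -f ./Dockerfile.api .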
Resources
NX Run Commands

vscode -- How to run `docker` in a task? -- Docker build task does not work

Situation and Problem
I am running macOS Mojave 10.14.5, upgraded bash as described here, and have a TeXlive Docker container (basically that one) that I call to typeset LaTeX files. This works very well, and execution via the following tasks.json also worked flawlessly up until some recent update (which I cannot pin down, as I don't use this daily).
tasks.json
{
  // See https://go.microsoft.com/fwlink/?LinkId=733558
  // for the documentation about the tasks.json format
  "version": "2.0.0",
  "tasks": [
    {
      "type": "shell",
      "label": "runit",
      "group": {
        "kind": "build",
        "isDefault": true
      },
      "command": "docker",
      "args": [
        "run",
        "-v",
        "${fileDirname}:/doc/",
        "-t",
        "-i",
        "mytexlive",
        "pdflatex",
        "${fileBasename}"
      ],
      "problemMatcher": []
    },
    {
      "type": "shell",
      "label": "test",
      "command": "echo",
      "args": [
        "run",
        "-v",
        "${fileDirname}:/doc/",
        "-t",
        "-i",
        "mytexlive",
        "pdflatex",
        "${fileBasename}"
      ]
    }
  ]
}
Trying to run docker yields a "command not found":
> Executing task: docker run -v /path/to/file:/doc/ -t -i mytexlive pdflatex file_name.tex <
/usr/local/bin/bash: docker: command not found
The terminal process command '/usr/local/bin/bash -c 'docker run -v /path/to/file:/doc/ -t -i mytexlive pdflatex file_name.tex'' failed to launch (exit code: 127)
... while the echo task works just fine:
> Executing task: echo run -v /path/to/file:/doc/ -t -i mytexlive pdflatex file_name.tex <
run -v /path/to/file:/doc/ -t -i mytexlive pdflatex file_name.tex
Even though it once worked exactly as described above, and the very same command still works in the terminal, it now fails when executed as a build task. Hence my
Question
How do I use docker in a build task?
or fix the problem in the setup above.
Additional notes
Trying the following yielded the same "command not found"
{
  "type": "shell",
  "label": "test",
  "command": "which",
  "args": ["docker"]
}
... even though this works:
bash$ /usr/local/bin/bash -c 'which docker'
/usr/local/bin/docker
bash$ echo $PATH
/usr/local/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin
Edit: one more note:
I am starting vscode via a context-menu entry, using an Automator script that runs the following bash command with the right-clicked element passed in as the variable:
#!/bin/sh
/usr/local/bin/code -n "$1"
Since there hasn't been any progress here and I got help on GitHub, I will answer myself so that others who land here searching for a solution aren't let down.
Please give all the acknowledgement to joaomoreno for his answer here.
It turns out that starting vscode via a context-menu entry causes an issue with an environment variable. Starting it like this has fixed the problem so far:
#!/bin/sh
VSCODE_FORCE_USER_ENV=1 /usr/local/bin/code -n "$1"
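After relaunching vscode through the updated wrapper, a quick sanity check from the integrated terminal (a sketch; the expected results come from the which/echo output above):
echo $PATH     # should now include /usr/local/bin
which docker   # should print /usr/local/bin/docker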

Ansible not executing main.yml

I am using the ansible-local provisioner inside a Packer template to configure a Docker image. I have a role test whose main.yml is supposed to output some information and create a directory, so I can see that the script actually ran. However, main.yml doesn't seem to get run.
Here is my playbook.yml:
---
- name: apply configuration
  hosts: all
  remote_user: root
  roles:
    - test
test/tasks/main.yml:
---
- name: Test output
  shell: echo 'testing output from test'

- name: Make test directory
  file: path=/test state=directory owner=root
When running this via packer build packer.json I get the following output from the portion related to Ansible:
docker: Executing Ansible: cd /tmp/packer-provisioner-ansible-local/59a33ccb-bd9f-3b49-65b0-4cc20783f193 && ANSIBLE_FORCE_COLOR=1 PYTHONUNBUFFERED=1 ansible-playbook /tmp/packer-provisioner-ansible-local/59a33ccb-bd9f-3b49-65b0-4cc20783f193/playbook.yml --extra-vars "packer_build_name=docker packer_builder_type=docker packer_http_addr=" -c local -i /tmp/packer-provisioner-ansible-local/59a33ccb-bd9f-3b49-65b0-4cc20783f193/packer-provisioner-ansible-local037775056
docker:
docker: PLAY [apply configuration] *****************************************************
docker:
docker: TASK [setup] *******************************************************************
docker: ok: [127.0.0.1]
docker:
docker: PLAY RECAP *********************************************************************
docker: 127.0.0.1 : ok=1 changed=0 unreachable=0 failed=0
I used to run a different, more useful role this way and it worked fine. I hadn't run this for a few months, and now it has stopped working. Any ideas what I am doing wrong? Thank you!
EDIT:
Here is my packer.json:
{
  "builders": [
    {
      "type": "docker",
      "image": "ubuntu:latest",
      "commit": true,
      "run_command": [ "-d", "-i", "-t", "--name", "{{user `ansible_host`}}", "{{.Image}}", "/bin/bash" ]
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": [
        "apt-get -y update",
        "apt-get -y install ansible"
      ]
    },
    {
      "type": "ansible-local",
      "playbook_file": "ansible/playbook.yml",
      "playbook_dir": "ansible",
      "role_paths": [
        "ansible/roles/test"
      ]
    }
  ]
}
This seems to be due to a bug in Packer. Everything works as expected with any Packer version other than 1.0.4. I recommend either downgrading to 1.0.3 or installing the yet-to-be-released 1.1.0.
My best guess is that this is caused by a known, already-fixed issue with how the docker builder copies directories when using the ansible-local provisioner.
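If you want to confirm you are on the affected release before up- or downgrading, checking the installed version is enough (a sketch; 1.0.3 is the downgrade target suggested above):
packer version   # the regression described above was reported against 1.0.4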

How to configure rabbitmq.config inside Docker containers?

I'm using the official RabbitMQ Docker image (https://hub.docker.com/_/rabbitmq/)
I've tried editing the rabbitmq.config file inside the container after running
docker exec -it <container-id> /bin/bash
However, this seems to have no effect on the rabbitmq server running in the container. Restarting the container obviously didn't help either since Docker starts a completely new instance.
So I assumed that the only way to configure rabbitmq.config for a Docker container was to set it up before the container starts running, which I was able to partly do using the image's supported environment variables.
Unfortunately, not all configuration options are supported by environment variables. For instance, I want to set {auth_mechanisms, ['PLAIN', 'AMQPLAIN', 'EXTERNAL']} in rabbitmq.config.
I then found the RABBITMQ_CONFIG_FILE environment variable, which should allow me to point to the file I want to use as my config file. However, I've tried the following with no luck:
docker service create --name rabbitmq --network rabbitnet \
  -e RABBITMQ_ERLANG_COOKIE='mycookie' --hostname="{{.Service.Name}}{{.Task.Slot}}" \
  --mount type=bind,source=/root/mounted,destination=/root \
  -e RABBITMQ_CONFIG_FILE=/root/rabbitmq.config rabbitmq
Once the container starts, however, it still contains only the default rabbitmq.config:
[ { rabbit, [ { loopback_users, [ ] } ] } ]
What's the best way to configure rabbitmq.config inside Docker containers?
The config file lives in /etc/rabbitmq/rabbitmq.config, so if you mount your own config file with something like this (I'm using docker-compose here to set up the image):
volumes:
  - ./conf/myrabbit.conf:/etc/rabbitmq/rabbitmq.config
that should do it.
If you run into the configuration file being created as a directory, try absolute paths.
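For reference, the plain docker run equivalent of that compose volume would look something like this (a sketch; the host path is an example, and the answer below does essentially the same thing):
docker run -d --name rabbitmq \
  -v "$PWD/conf/myrabbit.conf:/etc/rabbitmq/rabbitmq.config:ro" \
  rabbitmq:3-management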
I'm able to run RabbitMQ with a mounted config using the following bash script:
# RabbitMQ props
env=dev
rabbitmq_name=dev_rabbitmq
rabbitmq_port=5672

# RabbitMQ container: remove any existing container, then start a fresh one
if [ "$(docker ps -aq -f name=${rabbitmq_name})" ]; then
  echo "Cleaning up the existing ${rabbitmq_name} container"
  docker stop ${rabbitmq_name} && docker rm ${rabbitmq_name}
fi
echo "Creating and starting new ${rabbitmq_name} container"
# Note: this maps host port ${rabbitmq_port} to the management UI port (15672)
docker run --name ${rabbitmq_name} -d \
  -p ${rabbitmq_port}:15672 \
  -v $PWD/rabbitmq/${env}/data:/var/lib/rabbitmq:rw \
  -v $PWD/rabbitmq/${env}/definitions.json:/opt/definitions.json:ro \
  -v $PWD/rabbitmq/${env}/rabbitmq.config:/etc/rabbitmq/rabbitmq.config:ro \
  rabbitmq:3-management
I also have the following config files in my rabbitmq/dev dir
definitions.json
{
  "rabbit_version": "3.7.3",
  "users": [{
    "name": "welib",
    "password_hash": "su55YoHBYdenGuMVUvMERIyUAqJoBKeknxYsGcixXf/C4rMp",
    "hashing_algorithm": "rabbit_password_hashing_sha256",
    "tags": ""
  }, {
    "name": "admin",
    "password_hash": "x5RW/n1lq35QfY7jbJaUI+lgJsZp2Ioh6P8CGkPgW3sM2/86",
    "hashing_algorithm": "rabbit_password_hashing_sha256",
    "tags": "administrator"
  }],
  "vhosts": [{
    "name": "/"
  }, {
    "name": "dev"
  }],
  "permissions": [{
    "user": "welib",
    "vhost": "dev",
    "configure": ".*",
    "write": ".*",
    "read": ".*"
  }, {
    "user": "admin",
    "vhost": "/",
    "configure": ".*",
    "write": ".*",
    "read": ".*"
  }],
  "topic_permissions": [],
  "parameters": [],
  "global_parameters": [{
    "name": "cluster_name",
    "value": "rabbit@98c821300e49"
  }],
  "policies": [],
  "queues": [],
  "exchanges": [],
  "bindings": []
}
rabbitmq.config
[
  {rabbit, [
    {loopback_users, []},
    {vm_memory_high_watermark, 0.7},
    {vm_memory_high_watermark_paging_ratio, 0.8},
    {log_levels, [{channel, warning}, {connection, warning}, {federation, warning}, {mirroring, info}]},
    {heartbeat, 10}
  ]},
  {rabbitmq_management, [
    {load_definitions, "/opt/definitions.json"}
  ]}
].
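To verify the mounted file was actually picked up, one option is to dump the broker's effective configuration from inside the running container (a sketch; the container name matches the script above):
# Values from the mounted file (e.g. vm_memory_high_watermark) should appear here:
docker exec dev_rabbitmq rabbitmqctl environment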

Permission errors running jenkins inside docker using persistent volumes with marathon and mesos

I am trying to get Jenkins running inside Docker, using Marathon and Mesos to launch a Jenkins Docker image.
I used the Create Application button, which produces the following JSON:
{
  "type": "DOCKER",
  "volumes": [
    {
      "containerPath": "/var/jenkins_home",
      "hostPath": "jenkins_home",
      "mode": "RW"
    },
    {
      "containerPath": "jenkins_home",
      "mode": "RW",
      "persistent": {
        "size": 200
      }
    }
  ],
  "docker": {
    "image": "jenkins",
    "network": "HOST",
    "privileged": false,
    "parameters": [],
    "forcePullImage": false
  }
}
stdout shows
--container="mesos-c8bd5b26-6e71-4e18-b490-821dbf7edd9d-S0.ac0b4dbb-10e4-4684-a4df-9539258d77ee" --docker="docker" --docker_socket="/var/run/docker.sock" --help="false" --initialize_driver_logging="true" --launcher_dir="/home/ajazam/mesos-0.28.0/build/src" --logbufsecs="0" --logging_level="INFO" --mapped_directory="/mnt/mesos/sandbox" --quiet="false" --sandbox_directory="/var/lib/mesos/data/slaves/c8bd5b26-6e71-4e18-b490-821dbf7edd9d-S0/frameworks/6079a596-90a8-4fa5-9c92-9215558737d1-0000/executors/jenkins-t7.9be44260-f99c-11e5-b0ac-e4115bb26fcc/runs/ac0b4dbb-10e4-4684-a4df-9539258d77ee" --stop_timeout="0ns"
Registered docker executor on slave4
Starting task jenkins-t7.9be44260-f99c-11e5-b0ac-e4115bb26fcc
Can not write to /var/jenkins_home/copy_reference_file.log. Wrong volume permissions?
stderr shows
I0403 14:04:51.026866 6569 exec.cpp:143] Version: 0.28.0
I0403 14:04:51.032097 6585 exec.cpp:217] Executor registered on slave c8bd5b26-6e71-4e18-b490-821dbf7edd9d-S0
WARNING: Your kernel does not support swap limit capabilities, memory limited without swap.
touch: cannot touch ‘/var/jenkins_home/copy_reference_file.log’: Permission denied
I am using
marathon 1.0.0 RC1
mesos 0.28.0
docker 1.10.3
OS is ubuntu 14.04.4 LTS
Does anybody have any pointers to where I'm going wrong? My feeling is that the problem is to do with the persistent volume and how it is mapped into the Jenkins container.
I got it working:
git clone https://github.com/jenkinsci/docker.git onto your agent nodes (I've done it on all of mine).
Insert # in front of lines 16 and 17 of the Dockerfile to comment out the user setup, e.g.
# RUN groupadd -g ${gid} ${group} \
#     && useradd -d "$JENKINS_HOME" -u ${uid} -g ${gid} -m -s /bin/bash ${user}
Run sudo docker build .
Use sudo docker tag xyz jenkins to tag the resulting image as jenkins, and then create an application using docker, jenkins and persistent volumes as before.
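An alternative sketch that avoids rebuilding the image: as far as I know the stock jenkins image creates its user with uid/gid 1000, so giving that user ownership of the host-side volume should also clear the copy_reference_file.log permission error. The path below is an example:
# Assumption: the jenkins user inside the stock image is uid/gid 1000.
sudo chown -R 1000:1000 /path/to/jenkins_home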
