I have an Ansible playbook to build a Docker image:
---
- hosts: localhost
  tasks:
    - name: build docker image
      docker_image:
        name: bionic
        path: /
        force: yes
      tags: deb
and this Dockerfile:
FROM ubuntu:bionic
RUN export DEBIAN_FRONTEND=noninteractive; \
apt-get -qq update && \
apt-get -qq install \
software-properties-common git curl wget openjdk-8-jre-headless debhelper devscripts
WORKDIR /workspace
When I run the following command: ansible-playbook build.yml -vvv
I get the following error:
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/devpc/.ansible/tmp/ansible-tmp-1633701673.999151-517949-133730725910177/ > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {
    "changed": false,
    "invocation": {
        "module_args": {
            "api_version": "auto",
            "archive_path": null,
            "build": null,
            "ca_cert": null,
            "client_cert": null,
            "client_key": null,
            "debug": false,
            "docker_host": "unix://var/run/docker.sock",
            "force": true,
            "force_absent": false,
            "force_source": false,
            "force_tag": false,
            "load_path": null,
            "name": "xroad-deb-bionic",
            "path": "/",
            "pull": null,
            "push": false,
            "repository": null,
            "source": null,
            "ssl_version": null,
            "state": "present",
            "tag": "latest",
            "timeout": 60,
            "tls": false,
            "tls_hostname": null,
            "use_ssh_client": false,
            "validate_certs": false
        }
    },
    "msg": "state is present but all of the following are missing: source"
}
Could you please give me a hint on how to debug and understand what this error means?
Thanks for your time and consideration.
The error says a source: key must be present in the docker_image: block.
More specifically, most things in Ansible default to state: present. When you request a docker_image to be present, there are a couple of ways to get it (pulling it from a registry, building it from source, or unpacking it from a tar file). Which of these to use is specified by the source: option, but Ansible does not have a default value for it.
If you're building an image, you need to specify source: build. Having done that, there is also a set of controls under build:. In particular, the path to the Docker build context (probably not /) goes there and not directly under docker_image:.
This leaves you with something like:
---
- hosts: localhost
  tasks:
    - name: build docker image
      docker_image:
        name: bionic
        source: build                      # <-- add
        build:                             # <-- add
          path: /home/user/src/something   # <-- move under build:
        force: yes
      tags: deb
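As a quick sanity check (assuming the Docker CLI is available on the same host the playbook targets), you can run the playbook and then confirm the image exists locally:

ansible-playbook build.yml
docker image ls bionic   # the freshly built image should be listed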
I am stuck on an issue with my volume and ECS.
I would like to attach a volume so I can store .env files etc. there, so that I don't have to recreate them manually after every deployment.
The problem is that, the way I have it set up, the files do not get updated (or overwritten) by what is pushed to ECR. So if I make a code change and push it to git, the following happens:
A new image is created and pushed to ECR
New containers are created from the image pushed to ECR (a tag is dynamically assigned to the image)
When I run docker ps on EC2 I see the new containers, and the container with the code changes is built from the correct image that has just been pushed to ECR. So everything seems to be working fine up to this point.
But the code changes don't appear when I refresh the browser, not even after clearing caches.
I am attaching the volume to the folder /var/www/html where my app sits, so my understanding is that this code should get replaced during deployment. But the problem is, it does not replace the code.
When I remove the volume, I can see the code changes every time a deployment finishes, but then I always have to create the .env file manually and run a couple of commands.
PS: I have another container (mysql) that sets up its volume exactly the same way, and the changes I make in the database persist even after a new container is created.
Please see my Dockerfile and taskDefinition.json to see how I deal with volumes.
Dockerfile:
FROM alpine:${ALPINE_VERSION}
# Setup document root
WORKDIR /var/www/html
# Install packages and remove default server definition
RUN apk add --no-cache \
curl \
nginx \
php8 \
php8-ctype \
php8-curl \
php8-dom \
php8-fpm \
php8-gd \
php8-intl \
php8-json \
php8-mbstring \
php8-mysqli \
php8-pdo \
php8-opcache \
php8-openssl \
php8-phar \
php8-session \
php8-xml \
php8-xmlreader \
php8-zlib \
php8-tokenizer \
php8-fileinfo \
php8-json \
php8-xml \
php8-xmlwriter \
php8-simplexml \
php8-dom \
php8-pdo_mysql \
php8-pdo_sqlite \
php8-tokenizer \
php8-pecl-redis \
php8-bcmath \
php8-exif \
supervisor \
nano \
sudo
# Create symlink so programs depending on `php` still function
RUN ln -s /usr/bin/php8 /usr/bin/php
# Configure nginx
COPY tools/docker/config/nginx.conf /etc/nginx/nginx.conf
# Configure PHP-FPM
COPY tools/docker/config/fpm-pool.conf /etc/php8/php-fpm.d/www.conf
COPY tools/docker/config/php.ini /etc/php8/conf.d/custom.ini
# Configure supervisord
COPY tools/docker/config/supervisord.conf /etc/supervisor/conf.d/supervisord.conf
# Make sure files/folders needed by the processes are accessible when they run under the nobody user
RUN chown -R nobody.nobody /var/www/html /run /var/lib/nginx /var/log/nginx
# Install composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN apk update && apk add bash
# Install node npm
RUN apk add --update nodejs npm \
&& npm config set --global loglevel warn \
&& npm install --global marked \
&& npm install --global node-gyp \
&& npm install --global yarn \
# Install node-sass's linux bindings
&& npm rebuild node-sass
# Switch to use a non-root user from here on
USER nobody
# Add application
COPY --chown=nobody ./ /var/www/html/
RUN cat /var/www/html/resources/js/Components/Sections/About.vue
RUN composer install --optimize-autoloader --no-interaction --no-progress --ignore-platform-req=ext-zip
USER root
RUN yarn && yarn run production
USER nobody
VOLUME /var/www/html
# Expose the port nginx is reachable on
EXPOSE 8080
# Let supervisord start nginx & php-fpm
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]
# Configure a healthcheck to validate that everything is up&running
HEALTHCHECK --timeout=10s CMD curl --silent --fail http://127.0.0.1:8080/fpm-ping
taskDefinition.json
{
  "containerDefinitions": [
    {
      "name": "fooweb-nginx-php",
      "cpu": 100,
      "memory": 512,
      "links": [
        "mysql"
      ],
      "portMappings": [
        {
          "containerPort": 8080,
          "hostPort": 80,
          "protocol": "tcp"
        }
      ],
      "essential": true,
      "environment": [],
      "mountPoints": [
        {
          "sourceVolume": "fooweb-storage-web",
          "containerPath": "/var/www/html"
        }
      ]
    },
    {
      "name": "mysql",
      "image": "mysql",
      "cpu": 50,
      "memory": 512,
      "portMappings": [
        {
          "containerPort": 3306,
          "hostPort": 4306,
          "protocol": "tcp"
        }
      ],
      "essential": true,
      "environment": [
        {
          "name": "MYSQL_DATABASE",
          "value": "123"
        },
        {
          "name": "MYSQL_PASSWORD",
          "value": "123"
        },
        {
          "name": "MYSQL_USER",
          "value": "123"
        },
        {
          "name": "MYSQL_ROOT_PASSWORD",
          "value": "123"
        }
      ],
      "mountPoints": [
        {
          "sourceVolume": "fooweb-storage-mysql",
          "containerPath": "/var/lib/mysql"
        }
      ]
    }
  ],
  "family": "art_web_task_definition",
  "taskRoleArn": "arn:aws:iam::123:role/ecs-task-execution-role",
  "executionRoleArn": "arn:aws:iam::123:role/ecs-task-execution-role",
  "networkMode": "bridge",
  "volumes": [
    {
      "name": "fooweb-storage-mysql",
      "dockerVolumeConfiguration": {
        "scope": "shared",
        "autoprovision": true,
        "driver": "local"
      }
    },
    {
      "name": "fooweb-storage-web",
      "dockerVolumeConfiguration": {
        "scope": "shared",
        "autoprovision": true,
        "driver": "local"
      }
    }
  ],
  "placementConstraints": [],
  "requiresCompatibilities": [
    "EC2"
  ],
  "cpu": "1536",
  "memory": "1536",
  "tags": []
}
So I believe there is some problem with the way I have set up the volume, or maybe there could be some permission issue?
Many thanks!
"I am attaching volume to the folder /var/www/html where sits my app,
so from my understanding this code should get replaced during
deployment."
That's the opposite of how docker volumes work.
It is going to ignore anything in /var/www/html inside the docker image, and instead reuse whatever you have in the mounted volume. Mounted volumes are primarily for persisting files between container restarts and image changes. If there is updated code in /var/www/html inside the image you are building, and you want that updated code to be active when your application is deployed, then you can't mount that as a volume.
If you specify a VOLUME instruction in your Dockerfile, then the very first time you run the container, Docker "initializes" the volume with the files at that path inside the image, as part of creating the volume. After that, the files in the volume on the host server are persisted across container restarts/deployments, and any updates to that path inside newer docker images are ignored.
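If you just need the next deployment to pick up the new code once, a rough sketch (using the volume name from the taskDefinition.json above; note this discards everything currently stored in the volume) is to remove the stale volume on the EC2 host, so that autoprovision re-creates it and re-initializes it from the new image:

# on the EC2 host, after stopping the task that uses the volume
docker volume ls | grep fooweb-storage-web   # confirm the stale volume exists
docker volume rm fooweb-storage-web          # "autoprovision": true re-creates it on the
                                             # next launch, seeded from the new image

The sustainable fix is to not mount a volume over the code at all, and instead persist only what actually needs to survive deployments (for example the .env file or a storage/ subdirectory).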
I'm currently trying to build a Docker image using GitHub Actions (CI). I can successfully build it on my machine and on multiple other x86_64 machines, which I believe is also the architecture GitHub Actions runs on, but when building in CI, I experience the following issue:
standard_init_linux.go:219: exec user process caused: exec format error
The command '/bin/sh -c apt-get update && apt-get install -y build-essential psmisc ifupdown omxplayer x11-xserver-utils xserver-xorg libraspberrypi0 libraspberrypi-dev raspberrypi-kernel-headers cec-utils libpng12-dev git-core wget --no-install-recommends && apt-get clean && rm -rf /var/lib/apt/*' returned a non-zero code: 1
I've searched multiple other threads here, but I wasn't able to find anything useful and I'm not quite sure what else to try. Any help or suggestions would be much appreciated.
Relevant Files:
This is the full build log
This is the Dockerfile
This is the CI file
This is the full repository
Your base image is invalid for amd64:
$ docker image inspect balenalib/raspberry-pi-debian-node:latest-jessie
...
"Architecture": "amd64",
...
$ docker run -it --rm balenalib/raspberry-pi-debian-node:latest-jessie /bin/bash
...
root@2eb37d8359ed:/# dpkg --print-architecture
armhf
That base image won't run on systems without qemu's binfmt_misc configured to run binaries for other platforms.
It's actually not a multi-platform base image at all, and instead is only designed to run on systems with qemu set up (note the media type is a manifest and not a manifest list):
$ regctl image manifest --list balenalib/raspberry-pi-debian-node:latest-jessie
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
  "config": {
    "mediaType": "application/vnd.docker.container.image.v1+json",
    "size": 11726,
    "digest": "sha256:5ec0839ecb046f260ad72751d0c4b08c7a085b147a519619e5a54876643a3231"
  },
  "layers": [
    {
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
      "size": 40222636,
      "digest": "sha256:d84b7435af12678c551b7489227b74c994981386b5bc4875ec512e11f28249c5"
    },
And the image configuration has more pointers to qemu:
$ regctl image inspect balenalib/raspberry-pi-debian-node:latest-jessie
{
  "created": "2019-05-02T22:50:58.241895826Z",
  "architecture": "amd64",
  "os": "linux",
  "config": {
    "Env": [
      "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
      "LC_ALL=C.UTF-8",
      "DEBIAN_FRONTEND=noninteractive",
      "UDEV=off",
      "QEMU_CPU=arm1176",
      "NODE_VERSION=11.14.0",
      "YARN_VERSION=1.12.3"
    ],
This won't work on hosts without qemu's binfmt_misc set up. For building within a GitHub Action, you can use the setup-qemu action:
- name: Set up QEMU
  id: qemu
  uses: docker/setup-qemu-action@v1
  with:
    image: tonistiigi/binfmt:latest
    platforms: all
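Outside of GitHub Actions, the same binfmt_misc handlers can be registered on any Linux host with a one-off privileged container (this is essentially what the action does under the hood):

docker run --privileged --rm tonistiigi/binfmt --install all   # register qemu handlers for all platforms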
I would like to copy a docker image into a docker registry with Jenkins.
When I execute the Ansible playbook I get:
"msg": "Error connecting: Error while fetching server API version: ('Connection aborted.', error(13, 'Permission denied'))"
I suppose that ansible is run under the jenkins user, because of this link and because of the log file:
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: jenkins
Because the ansible playbook tries to do a docker_login, I understand that the jenkins user needs to be able to connect to docker.
So I added jenkins to the docker group:
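(Presumably with something like the following on the Docker host; note that group membership is only picked up on a new login/session, so the Jenkins service has to be restarted afterwards.)

sudo usermod -aG docker jenkins   # add the jenkins user to the docker group
sudo systemctl restart jenkins    # the new group is only seen after a restart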
I don't understand why the permission is still denied.
The whole jenkins log file:
TASK [Log into Docker registry]
************************************************
task path: /var/jenkins_home/workspace/.../build_docker.yml:8
Using module file /usr/lib/python2.7/dist-
packages/ansible/modules/core/cloud/docker/docker_login.py
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: jenkins
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo ~/.ansible/tmp/ansible-tmp-1543388409.78-179785864196502 `" && echo ansible-tmp-1543388409.78-179785864196502="` echo ~/.ansible/tmp/ansible-tmp-1543388409.78-179785864196502 `" ) && sleep 0'
<127.0.0.1> PUT /tmp/tmpFASoHo TO /var/jenkins_home/.ansible/tmp/ansible-tmp-1543388409.78-179785864196502/docker_login.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /var/jenkins_home/.ansible/tmp/ansible-tmp-1543388409.78-179785864196502/ /var/jenkins_home/.ansible/tmp/ansible-tmp-1543388409.78-179785864196502/docker_login.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python /var/jenkins_home/.ansible/tmp/ansible-tmp-1543388409.78-179785864196502/docker_login.py; rm -rf "/var/jenkins_home/.ansible/tmp/ansible-tmp-1543388409.78-179785864196502/" > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {
    "changed": false,
    "failed": true,
    "invocation": {
        "module_args": {
            "api_version": null,
            "cacert_path": null,
            "cert_path": null,
            "config_path": "~/.docker/config.json",
            "debug": false,
            "docker_host": null,
            "email": null,
            "filter_logger": false,
            "key_path": null,
            "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
            "reauthorize": false,
            "registry_url": "https://registry.docker....si",
            "ssl_version": null,
            "timeout": null,
            "tls": null,
            "tls_hostname": null,
            "tls_verify": null,
            "username": "jenkins"
        },
        "module_name": "docker_login"
    },
    "msg": "Error connecting: Error while fetching server API version: ('Connection aborted.', error(13, 'Permission denied'))"
}
to retry, use: --limit @/var/jenkins_home/workspace/.../build_docker.retry
The whole ansible playbook:
---
- hosts: localhost
  vars:
    git_branch: "{{ GIT_BRANCH|default('development') }}"
  tasks:
    - name: Log into Docker registry
      docker_login:
        registry_url: https://registry.docker.....si
        username: ...
        password: ....
In case anyone has the same problem, I found the solution...
My registry doesn't have a valid HTTPS certificate, so you need to add
{
  "insecure-registries" : [ "https://registry.docker.....si" ]
}
inside /etc/docker/daemon.json
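and then restart the Docker daemon so it picks up the change (on a systemd-based host):

sudo systemctl restart docker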
I want to get some basic information about the published versions/tags of a docker image, to know which image:tag combinations I can pull. I would also like to see the time that each tag was most recently published.
Is there a way to do this on the command line?
Docker version 1.10.2, build c3959b1
Basically looking for the equivalent of npm info {pkg} for a docker image.
Not from the command line. You have docker search but it only returns a subset of the data you want, and only for the image with the :latest tag:
> docker search sixeyed/hadoop-dotnet
NAME DESCRIPTION STARS OFFICIAL AUTOMATED
sixeyed/hadoop-dotnet Hadoop with .NET Core installed 1 [OK]
If you want more detail, you'll need to use the registry API, but it only has a catalog endpoint for listing repositories; the issue for search is still open.
Assuming you know the repository name, you can navigate the API - first you need an auth token:
> curl "https://auth.docker.io/token?service=registry.docker.io&scope=repository:sixeyed/hadoop-dotnet:pull"
{"token":"eyJhbG...
Then you pass the token to subsequent requests, e.g. to list the tags:
> curl --header "Authorization: Bearer eyJh..." https://index.docker.io/v2/sixeyed/hadoop-dotnet/tags/list
{"name":"sixeyed/hadoop-dotnet","tags":["2.7.2","latest"]}
And then get all the information about one image by its repository name and tag:
> curl --header "Authorization: Bearer eyJh..." https://index.docker.io/v2/sixeyed/hadoop-dotnet/manifests/latest
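Note that without an Accept header, Docker Hub may return the older schema 1 manifest; to get the schema 2 manifest, request its media type explicitly:

> curl --header "Accept: application/vnd.docker.distribution.manifest.v2+json" --header "Authorization: Bearer eyJh..." https://index.docker.io/v2/sixeyed/hadoop-dotnet/manifests/latest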
I want to get some basic information about the published versions/tags of a docker image, to know which image:tag combinations I can pull.
This is from the tags/list API. Here's a small script that does it:
#!/bin/sh
ref="${1:-library/ubuntu:latest}"
repo="${ref%:*}"
tag="${ref##*:}"
token=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:${repo}:pull" \
| jq -r '.token')
curl -H "Authorization: Bearer $token" \
-s "https://registry-1.docker.io/v2/${repo}/tags/list" | jq .
Running that looks like:
$ ./tags-v2.sh library/ubuntu:latest
{
"name": "library/ubuntu",
"tags": [
"10.04",
"12.04",
"12.04.5",
"12.10",
"13.04",
"13.10",
"14.04",
"14.04.1",
...
I would also like to see the time that each tag was most recently published.
You likely want to pull the image config for this. First you need to pull the manifest, parse the config descriptor, then pull that blob for the config json. Here's a script that will do that for docker images:
#!/bin/sh
ref="${1:-library/ubuntu:latest}"
repo="${ref%:*}"
tag="${ref##*:}"
token=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:${repo}:pull" \
| jq -r '.token')
digest=$(curl -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
-H "Authorization: Bearer $token" \
-s "https://registry-1.docker.io/v2/${repo}/manifests/${tag}" \
| jq -r .config.digest)
curl -H "Accept: application/vnd.docker.container.image.v1+json" \
-H "Authorization: Bearer $token" \
-s -L "https://registry-1.docker.io/v2/${repo}/blobs/${digest}" | jq .
And an example of running that script:
$ ./get-config-v2.sh library/ubuntu:latest
{
  "architecture": "amd64",
  "config": {
    "Hostname": "",
    "Domainname": "",
    "User": "",
    "AttachStdin": false,
    "AttachStdout": false,
    "AttachStderr": false,
    "Tty": false,
    "OpenStdin": false,
    "StdinOnce": false,
    "Env": [
      "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
    ],
    "Cmd": [
      "bash"
    ],
    "Image": "sha256:6c18a628d47eacf574eb93da2324293a0e6c845084cca2ea13efaa3cee4d0799",
    "Volumes": null,
    "WorkingDir": "",
    "Entrypoint": null,
    "OnBuild": null,
    "Labels": null
  },
  "container": "249e88be79ad9986a479c71c15a056946ae26b0c54c1f634f115be6d5f9ba1c8",
  "container_config": {
    "Hostname": "249e88be79ad",
    "Domainname": "",
    "User": "",
    "AttachStdin": false,
    "AttachStdout": false,
    "AttachStderr": false,
    "Tty": false,
    "OpenStdin": false,
    "StdinOnce": false,
    "Env": [
      "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
    ],
    "Cmd": [
      "/bin/sh",
      "-c",
      "#(nop) ",
      "CMD [\"bash\"]"
    ],
    "Image": "sha256:6c18a628d47eacf574eb93da2324293a0e6c845084cca2ea13efaa3cee4d0799",
    "Volumes": null,
    "WorkingDir": "",
    "Entrypoint": null,
    "OnBuild": null,
    "Labels": {}
  },
  "created": "2021-10-16T00:37:47.578710012Z",
  "docker_version": "20.10.7",
  "history": [
    {
      "created": "2021-10-16T00:37:47.226745473Z",
      "created_by": "/bin/sh -c #(nop) ADD file:5d68d27cc15a80653c93d3a0b262a28112d47a46326ff5fc2dfbf7fa3b9a0ce8 in / "
    },
    {
      "created": "2021-10-16T00:37:47.578710012Z",
      "created_by": "/bin/sh -c #(nop) CMD [\"bash\"]",
      "empty_layer": true
    }
  ],
  "os": "linux",
  "rootfs": {
    "type": "layers",
    "diff_ids": [
      "sha256:9f54eef412758095c8079ac465d494a2872e02e90bf1fb5f12a1641c0d1bb78b"
    ]
  }
}
Note that this only gives you details recorded by the build tooling; the registry itself does not expose when an image was uploaded via any OCI APIs (some registries may add their own metadata, but that would be registry specific).
To handle more authentication types, different kinds of images (OCI vs Docker), and similar, I've packaged these commands and more into regctl in the regclient project. Similar projects exist, such as Google's crane command and RedHat's skopeo, each giving access to registries from the command line:
$ regctl tag ls ubuntu
10.04
12.04
12.04.5
12.10
13.04
13.10
14.04
14.04.1
14.04.2
14.04.3
14.04.4
14.04.5
...
$ regctl image config ubuntu
{
  "created": "2021-10-16T00:37:47.578710012Z",
  "architecture": "amd64",
  "os": "linux",
  "config": {
    "Env": [
      "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
    ],
    "Cmd": [
      "bash"
    ]
  },
  "rootfs": {
    "type": "layers",
    "diff_ids": [
      "sha256:9f54eef412758095c8079ac465d494a2872e02e90bf1fb5f12a1641c0d1bb78b"
    ]
  },
  "history": [
    {
      "created": "2021-10-16T00:37:47.226745473Z",
      "created_by": "/bin/sh -c #(nop) ADD file:5d68d27cc15a80653c93d3a0b262a28112d47a46326ff5fc2dfbf7fa3b9a0ce8 in / "
    },
    {
      "created": "2021-10-16T00:37:47.578710012Z",
      "created_by": "/bin/sh -c #(nop) CMD [\"bash\"]",
      "empty_layer": true
    }
  ]
}
If you don't want to use a token:
curl -L -s 'https://registry.hub.docker.com/v2/repositories/<$repo_name>/tags?page=<$page_nb>&page_size=<$page_size>' | jq '."results"[]["name"]'
Where you can get <$repo_name> with:
docker search <$expression>
For example:
$ docker search piwigo
NAME DESCRIPTION STARS OFFICIAL AUTOMATED
linuxserver/piwigo A Piwigo container, brought to you by LinuxS… 152
mathieuruellan/piwigo Easy deployment of piwigo for my personal us… 19 [OK]
lsioarmhf/piwigo 3
hg8496/piwigo 2 [OK]
[…]
$ curl -L -s 'https://registry.hub.docker.com/v2/repositories/linuxserver/piwigo/tags?page=1&page_size=10' | jq '."results"[]["name"]'
"latest"
"12.0.0"
"version-12.0.0"
"12.0.0-ls137"
"arm64v8-12.0.0"
"arm32v7-12.0.0"
"amd64-12.0.0"
"arm64v8-version-12.0.0"
"arm32v7-version-12.0.0"
"amd64-version-12.0.0"
Source: Démarrer avec les containers Docker
I have the following Dockerfile for my container:
FROM centos:centos7
# Install software
RUN yum -y update && yum clean all
RUN yum install -y tar gzip wget && yum clean all
# Install io.js
RUN mkdir /root/iojs
RUN wget https://iojs.org/dist/v1.1.0/iojs-v1.1.0-linux-x64.tar.gz
RUN tar -zxvf iojs-v1.1.0-linux-x64.tar.gz -C /root/iojs
RUN rm -f iojs-v1.1.0-linux-x64.tar.gz
# add io.js to path
RUN echo "PATH=$PATH:/root/iojs/iojs-v1.1.0-linux-x64/bin" >> /root/.bashrc
# go to /src
WORKDIR /src
CMD /bin/bash
I build this container and start the image with docker run -i -t -p 8080:8080 -v /srv/source:/usr/src/app -w /usr/src/app --rm iojs-dev bash. Docker binds container port 8080 to host port 8080, so that I can access the io.js application from my client. Everything works fine.
Now I want to start my container with docker-compose, using the following docker-compose.yml
webfrontend:
  image: iojs-dev
  links:
    - db
  command: bash -c "iojs test.js"
  ports:
    - "127.0.0.1:8080:8080"
  volumes:
    - /srv/source:/usr/src/app
    - /logs:/logs
db:
  image: mariadb
  environment:
    MYSQL_ROOT_PASSWORD: 12345
When I now run docker-compose run webfrontend bash I cannot access port 8080 on my host. No port was bound. The output of docker port is empty, and docker inspect also shows nothing under the port settings:
"NetworkSettings": {
"Bridge": "docker0",
"Gateway": "172.17.42.1",
"IPAddress": "172.17.0.51",
"IPPrefixLen": 16,
"MacAddress": "02:42:ac:11:00:33",
"PortMapping": null,
"Ports": {
"8080/tcp": null
}
},
"HostConfig": {
"Binds": [
"/srv/source:/usr/src/app:rw",
"/logs:/logs:rw"
],
"CapAdd": null,
"CapDrop": null,
"ContainerIDFile": "",
"Devices": null,
"Dns": null,
"DnsSearch": null,
"ExtraHosts": null,
"Links": [
"/docker_db_1:/docker_webfrontend_run_34/db",
"/docker_db_1:/docker_webfrontend_run_34/db_1",
"/docker_db_1:/docker_webfrontend_run_34/docker_db_1"
],
"LxcConf": null,
"NetworkMode": "bridge",
"PortBindings": null,
"Privileged": false,
"PublishAllPorts": false,
"RestartPolicy": {
"MaximumRetryCount": 0,
"Name": ""
},
"SecurityOpt": null,
"VolumesFrom": []
},
This is intentional behavior for docker-compose run, as per documentation:
When using run, there are two differences from bringing up a container normally:
...
by default no ports will be created in case they collide with already opened ports.
One way to overcome this is to use up instead of run, which:
Builds, (re)creates, starts, and attaches to containers for a service.
Another way, if you're using version 1.1.0 or newer, is to pass the --service-ports option:
Run command with the service's ports enabled and mapped to the host.
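For example, to run a one-off shell with the ports from the docker-compose.yml above mapped:

docker-compose run --service-ports webfrontend bash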
P.S. Tried editing the original answer, got rejected, twice. Stay classy, SO.
This is intentional behavior for fig run.
Run a one-off command on a service.
One-off commands are started in new containers with the same config as a normal container for that service, so volumes, links, etc will all be created as expected. The only thing different to a normal container is the command will be overridden with the one specified and no ports will be created in case they collide.
source.
fig up is probably the command you're looking for; it will (re)create all containers based on your fig.yml and start them.