Docker Image Not Loading on Localhost. Network, etc - docker

I have been having a really hard time figuring out why my Docker image is not loading on localhost. I am using Docker for Windows 7. Below is some information about my Docker image, network, etc., in case it helps in proposing solutions.
I have EXPOSE 80 in my Dockerfile and listen 8080 in my shiny-server.conf file. I run my image with docker run -p 80:8080 imagename. It returns straight to the command line. Then I check http://localhost, http://localhost:8080, http://172.17.0.1, http://172.17.0.1:8080, and http://192.168.99.100, and the app does not show up.
When I run docker ps there is no running container (which makes sense, since the image is not running on localhost anyway). All exited containers show up with docker ps -a. Some containers are listed that I never explicitly created (I guess they come from building images).
I ran docker inspect to find the IP addresses, and here is what I get (there is no IP for fitfarmz, which is my image name):
$ docker inspect -f '{{.Name}} - {{.NetworkSettings.IPAddress }}' $(docker ps -aq)
/mystifying_johnson -
/wonderful_newton -
/flamboyant_brown -
/gallant_austin -
/fitfarmz-2 -
/fitfarmz -
/amazing_easley -
/nervous_lewin -
/silly_wiles -
/lucid_hopper -
/quizzical_kirch -
/boring_gates -
/clever_booth -
/determined_mestorf -
/pedantic_wozniak -
/goofy_goldstine -
/sharp_ardinghelli -
/xenodochial_lamport -
/keen_panini -
/blissful_lamarr -
/suspicious_boyd -
/confident_hodgkin -
/vigorous_lewin - 172.17.0.2
/quirky_khorana -
/agitated_knuth -
I also got info on docker network:
$ docker network inspect bridge
[
{
"Name": "bridge",
"Id": "9105bd8d679ad2e7d814781b4fa99f375cff3a99a047e70ef63e463c35c5ae28"
,
"Created": "2018-09-08T22:22:51.075519268Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16",
"Gateway": "172.17.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
Here is the dockerfile:
FROM r-base:3.5.0
# Install Ubuntu packages
RUN apt-get update && apt-get install -y \
sudo \
gdebi-core \
pandoc \
pandoc-citeproc \
libcurl4-gnutls-dev \
libcairo2-dev/unstable \
libxt-dev \
libssl-dev
# Add shiny user
RUN groupadd shiny \
&& useradd --gid shiny --shell /bin/bash --create-home shiny
# Download and install ShinyServer
RUN wget --no-verbose https://download3.rstudio.org/ubuntu-14.04/x86_64/shiny-server-1.5.7.907-amd64.deb && \
gdebi shiny-server-1.5.7.907-amd64.deb
# Install R packages that are required
RUN R -e "install.packages(c('Benchmarking', 'plotly', 'DT'), repos='http://cran.rstudio.com/')"
RUN R -e "install.packages('shiny', repos='https://cloud.r-project.org/')"
# Copy configuration files into the Docker image
COPY shiny-server.conf /etc/shiny-server/shiny-server.conf
COPY /app /srv/shiny-server/
# Make the ShinyApp available at port 80
EXPOSE 80
# Copy further configuration files into the Docker image
COPY shiny-server.sh /usr/bin/shiny-server.sh
CMD ["/usr/bin/shiny-server.sh"]
And shiny-server.conf:
# Define the user we should use when spawning R Shiny processes
run_as shiny;
# Define a top-level server which will listen on a port
server {
# Instruct this server to listen on port 80. The app at dokku-alt needs to expose PORT 80, or 500, etc. See the docs
listen 8080;
# Define the location available at the base URL
location / {
# Run this location in 'site_dir' mode, which hosts the entire directory
# tree at '/srv/shiny-server'
site_dir /srv/shiny-server;
# Define where we should put the log files for this location
log_dir /var/log/shiny-server;
# Should we list the contents of a (non-Shiny-App) directory when the user
# visits the corresponding URL?
directory_index on;
}
}
Here is the shiny-server.sh file:
mkdir -p /var/log/shiny-server
chown shiny.shiny /var/log/shiny-server
exec shiny-server >> /var/log/shiny-server.log 2>&1
Any ideas what is going wrong? I use the command: docker run -p 80:8080 fitfarmz to run the image.
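Since the container exits straight back to the command line, a first diagnostic step (a sketch, not from the original post) would be to check the exited container's logs; note that -p maps host:container, so -p 80:8080 publishes container port 8080 (where shiny-server listens) on host port 80:
docker ps -a                # find the ID of the exited fitfarmz container
docker logs <container-id>  # startup errors from shiny-server.sh, e.g. a missing '#!/bin/sh' shebang or a missing execute bit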

Related

Docker container exits (code 255) with error "task already exists" and does not restart automatically

I have a basic container that opens up an SSH tunnel to a machine.
Recently I noticed the container has exited with error code 255 with an error message saying the task already exists:
"Id": "7eb92418992a1a1c3e44d6b47257dc503d4fa4d0f26050956533d617ac369479",
"Created": "2022-08-29T18:19:41.286843867Z",
"Path": "sh",
"Args": [
"-c",
"apk update && apk add openssh-client &&\n chmod 400 ~/.ssh/abc.pem\n while true; do \n exec ssh -o StrictHostKeyChecking=no -i ~/.ssh/abc.pem -nNT -L *:33333:localhost:5001 abc#192.168.1.1; \n done"
],
"State": {
"Status": "exited",
"Running": false,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 0,
"ExitCode": 255,
"Error": "task 7eb92418992a1a1c3e44d6b47257dc503d4fa4d0f26050956533d617ac369479: already exists",
"StartedAt": "2022-08-30T19:43:58.575463029Z",
"FinishedAt": "2022-08-30T19:51:23.511624168Z"
},
More importantly, even though the restart policy is always, the Docker engine did not restart the container after it exited.
abc:
container_name: abc
image: alpine:latest
restart: always
command: >
sh -c "apk update && apk add openssh-client &&
chmod 400 ~/.ssh/${PEM_FILENAME}
while true; do
exec ssh -o StrictHostKeyChecking=no -i ~/.ssh/${PEM_FILENAME} -nNT -L *:33333:localhost:5001 abc@${IP};
done"
volumes:
- ./ssh:/root/.ssh:rw
expose:
- 33333
Does anyone know under what circumstances the error task already exists can happen?
Also, any idea why the Docker engine did not restart the container after it exited?
Update 1:
Also, any idea why the Docker engine did not restart the container after it exited? [Answered by @Mihai]
According to Restart policy details:
A restart policy only takes effect after a container starts successfully. In this case, starting successfully means that the container is up for at least 10 seconds and Docker has started monitoring it. This prevents a container which does not start at all from going into a restart loop.
Since we have:
"StartedAt": "2022-08-30T19:43:58.575463029Z",
"FinishedAt": "2022-08-30T19:51:23.511624168Z"
then FinishedAt - StartedAt ~ 8 seconds < 10 seconds, which is why the Docker engine is not restarting the container. I don't think that is good logic: the Docker engine should have a retry mechanism and retry at least, say, 3 times before giving up.
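One way to double-check this on the host (a sketch using inspect fields present on any container, with the container name abc from the compose file) is to look at the restart count and last exit state:
docker inspect -f 'restarts={{.RestartCount}} exit={{.State.ExitCode}} error={{.State.Error}}' abc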
I would suggest this solution:
create Dockerfile in an empty folder as:
FROM alpine:latest
RUN apk update && apk add openssh-client
build the image:
docker build -t alpinessh .
Run it with docker run:
docker run -d \
--restart "always" \
--name alpine_ssh \
-u $(id -u):$(id -g) \
-v $HOME/.ssh:/user/.ssh \
-p 33333:33333 \
alpinessh \
ssh -o StrictHostKeyChecking=no -i /user/.ssh/${PEM_FILENAME} -nNT -L :33333:localhost:5001 abc#${IP}
(make sure to set the env variables that you need)
Running with docker-compose follows the same logic.
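For reference, here is a minimal compose sketch of the same approach (assuming the image built above is tagged alpinessh and your host UID/GID is 1000:1000 — substitute the output of id -u and id -g):
alpinessh:
  image: alpinessh
  container_name: alpine_ssh
  restart: always
  user: "1000:1000"
  volumes:
    - $HOME/.ssh:/user/.ssh:rw
  ports:
    - "33333:33333"
  command: >
    ssh -o StrictHostKeyChecking=no -i /user/.ssh/${PEM_FILENAME}
    -nNT -L :33333:localhost:5001 abc@${IP}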
** NOTE **
Mapping ~/.ssh inside the container is not the best of ideas. It would be better to copy the key to a different location and use it from there. Reason is: inside the container you are root and any files created in your ~/.ssh by the container would be created/accessed by root (uid=0). For example known_hosts - if you don't already have one, you will get a fresh new one owned by root.
For this reason I am running the container as the current UID:GID on the host.

AWS ECS EC2 ECR not updating files after deployment with docker volume nginx

I am stuck on an issue with my volume and ECS.
I would like to attach a volume so I can store .env files etc. there, and don't have to recreate them manually after every deployment.
The problem is that, the way I have it set up, it does not update (or overwrite) the files pushed to ECR. So if I make a code change and push it to git, the following happens:
It creates a new image and pushes it to ECR
It creates new containers from the image pushed to ECR (it dynamically assigns a tag to the image)
When I do docker ps on EC2 I see the new containers, and the container with the code changes is built from the correct image which has just been pushed to ECR. So it seems all is working fine up to this point.
But the code changes don't appear when I refresh the browser, even after clearing caches.
I am attaching a volume to the folder /var/www/html where my app sits, so from my understanding the code should get replaced during deployment. But the problem is, it does not replace the code.
When I remove the volume, I can see the code changes every time a deployment finishes, but then I always have to manually create the .env file and run a couple of commands.
PS: I have another container (mysql) which sets up its volume in exactly the same way, and changes I make in the database are persistent even after a new container is created.
Please see my Dockerfile and taskDefinition.json to see how I deal with volumes.
Dockerfile:
FROM alpine:${ALPINE_VERSION}
# Setup document root
WORKDIR /var/www/html
# Install packages and remove default server definition
RUN apk add --no-cache \
curl \
nginx \
php8 \
php8-ctype \
php8-curl \
php8-dom \
php8-fpm \
php8-gd \
php8-intl \
php8-json \
php8-mbstring \
php8-mysqli \
php8-pdo \
php8-opcache \
php8-openssl \
php8-phar \
php8-session \
php8-xml \
php8-xmlreader \
php8-zlib \
php8-tokenizer \
php8-fileinfo \
php8-json \
php8-xml \
php8-xmlwriter \
php8-simplexml \
php8-dom \
php8-pdo_mysql \
php8-pdo_sqlite \
php8-tokenizer \
php8-pecl-redis \
php8-bcmath \
php8-exif \
supervisor \
nano \
sudo
# Create symlink so programs depending on `php` still function
RUN ln -s /usr/bin/php8 /usr/bin/php
# Configure nginx
COPY tools/docker/config/nginx.conf /etc/nginx/nginx.conf
# Configure PHP-FPM
COPY tools/docker/config/fpm-pool.conf /etc/php8/php-fpm.d/www.conf
COPY tools/docker/config/php.ini /etc/php8/conf.d/custom.ini
# Configure supervisord
COPY tools/docker/config/supervisord.conf /etc/supervisor/conf.d/supervisord.conf
# Make sure files/folders needed by the processes are accessable when they run under the nobody user
RUN chown -R nobody.nobody /var/www/html /run /var/lib/nginx /var/log/nginx
# Install composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN apk update && apk add bash
# Install node npm
RUN apk add --update nodejs npm \
&& npm config set --global loglevel warn \
&& npm install --global marked \
&& npm install --global node-gyp \
&& npm install --global yarn \
# Install node-sass's linux bindings
&& npm rebuild node-sass
# Switch to use a non-root user from here on
USER nobody
# Add application
COPY --chown=nobody ./ /var/www/html/
RUN cat /var/www/html/resources/js/Components/Sections/About.vue
RUN composer install --optimize-autoloader --no-interaction --no-progress --ignore-platform-req=ext-zip --ignore-platform-req=ext-zip
USER root
RUN yarn && yarn run production
USER nobody
VOLUME /var/www/html
# Expose the port nginx is reachable on
EXPOSE 8080
# Let supervisord start nginx & php-fpm
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]
# Configure a healthcheck to validate that everything is up&running
HEALTHCHECK --timeout=10s CMD curl --silent --fail http://127.0.0.1:8080/fpm-ping
taskDefinition.json
{
"containerDefinitions": [
{
"name": "fooweb-nginx-php",
"cpu": 100,
"memory": 512,
"links": [
"mysql"
],
"portMappings": [
{
"containerPort": 8080,
"hostPort": 80,
"protocol": "tcp"
}
],
"essential": true,
"environment": [],
"mountPoints": [
{
"sourceVolume": "fooweb-storage-web",
"containerPath": "/var/www/html"
}
]
},
{
"name": "mysql",
"image": "mysql",
"cpu": 50,
"memory": 512,
"portMappings": [
{
"containerPort": 3306,
"hostPort": 4306,
"protocol": "tcp"
}
],
"essential": true,
"environment": [
{
"name": "MYSQL_DATABASE",
"value": "123"
},
{
"name": "MYSQL_PASSWORD",
"value": "123"
},
{
"name": "MYSQL_USER",
"value": "123"
},
{
"name": "MYSQL_ROOT_PASSWORD",
"value": "123"
}
],
"mountPoints": [
{
"sourceVolume": "fooweb-storage-mysql",
"containerPath": "/var/lib/mysql"
}
]
}
],
"family": "art_web_task_definition",
"taskRoleArn": "arn:aws:iam::123:role/ecs-task-execution-role",
"executionRoleArn": "arn:aws:iam::123:role/ecs-task-execution-role",
"networkMode": "bridge",
"volumes": [
{
"name": "fooweb-storage-mysql",
"dockerVolumeConfiguration": {
"scope": "shared",
"autoprovision": true,
"driver": "local"
}
},
{
"name": "fooweb-storage-web",
"dockerVolumeConfiguration": {
"scope": "shared",
"autoprovision": true,
"driver": "local"
}
}
],
"placementConstraints": [],
"requiresCompatibilities": [
"EC2"
],
"cpu": "1536",
"memory": "1536",
"tags": []
}
So I believe there is some problem with the way I have set up the volume, or maybe there could be some permission issue?
Many thanks!
"I am attaching volume to the folder /var/www/html where sits my app,
so from my understanding this code should get replaced during
deployment."
That's the opposite of how docker volumes work.
It is going to ignore anything in /var/www/html inside the docker image, and instead reuse whatever you have in the mounted volume. Mounted volumes are primarily for persisting files between container restarts and image changes. If there is updated code in /var/www/html inside the image you are building, and you want that updated code to be active when your application is deployed, then you can't mount that as a volume.
If you are specifying a VOLUME instruction in your Dockerfile, then the very first time you ran your container it would have "initialized" the volume with the files that were inside the docker image, as part of the process of creating the volume. After that, the files in the volume on the host server are persisted across container restarts/deployments, and any new updates to that path inside new docker images are ignored.
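One way around this (a sketch, with an illustrative volume name and path that are not from the original task definition) is to drop the VOLUME /var/www/html instruction and the fooweb-storage-web mount, and mount a volume only on the narrower path that actually needs to persist, so the code baked into each new image is served directly:
"mountPoints": [
    {
        "sourceVolume": "fooweb-storage-data",
        "containerPath": "/var/www/html/storage"
    }
]
The .env values could similarly live on a persistent path or be injected via the environment section of the task definition, instead of persisting the whole application directory.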

Login to alpine based cachet container using SSH

I am working on implementing the Cachet status page service and have a requirement to log in to the cachet container using SSH.
Cachet uses nginx:1.17.8-alpine as the base image, and I added the section below to the existing Dockerfile to install and set up SSH. I changed CMD ["/sbin/entrypoint.sh"] to ENTRYPOINT ["/sbin/entrypoint.sh"] to accommodate the CMD command for starting sshd.
RUN apk --update add --no-cache openrc openssh \
&& sed -i s/#PermitRootLogin.*/PermitRootLogin\ yes/ /etc/ssh/sshd_config \
&& echo "root:root" | chpasswd \
&& rm -rf /var/cache/apk/* \
&& sed -ie 's/#Port 22/Port 22/g' /etc/ssh/sshd_config \
&& sed -ri 's/#HostKey \/etc\/ssh\/ssh_host_key/HostKey \/etc\/ssh\/ssh_host_key/g' /etc/ssh/sshd_config \
&& sed -ir 's/#HostKey \/etc\/ssh\/ssh_host_rsa_key/HostKey \/etc\/ssh\/ssh_host_rsa_key/g' /etc/ssh/sshd_config \
&& sed -ir 's/#HostKey \/etc\/ssh\/ssh_host_dsa_key/HostKey \/etc\/ssh\/ssh_host_dsa_key/g' /etc/ssh/sshd_config \
&& sed -ir 's/#HostKey \/etc\/ssh\/ssh_host_ecdsa_key/HostKey \/etc\/ssh\/ssh_host_ecdsa_key/g' /etc/ssh/sshd_config \
&& sed -ir 's/#HostKey \/etc\/ssh\/ssh_host_ed25519_key/HostKey \/etc\/ssh\/ssh_host_ed25519_key/g' /etc/ssh/sshd_config \
&& /usr/bin/ssh-keygen -A \
&& ssh-keygen -t rsa -b 4096 -f /etc/ssh/ssh_host_key
RUN rc-update add sshd
CMD ["/usr/sbin/sshd","-D"]
EXPOSE 22
Also added 22 to the ports section in the docker compose file as below:
cachet:
build:
context: .
args:
- cachet_ver=2.4
ports:
- 80:8000
- 22
After docker-compose build and up, I checked docker inspect and can see a port assigned to 22, as below:
"NetworkSettings": {
"Bridge": "",
"SandboxID": "e17c53db8acf117d93dcc36d9e35f317433e877ea3ac3962f0d28c3403a9cdb0",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"22/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "32955"
}
],
"80/tcp": null,
"8000/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "80"
}
The IP address of the container from docker inspect is as below:
"Networks": {
"cachet-docker_default": {
"IPAMConfig": null,
"Links": [
"cachet-docker_postgres_1:cachet-docker_postgres_1",
"cachet-docker_postgres_1:postgres",
"cachet-docker_postgres_1:postgres_1"
],
"Aliases": [
"e8be9717eb77",
"cachet"
],
"NetworkID": "44521f9cd8b2d17c46a80487ad20e8b27e377c99193ef21d24c0d1c2f70ff451",
"EndpointID": "e735db62d60647687f066bb973243d9ddb4ba20dfd097344f8a487ece37256b5",
"Gateway": "172.18.0.1",
"IPAddress": "172.18.0.3",
But when I try to SSH into the container I get the below error
cachet-docker$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e8be9717eb77 cachet-docker_cachet "/sbin/entrypoint.sh…" 21 hours ago Up 21 hours 80/tcp, 0.0.0.0:32955->22/tcp, 0.0.0.0:80->8000/tcp cachet-docker_cachet_1
c8000b2b64ec postgres:12-alpine "docker-entrypoint.s…" 21 hours ago Up 21 hours 5432/tcp cachet-docker_postgres_1
cachet-docker$ ssh root@172.18.0.3 -p 32955
ssh: connect to host 172.18.0.3 port 32955: Operation timed out
cachet-docker$ ssh root@localhost -p 32955
ssh_exchange_identification: Connection closed by remote host
cachet-docker$ ssh root@localhost
ssh: connect to host localhost port 22: Connection refused
cachet-docker$ ssh root@172.18.0.3
ssh: connect to host 172.18.0.3 port 22: Operation timed out
Before this I tried generating SSH keys and adding config to the Dockerfile to copy the public key into the container: creating a ~/.ssh folder and an authorized_keys file, copying the public key into it, and assigning proper file permissions. But that too did not work and gave me the same errors as above.
Would you please help me understand where I am going wrong?
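A couple of checks that might narrow this down (a sketch, not from an accepted answer): since docker ps shows the container still running /sbin/entrypoint.sh, it is worth confirming that sshd is actually running and listening on port 22 inside the container:
docker exec cachet-docker_cachet_1 ps           # is there an sshd process at all?
docker exec cachet-docker_cachet_1 netstat -tln # busybox netstat; look for a listener on :22
docker logs cachet-docker_cachet_1              # entrypoint output and any sshd errors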

Docker container communication on the same network does not work

I have 3 different Docker containers (on Windows 10) on the same network (core_net), but when I use curl against the backend
curl localhost:7000
the response is:
"curl: (7) Failed to connect to localhost port 7000: Connection refused"
Why?
Docker commands:
Frontend:
docker run -itd --name dam --net=core_net -p 3000:3000 DAM
Backend:
docker run -itd --name core --net=core_net -p 6000:6000 -p 7000:7000 -p 8000:8000 -p 9000:9000 SG
Database:
docker run --name mongodb -p 27017:27017 -d mongo:3
These are the Dockerfiles:
Frontend:
FROM node:4.5.0
# Create app directory
RUN mkdir -p /DAM
WORKDIR /DAM
# Install app dependencies
COPY package.json /DAM
RUN npm install
RUN npm install gulp -g
RUN echo '{ "allow_root": true }' > /root/.bowerrc
RUN npm install bower -g
# Bundle app source
COPY . /DAM
ENV PORT 3000 3001
EXPOSE $PORT
CMD ["gulp", "serve"]
and
Backend:
FROM node:4.5.0
RUN npm install nodemon -g
# Create app directory
RUN mkdir -p /SG
WORKDIR /SG
# Install app dependencies
COPY package.json /SG
RUN npm install
# Bundle app source
COPY . /SG
ENV PORT 6000 7000 8000 9000
EXPOSE $PORT
CMD ["npm", "start"]
Inside the containers ping works, and the network inspect is this:
$ docker network inspect core_net
{
"Name": "core_net",
"Id": "1f9e5426abe397d520360c05c95fee46fe08c98fe5c474c8b52764e491ea23e7",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.18.0.0/16",
"Gateway": "172.18.0.1"
}
]
},
"Internal": false,
"Containers": {
"3d3b8780fba2090b1c2feaddf2e035624529cf5474ad4e6332fe7071c0acbd25": {
"Name": "core",
"EndpointID": "f0a6882e690cf5a7deedfe57ac9b941d239867e3cd58cbdf0ca8a8ee216d53a9",
"MacAddress": "02:42:ac:12:00:04",
"IPv4Address": "172.18.0.4/16",
"IPv6Address": ""
},
"bb6a6642b3a7ab778969f2e00759d3709bdca643cc03f5321beb9b547b574466": {
"Name": "dam",
"EndpointID": "b42b802e219441f833d24971f1e1ea74e093f56e28126a3472a44750c847daa4",
"MacAddress": "02:42:ac:12:00:02",
"IPv4Address": "172.18.0.2/16",
"IPv6Address": ""
},
"cf8dd2018f58987443ff93b1e84fc54b06443b17c7636c7f3b4685948961ba3f": {
"Name": "mongodb",
"EndpointID": "be02d784cbd46261b7a53d642102887cafa0f880c8fe08086b9cc026971ea1be",
"MacAddress": "02:42:ac:12:00:03",
"IPv4Address": "172.18.0.3/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
Communication between mongodb and core works, but between dam and core it does not.
What's the problem?
To connect to another container within a network you cannot use localhost, but you can use the name of the container you want to reach, e.g. curl core:7000.
To use localhost the containers have to share their network stack. You can do that with --network container:core.
And if you don't have to reach the backend from outside of Docker, it is enough to only expose the ports, and not to publish them.
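For example (a sketch using the container names and ports from the question):
# reach the backend from the frontend container by name, over core_net
docker exec dam curl http://core:7000
# or share the backend's network stack so that localhost works inside dam
docker run -itd --name dam --network container:core DAM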

Docker Compose does not bind ports

I have the following Dockerfile for my container:
FROM centos:centos7
# Install software
RUN yum -y update && yum clean all
RUN yum install -y tar gzip wget && yum clean all
# Install io.js
RUN mkdir /root/iojs
RUN wget https://iojs.org/dist/v1.1.0/iojs-v1.1.0-linux-x64.tar.gz
RUN tar -zxvf iojs-v1.1.0-linux-x64.tar.gz -C /root/iojs
RUN rm -f iojs-v1.1.0-linux-x64.tar.gz
# add io.js to path
RUN echo "PATH=$PATH:/root/iojs/iojs-v1.1.0-linux-x64/bin" >> /root/.bashrc
# go to /src
WORKDIR /src
CMD /bin/bash
I build this container and start the image with docker run -i -t -p 8080:8080 -v /srv/source:/usr/src/app -w /usr/src/app --rm iojs-dev bash. Docker binds the port 8080 to the host port 8080, so that I can access the iojs-application from my client. Everything works fine.
Now I want to start my container with docker-compose, using the following docker-compose.yml
webfrontend:
image: iojs-dev
links:
- db
command: bash -c "iojs test.js"
ports:
- "127.0.0.1:8080:8080"
volumes:
- /srv/source:/usr/src/app
- /logs:/logs
db:
image: mariadb
environment:
MYSQL_ROOT_PASSWORD: 12345
When I now run docker-compose run webfrontend bash I cannot access port 8080 on my host. No port was bound. The output of docker port is empty, and docker inspect also shows empty port settings:
"NetworkSettings": {
"Bridge": "docker0",
"Gateway": "172.17.42.1",
"IPAddress": "172.17.0.51",
"IPPrefixLen": 16,
"MacAddress": "02:42:ac:11:00:33",
"PortMapping": null,
"Ports": {
"8080/tcp": null
}
},
"HostConfig": {
"Binds": [
"/srv/source:/usr/src/app:rw",
"/logs:/logs:rw"
],
"CapAdd": null,
"CapDrop": null,
"ContainerIDFile": "",
"Devices": null,
"Dns": null,
"DnsSearch": null,
"ExtraHosts": null,
"Links": [
"/docker_db_1:/docker_webfrontend_run_34/db",
"/docker_db_1:/docker_webfrontend_run_34/db_1",
"/docker_db_1:/docker_webfrontend_run_34/docker_db_1"
],
"LxcConf": null,
"NetworkMode": "bridge",
"PortBindings": null,
"Privileged": false,
"PublishAllPorts": false,
"RestartPolicy": {
"MaximumRetryCount": 0,
"Name": ""
},
"SecurityOpt": null,
"VolumesFrom": []
},
This is intentional behavior for docker-compose run, as per documentation:
When using run, there are two differences from bringing up a container normally:
...
by default no ports will be created in case they collide with already opened ports.
One way to overcome this is to use up instead of run, which:
Builds, (re)creates, starts, and attaches to containers for a service.
Another way, if you're using version 1.1.0 or newer, is to pass the --service-ports option:
Run command with the service's ports enabled and mapped to the host.
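For example, with the docker-compose.yml above, something like this should publish 127.0.0.1:8080 while still running a one-off bash (a sketch):
docker-compose run --service-ports webfrontend bash
# or bring the service up with its configured port mappings
docker-compose up webfrontend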
P.S. Tried editing the original answer, got rejected, twice. Stay classy, SO.
This is intentional behavior for fig run.
Run a one-off command on a service.
One-off commands are started in new containers with the same config as a normal container for that service, so volumes, links, etc will all be created as expected. The only thing different to a normal container is the command will be overridden with the one specified and no ports will be created in case they collide.
source.
fig up is probably the command you're looking for; it will (re)create all containers based on your fig.yml and start them.
