I'm new to Docker and I'm trying to move my current web stack into it.
Currently I use this configuration:
varnish -> nginx -> php-fpm -> mysql
I have already converted php-fpm and nginx, and I'm now working on Varnish.
When I run my image with a plain docker run command everything is fine, but when I start it through docker-compose the container restarts indefinitely.
Command:
name="varnish"
cd $installDirectory/$name
docker build -t $name .
docker rm -f $(docker ps -a | grep $name | cut -d' ' -f1)
docker run -d -P --name $name \
-p 80:80 \
--link nginx:nginx \
-v /home/webstack/varnish/:/etc/varnish/ \
-t $name
My docker-compose.yml:
php-fpm:
  restart: always
  build: ./php-fpm
  volumes:
    - "/home/webstack/www/:/var/www/"
nginx:
  restart: always
  build: ./nginx
  ports:
    - "8080:8080"
  volumes:
    - "/home/webstack/nginx/:/etc/nginx/"
    - "/home/webstack/log/:/var/log/nginx/"
    - "/home/webstack/www/:/var/www/"
  links:
    - "php-fpm:php-fpm"
varnish:
  restart: always
  build: ./varnish
  ports:
    - "80:80"
  volumes:
    - "/home/webstack/varnish/:/etc/varnish/"
  links:
    - "nginx:nginx"
docker logs webstack_varnish_1 shows no output, and docker ps -a shows:
688c5aace1b3 webstack_varnish "/bin/bash" 16 seconds ago Restarting (0) 5 seconds ago 0.0.0.0:80->80/tcp
Here is my Dockerfile:
FROM debian:jessie
# Update apt sources
RUN apt-get -qq update
RUN apt-get install -y curl apt-transport-https
RUN sh -c "curl https://repo.varnish-cache.org/GPG-key.txt | apt-key add -"
RUN echo "deb https://repo.varnish-cache.org/debian/ jessie varnish-4.1" > /etc/apt/sources.list.d/varnish-cache.list
# Update the package repository
RUN apt-get -qq update
# Install varnish
RUN apt-get install -y varnish
# Expose port 80
EXPOSE 80
What am I doing wrong, please?
Regards.
Your varnish Dockerfile seems to be missing ENTRYPOINT and/or CMD directives that would actually launch Varnish.
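For example, a CMD along these lines would keep Varnish running in the foreground (a sketch, assuming the default.vcl mounted into /etc/varnish/ is the config you want and that a 256 MB malloc storage is acceptable):

CMD ["varnishd", "-F", "-f", "/etc/varnish/default.vcl", "-a", ":80", "-s", "malloc,256m"]

Without any CMD, the container falls back to the base image's default (/bin/bash here, as your docker ps output shows), which exits immediately when no TTY is attached, so Compose keeps restarting it.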
We found the solution here:
https://github.com/docker/compose/issues/2563
I had to add tty: true to my varnish service.
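For reference, the varnish service from the compose file above ends up looking roughly like this with that workaround applied:

varnish:
  restart: always
  build: ./varnish
  tty: true
  ports:
    - "80:80"
  volumes:
    - "/home/webstack/varnish/:/etc/varnish/"
  links:
    - "nginx:nginx"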
Regards.
I can access the target with an SSH password and with the private key from the Jenkins container's bash, but when I configure SSH Sites on Jenkins with the same host, user, and private key, I get the following error:
Docker logs:
2022-09-23 05:06:52.357+0000 [id=71] SEVERE o.j.h.p.SSHBuildWrapper$DescriptorImpl#doLoginCheck: Auth fail
2022-09-23 05:06:52.367+0000 [id=71] SEVERE o.j.h.p.SSHBuildWrapper$DescriptorImpl#doLoginCheck: Can't connect to server
Docker-compose:
version: '3'
services:
  jenkins:
    container_name: jenkins
    image: jenkins/jenkins
    ports:
      - "8080:8080"
    volumes:
      - $PWD/jenkins_home:/var/jenkins_home
    networks:
      - net
  remote_host:
    container_name: remote-host
    image: remote-host
    build:
      context: fedora
      dockerfile: Dockerfile
    networks:
      - net
  db_host:
    container_name: db
    image: mysql:5.7
    environment:
      - "MYSQL_ROOT_PASSWORD=PASSWORD"
    volumes:
      - "$PWD/db:/var/lib/mysql"
    networks:
      - net
networks:
  net:
Dockerfile:
FROM fedora
RUN yum update -y
RUN yum -y install unzip
RUN yum -y install openssh-server
RUN useradd RemoteUser && \
echo "RemoteUser:Password"| chpasswd && \
mkdir /home/madchabelo/.ssh && \
chmod 700 /home/madchabelo/.ssh
COPY remote-ki.pub /home/madchabelo/.ssh/authorized_keys
RUN chown madchabelo:madchabelo -R /home/madchabelo/.ssh/ && \
chmod 600 /home/madchabelo/.ssh/authorized_keys
RUN ssh-keygen -A
RUN yum -y install mysql
RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" && \
unzip awscliv2.zip && \
sudo ./aws/install
RUN yum -y install vim
CMD /usr/sbin/sshd -D
I tried with the IP address and I get the same error.
Regards
When creating the private key, you should generate it with the following command on Ubuntu 20.04 and later:
ssh-keygen -t ecdsa -m PEM -f remote-key
For a more detailed explanation, see the link below:
https://community.jenkins.io/t/ssh-connection-auth-fail/4121/7
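After generating the key pair, the public half has to land where the question's Dockerfile expects it before rebuilding; a rough sketch, assuming the remote-key file name from the command above and the fedora build context and remote-ki.pub path from the question:

cp remote-key.pub fedora/remote-ki.pub   # public key into the build context used by the COPY line
docker-compose build remote_host
docker-compose up -d remote_host

The private half (remote-key) is then what gets pasted into the SSH Sites configuration in Jenkins.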
I have created this simple docker-compose.yml with two services. One is the main service (ubuntu), from which I want to execute Docker commands isolated from the Docker host. The other is the Docker dind service without TLS, which should act as the Docker daemon for the ubuntu container.
docker-compose.yml
version: '3.9'
services:
  dind:
    image: docker:dind
    container_name: dind
    privileged: true
    restart: unless-stopped
  ubuntu:
    build: .
    container_name: ubuntu
    privileged: true
    stdin_open: true
    tty: true
    environment:
      DOCKER_HOST: tcp://dind:2375
    depends_on:
      - dind
This is the Dockerfile needed to build the ubuntu service:
Dockerfile
FROM ubuntu:focal
ARG DEBIAN_FRONTEND=noninteractive
# Configure APT
RUN apt-get update \
&& apt-get -y install \
apt-utils \
dialog \
fakeroot \
software-properties-common
RUN apt-get update && apt-get -y install \
ca-certificates \
curl \
gnupg \
lsb-release \
&& curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg \
&& echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null \
&& apt-get update && apt-get -y install docker-ce docker-ce-cli containerd.io docker-compose-plugin
I run docker-compose up and then exec docker ps inside the ubuntu container, but it cannot connect to the Docker daemon running on the dind service:
eduardo@pc:~$ docker-compose up -d
dind is up-to-date
ubuntu is up-to-date
eduardo@pc:~$ docker exec -it ubuntu docker ps
Cannot connect to the Docker daemon at tcp://dind:2375. Is the docker daemon running?
What I don't understand is why it doesn't detect the daemon running in dind from the ubuntu container.
Is there any solution to this problem? If it can't be done without TLS, I'm happy to do it with TLS instead; I don't mind either way.
Edit: I checked whether the dind container was running at the time I executed docker ps in the ubuntu container, and yes, it is running.
eduardo@pc:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
fdc141223f33 docker:dind "dockerd-entrypoint.…" About a minute ago Up About a minute 2375-2376/tcp dind
bb68d3298522 docker-compose-example_ubuntu "bash" 3 minutes ago Up 3 minutes ubuntu
Here is a working example with more recent versions (it does use TLS):
version: '3'
services:
  docker:
    image: docker:20.10.17-dind-alpine3.16
    privileged: yes
    volumes:
      - certs:/certs
  docker-client:
    image: docker:20.10.17-cli
    command: sh -c 'while [ 1 ]; do sleep 1; done'
    environment:
      DOCKER_HOST: tcp://docker:2376
      DOCKER_TLS_VERIFY: 1
      DOCKER_CERT_PATH: /certs/client
    volumes:
      - certs:/certs
volumes:
  certs:
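A quick way to verify the setup once it is up (service names taken from the compose file above):

docker-compose up -d
docker-compose exec docker-client docker ps

The client picks up DOCKER_HOST, DOCKER_TLS_VERIFY, and DOCKER_CERT_PATH from its environment, so no extra flags are needed.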
It seems that using docker:18.09-dind as the image instead of docker:dind works:
version: '3.9'
services:
  dind:
    image: docker:18.09-dind
    container_name: dind
    privileged: true
    restart: unless-stopped
  ubuntu:
    build: .
    container_name: ubuntu
    privileged: true
    stdin_open: true
    tty: true
    environment:
      DOCKER_HOST: tcp://dind:2375
    depends_on:
      - dind
Output:
eduardo@pc:~$ docker-compose up -d
dind is up-to-date
ubuntu is up-to-date
eduardo@pc:~$ docker exec -it ubuntu docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
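If you would rather stay on a current docker:dind tag, my understanding is that newer dind images enable TLS by default and only listen on 2376; a hedged variant of the original dind service that explicitly disables that, so the plain tcp://dind:2375 endpoint from the question should work:

dind:
  image: docker:dind
  container_name: dind
  privileged: true
  restart: unless-stopped
  environment:
    DOCKER_TLS_CERTDIR: ""   # empty value skips certificate generation, so dockerd listens on plain 2375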
I have a docker-compose file that runs a few services.
services:
  cli:
    build:
      context: .
      dockerfile: docker/cli/Dockerfile
    volumes:
      - ./drupal8site:/var/www/html/drupal8site
  drupal:
    container_name: drupal
    build:
      context: .
      dockerfile: docker/DockerFile.drupal
      args:
        DOC_ROOT: /var/www/html/drupal8site
    ports:
      - 80:80
    volumes:
      - ./drupal8site:/var/www/html/drupal8site
    restart: always
    environment:
      APACHE_DOCUMENT_ROOT: /var/www/html/drupal8site/web
  mysql:
    image: mysql:5.7
    ports:
      - 3306:3306
    volumes:
      - ./mysql:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: root
I would like to add another service that will be a container in which I can run CLI commands (composer, drush for Drupal, php, etc.).
The following Dockerfile is how I initially defined the cli service, but it stops right after it is run. How do I define it so that it is part of my docker-compose setup, shares my mounted volume, and I can interactively connect to it and run CLI commands on it?
FROM php:7.2-cli
#various programs
RUN apt-get update \
&& apt-get install vim --assume-yes \
&& apt-get install git --assume-yes \
&& apt-get install mysql-client --assume-yes
CMD ["bash"]
Thanks,
Yaron
If you want to run automated scripts on Docker images, that is really a job for a CI pipeline; you can use Cloud Foundry or OpenStack for this.
But there are other questions in this post:
1.) How can I share my mounted volume?
You can pass a volume with the -v option to a container, e.g.:
docker run -it -d -v $(pwd)/localFolder:/exposedFolderFromDocker mydockerhub/myawesomeimage
2.) Can I interactively connect to it and run CLI commands on it?
docker exec -it docker_cli_1 bash
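Under Compose specifically, the cli container from the question exits right away because bash has nothing attached to it; a sketch of the service definition that keeps it alive so the exec above works (names and volume taken from the question's compose file):

cli:
  build:
    context: .
    dockerfile: docker/cli/Dockerfile
  stdin_open: true   # keep STDIN open so bash does not exit immediately
  tty: true          # allocate a terminal for interactive use
  volumes:
    - ./drupal8site:/var/www/html/drupal8site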
I recommend implementing the features of a Docker image in that image's own Dockerfile, for example by copying and running a prepared shell script:
# your Dockerfile
FROM php:7.2-cli
#various programs
RUN apt-get update \
&& apt-get install vim --assume-yes \
&& apt-get install git --assume-yes \
&& apt-get install mysql-client --assume-yes
# individual changes
COPY your_script.sh /
RUN chown root:root /your_script.sh && \
chmod 0755 /your_script.sh
CMD ["/your_script.sh"]
# a folder to expose
VOLUME /exposedFolderFromDocker
CMD ["bash"]
I tried the approach with tty: true and stdin_open: true inside docker-compose.yml and attaching to the container ID (following http://www.chris-kelly.net/2016/07/25/debugging-rails-with-pry-within-a-docker-container/), but it just hangs.
I also tried docker-compose run --service-ports web, following this article https://blog.carbonfive.com/2015/03/17/docker-rails-docker-compose-together-in-your-development-workflow/, but the request also hangs when it hits binding.pry.
Could supervisord affect this?
Here's the Dockerfile:
FROM ruby:2.3.3
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev supervisor
RUN curl -sL https://deb.nodesource.com/setup_9.x | bash - && apt-get install -yq nodejs
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -
RUN echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
RUN apt-get update && apt-get install -y yarn
RUN mkdir /example
WORKDIR /example
COPY Gemfile /example/Gemfile
COPY Gemfile.lock /example/Gemfile.lock
RUN bundle install
COPY . /example
COPY docker/supervisor.conf /etc/supervisor/conf.d/example.conf
RUN cd client-app && npm install
CMD supervisord -n
And the docker-compose.yml:
version: '3'
services:
  db:
    image: postgres
  web:
    build: .
    environment:
      API_HOST: http://localhost:3000/api
    volumes:
      - .:/example
    ports:
      - "3000:3000"
      - "4200:4200"
    depends_on:
      - db
And the supervisor.conf:
[program:rails]
directory=/example
command=rails server -b 0.0.0.0 -p 3000
autostart=true
autorestart=true
[program:npm]
directory=/example
command=/bin/bash -c "yarn && cd client-app && npm run docker-start"
autostart=true
autorestart=true
(assuming you're running Postgres with something like docker run -d --name=postgres ...)
I would ditch compose and try
docker build -t web .
docker run --link postgres:db -it -p 3000:3000 -p 4200:4200 web
If that fails, I'd punt and
docker run --link postgres:db -it -p 3000:3000 -p 4200:4200 web bash
Then try running rails s, rails c, etc. manually within the container.
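If you want to keep Compose, another thing worth trying along the same lines is bypassing supervisord for the web process, so the Rails server owns the terminal and binding.pry can read from it (service name web and port taken from the compose file above):

docker-compose run --service-ports web bundle exec rails server -b 0.0.0.0 -p 3000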
Since I installed capybara-webkit, I can't launch my specs with docker-compose. The following command just hangs:
$ docker-compose run web xvfb-run -a bundle exec rspec
I thought I had a problem with capybara-webkit, so I created an SO question and an issue on the repo, but it seems it's more a problem of interaction between docker-compose and xvfb.
If I first run
$ docker-compose run web bash
and then
$ xvfb-run -a bundle exec rspec spec
it works fine. I have no clue why.
Edit 31/08/17
As requested, here is the docker-compose file:
version: '2'
services:
  db:
    image: mysql
    environment:
      - MYSQL_ROOT_PASSWORD=XXXXX
    volumes:
      - mysql-data:/var/lib/mysql
  redis:
    image: redis
    ports:
      - "6379:6379"
    volumes:
      - redis:/data
  web:
    build: .
    command: bundle exec rails s -p 3000 -b '0.0.0.0'
    volumes:
      - .:/app_dir
      - app-gems:/usr/local/bundle
    ports:
      - "3000:3000"
    depends_on:
      - db
      - redis
volumes:
  mysql-data:
    driver: local
  redis:
    driver: local
  app-gems:
    driver: local
And the Dockerfile:
FROM ruby:2.4.1
RUN apt-get update -qq && apt-get install -y \
build-essential \
libpq-dev \
nodejs \
xvfb \
qt5-default \
libqt5webkit5-dev \
gstreamer1.0-plugins-base \
gstreamer1.0-tools \
gstreamer1.0-x
RUN mkdir /app_dir
WORKDIR /app_dir
ADD Gemfile* /app_dir/
RUN bundle install
COPY . .
In docker-compose.yml
command: ./start.sh
And in the start.sh file:
#!/bin/bash
xvfb-run "run whatever"
Posting comments as answer since I need formatting
Can you try changing the following:
command: bundle exec rails s -p 3000 -b '0.0.0.0'
to
entrypoint: xvfb-run -a bundle exec rspec
and try docker-compose up
Also, if that doesn't work, then try adding tty: true to the service.
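Put together, the web service from the question would look roughly like this with both suggestions applied (a sketch based on the compose file above):

web:
  build: .
  entrypoint: xvfb-run -a bundle exec rspec
  tty: true
  volumes:
    - .:/app_dir
    - app-gems:/usr/local/bundle
  ports:
    - "3000:3000"
  depends_on:
    - db
    - redis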