Problem creating docker-compose and Dockerfile: causes "Error response from daemon" - docker

This is my first Dockerfile project with docker-compose. In my project I am trying to create a docker-compose file.
node/Dockerfile
FROM centos:latest
MAINTAINER braulio#braulioti.com.br
LABEL Description="Site Brau.io - API NodeJS"
EXPOSE 3000
RUN yum update -y \
&& yum install -y java-1.8.0-openjdk \
&& yum update -y \
&& yum install -y epel-release \
&& yum install -y nodejs \
&& yum install -y psmisc \
&& npm install -g forever \
&& npm install -g typescript
RUN rm -rf /etc/localtime && ln -s /usr/share/zoneinfo/Brazil/East /etc/localtime
RUN mkdir -p /app
VOLUME ["/app"]
docker-compose.yml
version: '3'
services:
  node:
    build: node
    image: docker_node
    ports:
      - "8082:3000"
    container_name: "brau_io_api"
    volumes:
      - /app/brau_io/api:/app/
    command: /bin/bash
This project results in:
Error response from daemon: Container 65cecc8bdc923c3f596dba91fd059b8268fd390737391d4d91afa7d34325bea1 is not running

In docker-compose you define services and you can link them. For example:
docker-compose.yml
version: '3'
services:
  my_app:
    build: .
    image: my_app:1.0.0
    container_name: my_app_container
    command: ... # you can run a bash file or a command
I created a docker-compose file with a my_app service that builds the my_app image.
You can adapt it to your node container.
Reference

I enabled the tty option in my docker-compose.yml file and it works like a charm (see Reference).
This is my final docker-compose.yml file:
docker-compose.yml
version: '3'
services:
  node:
    build: node
    image: docker_node
    ports:
      - "8082:3000"
    container_name: "brau_io_api"
    volumes:
      - /app/brau_io/api:/app/
    command: /bin/bash
    tty: true
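With tty: true the container keeps a pseudo-terminal open, so /bin/bash no longer exits immediately. As a quick check (a small sketch; brau_io_api is the container_name from the compose file above):
docker-compose up -d
docker ps --filter name=brau_io_api   # the container should now show as "Up"
docker exec -it brau_io_api bash      # attach an interactive shell inside it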

Related

Jenkins SSH remote hosts Can't connect to server

I can access the target over SSH with a password and with the private key from the Jenkins bash. I configured SSH Sites on Jenkins with the same host, user and private key, but I get the following error:
Docker logs:
2022-09-23 05:06:52.357+0000 [id=71] SEVERE o.j.h.p.SSHBuildWrapper$DescriptorImpl#doLoginCheck: Auth fail
2022-09-23 05:06:52.367+0000 [id=71] SEVERE o.j.h.p.SSHBuildWrapper$DescriptorImpl#doLoginCheck: Can't connect to server
Docker-compose:
version: '3'
services:
  jenkins:
    container_name: jenkins
    image: jenkins/jenkins
    ports:
      - "8080:8080"
    volumes:
      - $PWD/jenkins_home:/var/jenkins_home
    networks:
      - net
  remote_host:
    container_name: remote-host
    image: remote-host
    build:
      context: fedora
      dockerfile: Dockerfile
    networks:
      - net
  db_host:
    container_name: db
    image: mysql:5.7
    environment:
      - "MYSQL_ROOT_PASSWORD=PASSWORD"
    volumes:
      - "$PWD/db:/var/lib/mysql"
    networks:
      - net
networks:
  net:
Dockerfile:
FROM fedora
RUN yum update -y
RUN yum -y install unzip
RUN yum -y install openssh-server
RUN useradd RemoteUser && \
echo "RemoteUser:Password"| chpasswd && \
mkdir /home/madchabelo/.ssh && \
chmod 700 /home/madchabelo/.ssh
COPY remote-ki.pub /home/madchabelo/.ssh/authorized_keys
RUN chown madchabelo:madchabelo -R /home/madchabelo/.ssh/ && \
chmod 600 /home/madchabelo/.ssh/authorized_keys
RUN ssh-keygen -A
RUN yum -y install mysql
RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" && \
unzip awscliv2.zip && \
sudo ./aws/install
RUN yum -y install vim
CMD /usr/sbin/sshd -D
I tried with the IP and I get the same error.
Regards
When creating the private key, you should generate it with the following command on Ubuntu 20.04 and later:
ssh-keygen -t ecdsa -m PEM -f remote-key
For a more detailed explanation, see the link below:
https://community.jenkins.io/t/ssh-connection-auth-fail/4121/7
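For illustration, a minimal sketch of generating and installing such a key (the remote-key file name and the remote_user@remote-host target are placeholders, not taken from the question):
# generate an ECDSA key pair in the older PEM format
ssh-keygen -t ecdsa -m PEM -f remote-key
# the private key should start with a PEM header such as -----BEGIN EC PRIVATE KEY-----
head -n 1 remote-key
# install the public half on the remote host
ssh-copy-id -i remote-key.pub remote_user@remote-host
Then paste the contents of the private key file remote-key into the Jenkins SSH site configuration.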

build path either does not exist, is not accessible, or is not a valid URL

I'm trying to figure out Docker so I can run my Django REST Framework + Vue.js project in the cloud. I built a Dockerfile and a docker-compose.yml file to start an Ubuntu machine and run the PostgreSQL, Vue.js and DRF containers. But when I run docker-compose build I get the following message:
build path either does not exist, is not accessible, or is not a valid URL
Here is my Dockerfile:
RUN apt-get update && apt-get install -y \
gcc \
musl-dev \
nodejs \
postgresql-server-dev-10 \
apt-utils \
python3.7 \
python3.7-dev \
python3-pip \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
RUN npm install webpack@2.9
WORKDIR /app
COPY requirements.txt /app
RUN pip3 install --no-cache-dir -r requirements.txt
COPY . /app
docker-compose.yml:
version: '3.5'
services:
  postgres:
    image: postgres:10
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: 8599
      POSTGRES_DB: adserver
    volumes:
      - adserver-data/postgresql/data:/var/lib/postgresql/data
    restart: always
  rest_framework:
    build:
      context: ./app/adserver
      dockerfile: Dockerfile
    depends_on:
      - postgres
    command: ['python manage.py runserver']
    restart: always
  vue:
    build:
      context: ./app/adserver-vue
    depends_on:
      - rest_framework
    command: ['npm run watch']
Please tell me, what am I doing wrong?
Verify the folder names, because the folder app/adserver-vue needs to exist with exactly the name referenced in docker-compose.yml.
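A quick way to check (a sketch, assuming docker-compose.yml sits in the project root): build context paths are resolved relative to the compose file, so both directories must exist next to it with exactly these names.
ls ./app/adserver       # must contain the Dockerfile used by rest_framework
ls ./app/adserver-vue   # must exist for the vue service
If the folders live elsewhere, adjust the context: entries in docker-compose.yml accordingly.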

Docker not found inside container despite the daemon socket being passed as a volume

Can anybody help me? This docker-compose file worked for me a few days ago with the docker command available inside the container, but now it throws docker: not found inside.
The Docker daemon on the host is at /usr/local/bin/docker. It's a Mac.
Any idea? Could you try this on your machines? Thanks
version: '3'
services:
  jenkins:
    container_name: jenkins
    image: jenkins
    build:
      context: jenkins
    # entrypoint: /var/jenkins_home/entrypoint
    ports:
      - "8080:8080"
    volumes:
      - $PWD/jenkins_home:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - AWS_ACCESS_KEY_ID=xxxxx
      - AWS_SECRET_ACCESS_KEY=xxxxx
    networks:
      - net
  remote_host:
    container_name: remote-host
    image: remote-host
    build:
      context: centos
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - net
  db_host:
    container_name: db
    image: mysql:5.7
    environment:
      - MYSQL_ROOT_PASSWORD=1234
    networks:
      - net
networks:
  net:
The Dockerfile for the remote_host service is the following:
RUN yum install -y openssh-server
RUN useradd remote_user && \
echo "1234" | passwd remote_user --stdin && \
mkdir /home/remote_user/.ssh && \
chmod 700 /home/remote_user/.ssh
COPY remote-key.pub /home/remote_user/.ssh/authorized_keys
RUN chown remote_user:remote_user -R /home/remote_user && \
chmod 600 /home/remote_user/.ssh/authorized_keys
RUN /usr/sbin/sshd-keygen > /dev/null 2>&1
RUN yum install -y mysql
RUN yum install -y epel-release && \
yum install -y python-pip && \
pip install --upgrade pip && \
pip install awscli
# CMD /usr/sbin/sshd-keygen -D
CMD tail -f /dev/null
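For context: mounting /var/run/docker.sock only shares the daemon's socket with the container; it does not provide the docker CLI binary itself, so the binary still has to exist inside the image. A hedged way to check from the host (jenkins being the service name from the compose file above):
docker-compose exec jenkins which docker   # empty output means the CLI is missing from the image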

How to define a docker cli service in docker-compose

I have a docker-compose file that runs a few services.
services:
  cli:
    build:
      context: .
      dockerfile: docker/cli/Dockerfile
    volumes:
      - ./drupal8site:/var/www/html/drupal8site
  drupal:
    container_name: drupal
    build:
      context: .
      dockerfile: docker/DockerFile.drupal
      args:
        DOC_ROOT: /var/www/html/drupal8site
    ports:
      - 80:80
    volumes:
      - ./drupal8site:/var/www/html/drupal8site
    restart: always
    environment:
      APACHE_DOCUMENT_ROOT: /var/www/html/drupal8site/web
  mysql:
    image: mysql:5.7
    ports:
      - 3306:3306
    volumes:
      - ./mysql:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: root
I would like to add another service which will be a container in which I could run CLI commands (composer, drush for drupal, php, etc).
The following Dockerfile is how I initially defined the cli service, but it stops right after it is run. How do I define it so that it is part of my docker-compose setup, shares my mounted volume, and lets me connect to it interactively and run CLI commands on it?
FROM php:7.2-cli
#various programs
RUN apt-get update \
&& apt-get install vim --assume-yes \
&& apt-get install git --assume-yes \
&& apt-get install mysql-client --assume-yes
CMD ["bash"]
Thanks,
Yaron
If you want to run automated scripts on Docker images, that is really a job for a CI pipeline; you can use Cloud Foundry or OpenStack for this.
But there are several other questions in this post:
1.) How can I share my mounted volume?
You can pass a volume to a container with the -v option, e.g.:
docker run -it -d -v $(pwd)/localFolder:/exposedFolderFromDocker mydockerhub/myawesomeimage
2.) Can I interactively connect to it and run CLI commands on it?
docker exec -it docker_cli_1 bash
I recommend implementing the features of a Docker image in that image's own Dockerfile, for example by copying and running a prepared shell script:
# your Dockerfile
FROM php:7.2-cli
#various programs
RUN apt-get update \
&& apt-get install vim --assume-yes \
&& apt-get install git --assume-yes \
&& apt-get install mysql-client --assume-yes
# individual changes
COPY your_script.sh /
RUN chown root:root /your_script.sh && \
chmod 0755 /your_script.sh
CMD ["/your_script.sh"]
# a folder to expose
VOLUME /exposedFolderFromDocker
CMD ["bash"]

How to run php-fpm in docker-compose.yml?

I tried to build a container using docker-compose, so I wrote the Dockerfile and docker-compose.yml as follows:
dockerfile
FROM ubuntu:latest
RUN apt-get update
RUN apt-get install -y expect
RUN apt-get -y install software-properties-common
RUN apt-add-repository ppa:ondrej/php
RUN apt-get -y install php7.1 php7.1-fpm
RUN apt-get install php7.1-mysql
RUN apt-get -y install nginx
RUN apt-get -y install vim
COPY default /etc/nginx/sites-available/default
COPY www.conf /etc/php/7.1/fpm/pool.d/www.conf
COPY test /var/www/html/test
CMD service php7.1-fpm start && nginx -g "daemon off;"
docker-compose.yml
version: '3'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "3011:80"
When I run the following commands, php7.1-fpm starts successfully.
docker-compose build
docker-compose up --force-recreate -d
But I want to move the CMD from the Dockerfile to docker-compose, so I changed the file as follows:
docker-compose.yml
command: service php7.1-fpm start && nginx -g "daemon off;"
But this time php7.1-fpm is not running.
How to fix this issue, so that I can run php7.1-fpm in docker-compose.yml?
You cannot use service php7.1-fpm start like that in your Dockerfile, because a container is just a process, not a real virtual machine: when the main process stops, everything else in the container stops with it.
Docker suggests splitting them into different containers: php-fpm and nginx, one image and one container per service.
solution:
docker/php-fpm/Dockerfile
FROM php:7.2-fpm
RUN docker-php-ext-install pdo pdo_mysql mbstring
docker-compose.yml:
version: '2.1'
services:
  nginx:
    image: nginx:latest
    ports:
      - 8001:80
    volumes:
      - ./:/app
      # nginx configs
      - ./docker/nginx/conf/nginx.conf:/etc/nginx/nginx.conf
  php-fpm:
    build: ./docker/php-fpm
    volumes:
      - ./:/app
  php-composer:
    restart: 'no'
    image: composer
    volumes:
      - ./:/app
    command: install
  nodejs:
    restart: 'no'
    image: node:8.9
    volumes:
      - ./:/app
    command: /bin/bash -c "cd /app && npm install && npm run prod"
networks:
  default:
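For completeness, the nginx container still has to be told to forward PHP requests to the php-fpm service over the compose network. A minimal sketch of the mounted ./docker/nginx/conf/nginx.conf (the server block details here are assumptions, not part of the original answer):
events {}
http {
    server {
        listen 80;
        root /app;
        index index.php index.html;
        location ~ \.php$ {
            # "php-fpm" resolves to the php-fpm service; 9000 is PHP-FPM's default port
            fastcgi_pass php-fpm:9000;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        }
    }
}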
