I downloaded a Debian image for Docker and created a container from it.
I successfully installed Apache and MySQL in this container (from /bin/bash).
I want to keep this Docker container running in the background.
I have tried a lot of tutorials (I have created images with a Dockerfile) but nothing really works. Apache and MySQL were run as root...
So I launched this command:
docker run -d -p 80:80 myimagefile /bin/bash -c "while true; do sleep 10; done"
Then I attached a /bin/bash with the exec command and started MySQL and Apache manually (via the /etc/init.d/ scripts). When I type CTRL-D, the bash is killed but the container stays in the background, with MySQL and Apache still alive!
I am wondering if this method is correct or if it is something ugly. Is there a better way to do this?
I do not want to write a Dockerfile that describes how to install Apache and MySQL. I have already made my own image, with my application and all prerequisites.
I just want to start a container from my image and have Apache and MySQL start automatically.
I have a second question: with my method, the container is not reloaded if I reboot the physical computer. How can I start it automatically, with persistence of data?
Thanks
I would suggest running MySQL and Apache in separate containers. Additionally, Docker Hub already has container images that you could re-use:
https://hub.docker.com/_/mysql/
The following is an example of a docker-compose file that describes how to launch Drupal:
version: '2'
services:
  db:
    image: mysql
    environment:
      - MYSQL_ROOT_PASSWORD=letmein
      - MYSQL_DATABASE=drupal
      - MYSQL_USER=drupal
      - MYSQL_PASSWORD=drupal
    volumes:
      - /var/lib/mysql
  web:
    image: drupal
    depends_on:
      - db
    ports:
      - "8080:80"
    volumes:
      - /var/www/html/sites
      - /var/www/private
Run as follows
$ docker-compose up -d
Creating dockercompose_db_1
Creating dockercompose_web_1
Which exposes Drupal on port 8080
$ docker-compose ps
        Name                     Command             State          Ports
--------------------------------------------------------------------------------
dockercompose_db_1    docker-entrypoint.sh mysqld   Up      3306/tcp
dockercompose_web_1   apache2-foreground            Up      0.0.0.0:8080->80/tcp
Note:
When running the Drupal installer, configure it to connect to a host called "db", which is the MySQL container.
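To address the second part of the question (surviving a host reboot and keeping data): compose can also set a restart policy on each service and store the database files in a named volume. A minimal sketch under those assumptions (the volume name mysql_data is just an example):
version: '2'
services:
  db:
    image: mysql
    restart: unless-stopped
    volumes:
      - mysql_data:/var/lib/mysql
volumes:
  mysql_data:
With restart: unless-stopped, the Docker daemon brings the container back up when the machine reboots, and the named volume keeps /var/lib/mysql across container recreations.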
In a docker-compose.yml file I have defined the following service:
php:
  container_name: php
  build:
    context: ./container/php
    dockerfile: Dockerfile
  networks:
    - saasnet
  volumes:
    - ./services:/var/www/html
    - ./logs/php:/usr/local/etc/php-fpm.d/zz-log.conf
  environment:
    - "DB_PORT=3306"
    - "DB_HOST=database"
It all builds fine, and another service (nginx) using the same volume mapping, ./services:/var/www/html, finds the PHP files as expected, so it all works in the browser. So far, so good.
But now I want to go into the container because I want to run composer install from a certain directory inside the container. So I go into the container using:
docker run -it php bash
And I find myself in the container at /var/www/html, where I expect to be able to navigate as if I were in the ./services directory on my host machine, but ls at this point inside the container shows no files at all.
What am I missing or not understanding about how this works?
Your problem is that you are not specifying the volume on your run command - docker run is not aware of your docker-compose.yml. If you want to run it with all the options specified there, you need to either use docker-compose run, or pass all the options to docker run:
docker-compose run php bash
docker run -it -e DB_PORT=3306 -e DB_HOST=database -v "$(pwd)/services:/var/www/html" -v "$(pwd)/logs/php:/usr/local/etc/php-fpm.d/zz-log.conf" php bash
(Note that docker run needs absolute host paths for bind mounts, hence $(pwd) instead of the relative ./ paths that docker-compose accepts.)
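If the php container is already up from docker-compose up, you can also attach a shell to that running container instead of starting a new one (assuming the service is named php as in the compose file above):
docker-compose exec php bash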
I run my container with five Docker commands, as follows:
docker run --privileged -d -v /root/docker/data:/var/lib/mysql -p 8888:80 testimg:2 init
docker ps ---> to get container ID
docker exec -it container_id bash
docker exec container_id systemctl start mariadb
docker exec container_id systemctl start httpd
I was trying to do these steps with docker-compose but failed.
Can somebody make a docker-compose.yml or Dockerfile that gets the same result for me?
You're not going to be able to do this with just a docker-compose.yml, because a compose file doesn't have any mechanism similar to docker exec. Additionally, running systemd (or really any process manager) inside a container is an anti-pattern. It can complicate the management and scaling of your containers, and in most cases doesn't provide you with any benefits.
Why don't you just have two images:
One that starts mariadb
One that starts Apache httpd
That might look something like:
version: "3"
services:
  web:
    image: httpd
    ports:
      - "8888:80"
  db:
    image: mariadb
    volumes:
      - "/root/docker/data:/var/lib/mysql"
You would probably need a custom image for the web server containing whatever application you're running, but you can definitely use the official mariadb image for your database.
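As a rough sketch of such a custom image (only an illustration; it assumes your site lives in a local ./public_html directory), the Dockerfile could be as small as:
FROM httpd:2.4
# copy the application into Apache's default document root
COPY ./public_html/ /usr/local/apache2/htdocs/
In the compose file above, the web service would then use build: . instead of image: httpd.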
I have a Flask application running under Docker Compose with 2 containers, one for Flask and the other for Nginx.
I am able to run the Flask app successfully using the docker-compose up --build -d command on my local machine.
What I want is to save the images into a .tar.gz file, move them to the production server, and run them automatically. I have used the Bash script below to save the Flask and Nginx images into one archive successfully.
#!/bin/bash
for img in $(docker-compose config | awk '{if ($1 == "image:") print $2;}'); do
  images="$images $img"
done
docker save $images | gzip -c > flask_image.tar.gz
I then moved this archive flask_image.tar.gz to my production server, where Docker is installed, and used the command below to load the image and run the containers.
docker load -i flask_image.tar.gz
This command loaded every layer and loaded the images onto my production server. But the containers are not up, which is expected, since I only used the load command.
My question is: is there any command that can load the image and bring the containers up automatically?
docker-compose.yml
version: '3'
services:
  api:
    container_name: flask
    image: flask_img
    restart: always
    build: ./app
    volumes:
      - ~/docker_data/api:/app/uploads
    ports:
      - "8000:5000"
    command: gunicorn -w 1 -b :5000 wsgi:app -t 900
  proxy:
    container_name: nginx
    image: proxy_img
    restart: always
    build: ./nginx
    volumes:
      - ~/docker_data/nginx:/var/log/nginx/
    ports:
      - "85:80"
    depends_on:
      - api
Since you mention you are already pushing the Docker image to Docker Hub, that means the image has a Docker Hub tag that you can also use to pull it.
Usually I use something like this to pull images that are on a registry:
docker run --rm -d --restart=always -p 80:8080 my-dockerhub-user/my-image-name:my-tag
which would run the container in daemon mode and restart it if it were to fail. That's just an example; you'd want the ports to align with whatever Flask is listening on (8080 in my example) and with what your server should be listening on (80 in my example).
The server will automatically pull the image down and run it. You can use tags to promote new images, but in that case you'll have to kill the old container as well.
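If you would rather keep the .tar.gz workflow from the question instead of pulling from a registry, one option (a sketch, assuming the docker-compose.yml is also copied to the production server) is to load the archive and then let compose start the prebuilt images without rebuilding them:
docker load -i flask_image.tar.gz
docker-compose up -d --no-build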
I am trying to execute multiple "docker run" commands from a single Dockerfile image, each with different ports.
Please advise how to execute multiple "docker run" commands from a single Dockerfile with different ports.
It sounds like you want to use docker-compose. Here is an example using nginx and redis (it's how I do it anyway):
services:
  nginx:
    image: nginx
    ports:
      - "80:80"
  redis:
    image: redis
    ports:
      - "1000:6379"
So as you can see, if I run docker-compose up, Docker will spin up two containers, nginx and redis, each running on a different port! If you don't want to use docker-compose, you can do it with docker run:
docker run -d --name nginx -p 80:80 nginx
docker run -d --name redis -p 1000:6379 redis
I don't 100% understand your question, but I hope this helps!
I'm developing a server and its client simultaneously and I'm designing them in Docker containers. I'm using Docker Compose to link them up and it works just fine for production but I can't figure out how to make it work with a development workflow in which I've got a shell running for each one.
My docker-compose-devel.yml:
server:
  image: node:0.10
client:
  image: node:0.10
  links:
    - server
I can do docker-compose up client or even docker-compose run client but what I want is a shell running for both server and client so I can make rapid changes to both as I develop iteratively.
I want to be able to do docker-compose run server bash in one window and docker-compose run --no-deps client bash in another window. The problem with this is that no address for the server is added to /etc/hosts on the client because I'm using docker-compose run instead of up.
The only solution I can figure out is to use docker run and give up on Docker Compose for development. Is there a better way?
Here's a solution I came up with that's hackish; please let me know if you can do better.
docker-compose-devel.yml:
server:
  image: node:0.10
  command: sleep infinity
client:
  image: node:0.10
  links:
    - server
In window 1:
docker-compose --file docker-compose-devel.yml up -d server
docker exec --interactive --tty $(docker-compose --file docker-compose-devel.yml ps -q server) bash
In window 2:
docker-compose --file docker-compose-devel.yml run client bash
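Optionally, --rm can be added so the one-off client container is removed when you exit the shell:
docker-compose --file docker-compose-devel.yml run --rm client bash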
I guess your main problem is about restarting the application when there are changes in the code.
Personally, I launch my applications in development containers using forever.
forever -w -o log/out.log -e log/err.log app.js
The -w option restarts the server when there is a change in the code.
I use a .foreverignore file to exclude the changes on some files:
**/.tmp/**
**/views/**
**/assets/**
**/log/**
If needed, I can also launch a shell in a running container:
docker exec -it my-container-name bash
This way, your two applications could restart independently without the need to launch the commands yourself. And you have the possibility to open a shell to do whatever you want.
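Put together, a development service could look roughly like this (only a sketch; it assumes forever is installed in the image, for example via a custom Dockerfile, and that the code is mounted under /src):
server:
  image: node:0.10
  working_dir: /src
  volumes:
    - ./src:/src
  command: forever -w -o log/out.log -e log/err.log app.js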
Edit: New proposition considering that you need two interactive shells and not simply the possibility to relaunch the apps on code changes.
Having two distinct applications, you could have a docker-compose configuration for each one.
The docker-compose.yml from the "server" app could contain this kind of information (I added different kind of configurations for the example):
server:
  image: node:0.10
  links:
    - db
  ports:
    - "8080:80"
  volumes:
    - ./src:/src
db:
  image: postgres
  environment:
    POSTGRES_USER: dev
    POSTGRES_PASSWORD: dev
The docker-compose.yml from the "client" app could use external_links to be able to connect to the server.
client:
  image: node:0.10
  external_links:
    - project_server_1:server # Use "docker ps" to know the name of the server's container
  ports:
    - "80:80"
  volumes:
    - ./src:/src
Then, use docker-compose run --service-ports service-name bash to launch each configuration with an interactive shell.
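For example, assuming the services are named server and client as above, one interactive shell per terminal:
# in the server app's directory
docker-compose run --service-ports server bash
# in the client app's directory
docker-compose run --service-ports client bash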
Alternatively, the extra_hosts key may also do the trick by reaching the server app through a port exposed on the host machine.
With this solution, each docker-compose.yml file could be committed in the repository of the related app.
First thing to mention: for a development environment you want to use volumes in docker-compose to mount your app into the container when it's started (at runtime). Sorry if you're already doing this and I'm stating the obvious, but it's not clear from your docker-compose.yml.
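For instance (the ./server and ./client host paths are just placeholders for wherever each codebase lives):
server:
  image: node:0.10
  volumes:
    - ./server:/src
client:
  image: node:0.10
  links:
    - server
  volumes:
    - ./client:/src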
To answer your specific question - start your containers normally, then when doing docker-compose ps you'll see the names of your containers, for example 'web_server_1' and 'web_client_1' (where web is the directory containing your docker-compose.yml file, or the name of the project).
When you got name of the container you want to connect to, you can run this command to run bash exactly in the container that's running your server:
docker exec -it web_server_1 bash
If you want to learn more about setting up a development environment for a reasonably complex app, check out this article on development with docker-compose.