Join docker images in a single container [duplicate]

This question already has an answer here:
Build a single image based on docker compose containers
(1 answer)
Closed 9 months ago.
I have an application composed of a front end, a back end, and a mongodb database, each dockerized in its own container. When I build them with docker-compose I get as many images as there are parts in my application (3).
Is there any way to build a single container from these 3 images and therefore a single image?
Thanks

You can write a Dockerfile if you want to run your application as a single container. It will give you a single image as well.
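A sketch of that approach, assuming hypothetical frontend/ and backend/ directories and a supervisord.conf you would have to write yourself; it starts from the official mongo image and adds node plus supervisord so one container can run mongod, the backend, and the frontend build (workable, but generally discouraged compared to one process per container):

# Hypothetical single-image Dockerfile (all paths are assumptions)
FROM mongo:4.4

# supervisord will run several processes in one container
RUN apt-get update && \
    apt-get install -y nodejs npm supervisor && \
    rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY backend/ backend/
RUN cd backend && npm install

COPY frontend/ frontend/
RUN cd frontend && npm install && npm run build

# supervisord.conf (not shown) would declare three [program:...] entries:
# mongod, the backend server, and a static server for frontend/build
COPY supervisord.conf /etc/supervisor/conf.d/app.conf

EXPOSE 80
CMD ["supervisord", "-n"]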

I guess you could do this if you really wanted to, but the preferred way is to use docker-compose. I would suggest that you create a docker-compose.yml file that sets this up:
nginx->frontend (possibly with server side rendering) -> backend -> mongodb
The idea behind docker-compose is to easily get that multi-container application up and running using a docker-compose.yml file; you can then bring the whole application up with:
$ docker-compose up
You could set it up with something like this (a hypothetical docker-compose.yml file, but with your correct values it should work; let me know if you have any questions):
version: '2'
services:
  frontend-container:
    image: frontend:latest
    links:
      - backend-container
    environment:
      - DEBUG=True
      - BASE_HOST=http://backend-container:8000/
    restart: always
  backend-container:
    image: nodejs-backend:latest
    links:
      - mongodb
    environment:
      - NODE_ENV=production
      - BASE_HOST=http://django-container:8000/
    restart: always
  mongodb:
    image: mongo:latest
    container_name: "mongodb"
    environment:
      - MONGO_DATA_DIR=/data/db
      - MONGO_LOG_DIR=/dev/null
    volumes:
      - ./data/db:/data/db
    command: mongod --smallfiles --logpath=/dev/null
  nginx-container:
    image: nginx-container-custom-config:latest
    links:
      - frontend-container
    ports:
      - "80:80"

Related

does celery docker container have to copy the same files from django container build?

Here is the raw question:
I was wondering if I can run the command from the celery container using another container's data (like the django container), since these containers are in the same network of containers on the server, or do I have to duplicate the data from the django project to every container, or is there another way?
Here is the question with explanation:
I am new to this and I tried to make a docker-compose file for a Django REST project with Celery, RabbitMQ, and PostgreSQL.
I followed a bunch of tutorials and managed to make it work: the celery container uses a shared volume from Django to start the worker (and likewise the celery beat container). Here is the related code from docker-compose (edited: following a suggestion, I pasted the code instead of a picture):
version: '3.8'
services:
  api:
    build:
      context: ./youdeal_djangopart
      dockerfile: Dockerfile.prod
    container_name: 'prod-djbackend'
    image: youdeal_djangopart-prod:0.63
    restart: unless-stopped
    expose:
      - '8000'
    env_file: .env
    volumes:
      - static-data:/static
      - media-data:/youdeal_djangopart/media
      - api_vol:/youdeal_djangopart
    environment:
      - "CELERY_BROKER_URL=amqp://guest:guest@rabbitmq:5672//"
    depends_on:
      - db
      - rabbitmq
  celeryworker:
    container_name: celeryworker
    image: celeryworker:0.51
    build:
      context: ./youdeal_djangopart
      dockerfile: Dockerfile.celery.prod
    env_file: .env
    volumes:
      - ./:/api_vol/
    links:
      - db
      - rabbitmq
      - api
    depends_on:
      - rabbitmq
    environment:
      - "CELERY_BROKER_URL=amqp://guest:guest@rabbitmq:5672//"
volumes:
  api_vol:
Now here is the problem: when I want to deploy the project, the server I am using (https://www.arvancloud.com/en) doesn't really allow shared volumes between containers, and support doesn't answer well in this regard. I was wondering if I can run the command from the celery container using another container's data (like the django container), since these containers are in the same network of containers, or do I have to duplicate the data from the django project to every container, or is there another way?
I found this and some other related topics, but I couldn't make it work.
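For what it's worth, the usual way around a host that forbids shared volumes is to stop sharing at runtime and instead bake the project code into every image that needs it at build time. Below is a sketch of what Dockerfile.celery.prod could look like, assuming the same ./youdeal_djangopart build context; the base image, requirements file, and the Celery app name youdeal are all placeholders:

# Sketch: copy the Django project into the worker image instead of
# mounting it as a shared volume at runtime
FROM python:3.10-slim

WORKDIR /youdeal_djangopart
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# the worker gets its own baked-in copy of the code
COPY . .

# "youdeal" is a placeholder for the actual Celery app module
CMD ["celery", "-A", "youdeal", "worker", "--loglevel=info"]

With the code baked in, the ./:/api_vol/ bind mount in the celeryworker service can simply be dropped.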

Portainer stacks and command line arguments

I have a Portainer stack running one container. Let's use microbin as an example.
The docker-compose yaml looks like this:
---
version: "3"
services:
  paste:
    image: danielszabo99/microbin:latest
    container_name: microbin
    restart: always
    ports:
      - "8525:8080"
    volumes:
      - /mnt/docker_volumes/microbin-data:/app/pasta_data
This particular container is hosted on Docker Hub, and the maintainer provides examples of command line arguments that can be appended to the container's startup command to activate various features easily. One example would be:
--no-listing
Disables the /pastalist endpoint, essentially making all pastas private.
So this brings me to my issue. I don't want to maintain my own custom dockerfile, and in the past I have always inserted environment variables into the docker-compose yaml to call features like this.
An example would be like this: I have a stack running for Authentik (an SSO/SAML/IdP gateway with a pretty web interface). You can see the environment: section and the variables I am calling.
server:
  image: ${AUTHENTIK_IMAGE:-ghcr.io/goauthentik/server}:${AUTHENTIK_TAG:-2022.5.3}
  restart: unless-stopped
  command: server
  environment:
    AUTHENTIK_REDIS__HOST: redis
    AUTHENTIK_POSTGRESQL__HOST: postgresql
    AUTHENTIK_POSTGRESQL__USER: ${PG_USER:-authentik}
    AUTHENTIK_POSTGRESQL__NAME: ${PG_DB:-authentik}
    AUTHENTIK_POSTGRESQL__PASSWORD: ${PG_PASS}
    AUTHENTIK_ERROR_REPORTING__ENABLED: "true"
    # WORKERS: 2
  volumes:
    - ./media:/media
    - ./custom-templates:/templates
    - geoip:/geoip
  env_file:
    - stack.env
So - not knowing how the development side of making these containers and hosting them on docker-hub goes... is there a way for me to use these command line arguments for microbin as environment variables in my docker-compose yaml / stack configuration file, or am I going to have to wait on the maintainer to implement this as a feature?
Thanks for your help in advance.
You can pass command line arguments in your docker-compose.yml file using the command attribute. That assumes, of course, that the process started within the Docker image can deal with them, but that seems to be the case for your image and should generally be the case.
version: "3"
services:
paste:
image: danielszabo99/microbin:latest
container_name: microbin
restart: always
ports:
- "8525:8080"
volumes:
- /mnt/docker_volumes/microbin-data:/app/pasta_data
command: my command line --args here
See Docker Compose Reference - command for details.

Docker containers are recreated instead of creating new instance [duplicate]

This question already has an answer here:
Docker is not creating new container but recreates running one
(1 answer)
Closed 1 year ago.
I wanted to use docker-compose to spin up new instances of my containers, but with slightly different parameters, so I essentially copied the entire project folder, made changes to my docker-compose file, and ran docker-compose up --build. But no matter which project folder I run that in, it only recreates the containers rather than spinning up new ones.
Below is my compose file. In one project folder it's this, and in the other I changed container_name to app-test-client and app-test-api as well as changing the ports (e.g. 8080:80). So why does it recreate instead of spinning up new containers? I want to see both app-client and app-test-client running.
version: '3.2'
services:
  client:
    build:
      context: ./client
    container_name: app-client
    ports:
      - '80:80'
      - '5432:5432'
      - '443:443'
    links:
      - api
  api:
    build:
      context: ./api
    container_name: app-api
    volumes:
      - ~/.ssh:/root/.ssh
    environment:
      # read from ./.env file if it exists
      - EDR_ENVIRONMENT=${EDR_ENV}
      - SAS_ENVIRONMENT=${SAS_ENV}
    command: ['node', '.']
The name is based on the service name, not the container name.
version: '3.2'
services:
  client-test:
    ...
    links:
      - api-test
  api-test:
    ...
You can also pass the -p flag to change the project name.
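For example, bringing the copied folder up under a different project name creates a second, independently named set of containers and networks:
$ docker-compose -p app-test up --build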

Publish multiple images on hub.docker.com in a single repository

I am new to Docker and this is giving me a headache. I finished developing a site in Magento, linking multiple images using docker-compose.yml.
Here is my docker-compose.yml
version: '3'
services:
  web:
    image: webdevops/php-apache-dev:7.1
    container_name: web
    restart: always
    user: application
    environment:
      - WEB_ALIAS_DOMAIN=local.domain.com
      - WEB_DOCUMENT_ROOT=/app/pub
      - PHP_DATE_TIMEZONE=EST
      - PHP_DISPLAY_ERRORS=1
      - PHP_MEMORY_LIMIT=2048M
      - PHP_MAX_EXECUTION_TIME=300
      - PHP_POST_MAX_SIZE=500M
      - PHP_UPLOAD_MAX_FILESIZE=1024M
    volumes:
      - "./:/app:cached"
    ports:
      - "80:80"
      - "443:443"
      - "32823:22"
    links:
      - mysql
  mysql:
    image: mariadb:10
    container_name: mysql
    restart: always
    ports:
      - "52000:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=magento
    volumes:
      - db-data:/var/lib/mysql
  phpmyadmin:
    container_name: phpmyadmin
    restart: always
    image: phpmyadmin/phpmyadmin:latest
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - PMA_USER=root
      - PMA_PASSWORD=root
    ports:
      - "8080:80"
    links:
      - mysql:db
    depends_on:
      - mysql
volumes:
  db-data:
    external: false
Then I run docker-compose up -d --build, and I have 3 images and 3 containers running on my local machine.
I want to publish these images on hub.docker.com so anyone can download them and get all the containers running.
Also, is there a way to add a MySQL DB to the image, so anyone can have the same running website I had locally?
Remember that the only thing you can publish on Docker Hub is Docker images; you can't publish containers, volumes, Docker Compose YAML files, or other artifacts. Since the YAML file is a fairly straightforward text file it's very common to publish that on GitHub, along with a README file explaining how to use it.
You don't need to push the phpmyadmin/phpmyadmin or mariadb images because those are standard Docker Hub images, so you only need to push your custom image. I would highly recommend removing the volumes: entry that mounts your local development tree over the image contents, to validate that the image actually has what you expect.
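A minimal push sequence, assuming a hypothetical Docker Hub account yourhubuser and a custom image built for the web service, would look something like:

$ docker build -t yourhubuser/magento-web:latest .
$ docker login
$ docker push yourhubuser/magento-web:latest

Anyone can then reference yourhubuser/magento-web:latest in the image: field of their own compose file.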
Is there a way to add mysql DB to the image
No. The various standard Docker database images are built in a way that it is extremely difficult to build an image containing prepopulated data. Wordpress image with mysql data has some good discussion on the topic, and MySQL Docker container is not saving data to new image has some good analysis in the question proper.
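What you can do instead is ship a SQL dump next to the compose file: the stock mariadb image runs any scripts it finds in /docker-entrypoint-initdb.d the first time it starts with an empty data volume. A sketch, where magento-dump.sql is a placeholder for your exported database:

mysql:
  image: mariadb:10
  environment:
    - MYSQL_ROOT_PASSWORD=root
    - MYSQL_DATABASE=magento
  volumes:
    - db-data:/var/lib/mysql
    # runs only on first start, while /var/lib/mysql is still empty
    - ./magento-dump.sql:/docker-entrypoint-initdb.d/magento-dump.sql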

docker - multiple databases on local

I have 2 applications that are separate codebases, and they each have their own database on the same db server instance.
I am trying to replicate this in docker, locally on my laptop. I want to be able to have both apps use the same database instance.
I would like
both apps to start in docker at the same time
both apps to be able to access the database on localhost
the database data is persisted
be able to view the data in the database using an IDE on localhost
So each of my apps has its own dockerfile and docker-compose file.
On app1, I start the docker instance of the app which is tied to the database. It all starts fine.
When I try to start app2, I get the following error:
ERROR: for app2_mssql_1 Cannot start service mssql: driver failed programming external connectivity on endpoint app2_mssql_1 (12d550c8f032ccdbe67e02445a0b87bff2b2306d03da1d14ad5369472a200620): Bind for 0.0.0.0:1433 failed: port is already allocated
How can I have them both running at the same time? Both apps need to be able to access each other's database tables!
Here are the docker-compose.yml files.
app1:
version: "3"
services:
  web:
    build:
      context: .
      args:
    volumes:
      - .:/app
    ports:
      - "3000:3000"
    depends_on:
      - mssql
  mssql:
    image: 'microsoft/mssql-server-linux'
    ports:
      - '1433:1433'
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=P455w0rd!
    volumes:
      - app1_db:/var/lib/mssql/data
volumes:
  app1_db:
and here is app2:
version: "3"
services:
web:
build:
context: .
args:
volumes:
- .:/app
ports:
- "3000:3000"
depends_on:
- mssql
mssql:
image: 'microsoft/mssql-server-linux'
ports:
- '1433:1433'
environment:
- ACCEPT_EULA=Y
- SA_PASSWORD=P455w0rd!
volumes:
- app2_db:/var/lib/mssql/data
volumes:
app2_db:
Should I be using the same volume in each docker-compose file?
I guess the problem is that each app spins up its own db instance, so I end up with two, when in reality I just want one that is used by all my apps?
The ports section in a docker-compose file binds a container port to a host port, which is what causes the port conflict in your case.
You need to remove the ports section from at least one of the compose files. That way, both compose projects can come up, and you can access both apps at the same time. But remember that the two apps will be placed on separate bridge networks.
How docker-compose up works:
Suppose your app is in a directory called myapp, and your docker-compose.yml defines two services, web and db. When you run docker-compose up, the following happens:
A network called myapp_default is created.
A container is created using web's configuration. It joins the network myapp_default under the name web.
A container is created using db's configuration. It joins the network myapp_default under the name db.
If you run the second docker-compose.yml in a different folder, myapp2, then the network will be myapp2_default.
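You can see this by bringing both projects up and listing the networks:

$ (cd myapp && docker-compose up -d)   # creates network myapp_default
$ (cd myapp2 && docker-compose up -d)  # creates network myapp2_default
$ docker network ls

Each project gets its own bridge network, which is why the two stacks are isolated from each other by default.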
Your current configuration creates two volumes, two database containers, and two apps. If you make them run on the same network and run the database as a single container, it will work. You presumably don't want two database containers and two volumes.
Approach 1:
Run everything from a single docker-compose.yml.
version: "3"
services:
app1:
build:
context: .
args:
volumes:
- .:/app # give the path depending up on the docker file of app1.
ports:
- "3030:3000"
depends_on:
- mssql
app2:
build:
context: .
args:
volumes:
- .:/app # give the path depending up on the docker file of app2.
ports:
- "3032:3000"
depends_on:
- mssql
mssql:
image: 'microsoft/mssql-server-linux'
ports:
- '1433:1433'
environment:
- ACCEPT_EULA=Y
- SA_PASSWORD=SqlServer1234!
volumes:
- app_docker_db:/var/lib/mssql/data
volumes:
app_docker_db:
Approach 2:
To isolate things further while still running them as separate compose files, create three compose files that share a network.
docker-compose.yml for the database, with a network:
version: "3"
services:
mssql:
image: 'microsoft/mssql-server-linux'
ports:
- '1433:1433'
environment:
- ACCEPT_EULA=Y
- SA_PASSWORD=SqlServer1234!
volumes:
- app_docker_db:/var/lib/mssql/data
networks:
- test_network
volumes:
app_docker_db
networks:
test_network:
docker-compose.yml for app1:
Remove the database container and add the lines below to your compose file.
version: "3"
services:
app1:
build:
context: .
args:
volumes:
- .:/app # give the path depending up on the docker file of app1.
ports:
- "3030:3000"
networks:
default:
external:
name: my-pre-existing-network
Do the same for the other app by adapting its docker-compose file in the same way.
There are many other options for wiring compose files together; see the Compose documentation on configuring the default network and using a pre-existing network.
You're exposing the same port (1433) twice to the host machine (this is what ports: does). That is not possible, as it would bind the same host port twice; that's what the error message says.
I think the most common way to handle this is to link your databases to your apps (see https://docs.docker.com/compose/compose-file/#links). That way your applications can still reach the databases on their usual port (1433), but the databases are no longer accessible from the host (only from the containers linked to them).
Another error I see in your docker-compose files is that both applications are exposed on the same host port. This is not possible for the same reason. I would suggest changing one of them to "3001:3000", so you can reach that application on host port 3001.
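As a sketch of the linking suggestion, with the ports: mapping dropped from the mssql service, the database stays reachable from the web service over the compose network while host port 1433 is no longer claimed:

mssql:
  image: 'microsoft/mssql-server-linux'
  # no ports: mapping - reachable from the web service as mssql:1433,
  # but not from the host
  environment:
    - ACCEPT_EULA=Y
    - SA_PASSWORD=P455w0rd!
  volumes:
    - app1_db:/var/lib/mssql/data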
