Publish multiple images on hub.docker.com in a single repository - docker

I am new to Docker and this is giving me a headache. I finished developing a Magento site that links multiple images together using docker-compose.yml.
Here is my docker-compose.yml
version: '3'
services:
  web:
    image: webdevops/php-apache-dev:7.1
    container_name: web
    restart: always
    user: application
    environment:
      - WEB_ALIAS_DOMAIN=local.domain.com
      - WEB_DOCUMENT_ROOT=/app/pub
      - PHP_DATE_TIMEZONE=EST
      - PHP_DISPLAY_ERRORS=1
      - PHP_MEMORY_LIMIT=2048M
      - PHP_MAX_EXECUTION_TIME=300
      - PHP_POST_MAX_SIZE=500M
      - PHP_UPLOAD_MAX_FILESIZE=1024M
    volumes:
      - "./:/app:cached"
    ports:
      - "80:80"
      - "443:443"
      - "32823:22"
    links:
      - mysql
  mysql:
    image: mariadb:10
    container_name: mysql
    restart: always
    ports:
      - "52000:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=magento
    volumes:
      - db-data:/var/lib/mysql
  phpmyadmin:
    container_name: phpmyadmin
    restart: always
    image: phpmyadmin/phpmyadmin:latest
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - PMA_USER=root
      - PMA_PASSWORD=root
    ports:
      - "8080:80"
    links:
      - mysql:db
    depends_on:
      - mysql
volumes:
  db-data:
    external: false
Then I run docker-compose up -d --build, and I have 3 images and 3 containers running on my local machine.
I want to publish these images on hub.docker.com so anyone can download them and get all the containers running.
Also, is there a way to add a MySQL DB to the image, so anyone can have the same running website that I had on my local machine?

Remember that the only thing you can publish on Docker Hub is Docker images; you can't publish containers, volumes, Docker Compose YAML files, or other artifacts. Since the YAML file is a fairly straightforward text file it's very common to publish that on GitHub, along with a README file explaining how to use it.
You don't need to push the phpmyadmin/phpmyadmin or mariadb images because those are standard Docker Hub images, so you only need to push your custom image. I would highly recommend removing the volumes: entry that mounts your local development tree over the image contents, to validate that the image actually contains what you expect.
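If you do bake your application into an image of your own (via a Dockerfile that COPYs the code in, rather than relying on the volumes: mount), publishing it is just a tag-and-push. A minimal sketch, assuming a Docker Hub username of myuser and an image name of magento-web (both placeholders):
$ docker build -t myuser/magento-web:1.0 .
$ docker login
$ docker push myuser/magento-web:1.0
Anyone can then point the web service's image: line at myuser/magento-web:1.0 and docker-compose will pull it from Docker Hub automatically.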
Is there a way to add a MySQL DB to the image?
No. The standard Docker database images are built in a way that makes it extremely difficult to produce an image containing prepopulated data. "Wordpress image with mysql data" has some good discussion on the topic, and "MySQL Docker container is not saving data to new image" has some good analysis in the question proper.
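One common alternative, not covered in those threads: the standard mysql and mariadb images execute any *.sql files placed in /docker-entrypoint-initdb.d the first time they start with an empty data directory, so you can ship a database dump alongside the compose file. A sketch (the dump filename is a placeholder):
  mysql:
    image: mariadb:10
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=magento
    volumes:
      - db-data:/var/lib/mysql
      # imported once, only when /var/lib/mysql is empty on first start
      - ./magento-dump.sql:/docker-entrypoint-initdb.d/magento-dump.sql:ro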

Related

Is it possible to download a working image from a live server and run it locally?

I have a project which runs on OroCommerce within 2 docker containers:
web
database
I tried to launch the project without containers, on Apache, but I always ran into problems with extensions and other stuff, so now I have 2 tasks.
I need to launch the project locally and migrate it to another live server. What are my options? (I really don't know much about Docker.) Is it possible to download the ready-built containers or images and run them locally? Where should I look to build up a picture of the steps for this task?
I managed to pull the project from git and tried to docker-compose it, but it seems to be loading images from GitLab or somewhere.
I will leave the docker-compose file below.
version: '3.6'
services:
  database:
    image: registry.gitlab.com/ubiedigital/kauno-grudai/server/database:latest
    container_name: database
    networks:
      - kggroup_default
    ports:
      - 3306:3306
    volumes:
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
      - /var/lib/mysql:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_USER: ${MYSQL_USER}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
      MYSQL_DATABASE: ${MYSQL_DATABASE}
    restart: always
  web-stage:
    image: registry.gitlab.com/ubiedigital/kauno-grudai/server/web:latest
    container_name: web-stage
    networks:
      - kggroup_default
    ports:
      - 8000:80
      - 4434:443
    volumes:
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
      - ./html-stage/crm:/var/www/html
    environment:
      - SERVER_NAME=${SERVER_NAME}
      - LD_LIBRARY_PATH=/usr/lib/oracle/12.2/client64/lib
      - ORACLE_HOME=/usr/lib/oracle/12.2/client64
    depends_on:
      - database
    restart: always
  web-master:
    image: registry.gitlab.com/ubiedigital/kauno-grudai/server/web:latest
    container_name: web-master
    networks:
      - kggroup_default
    ports:
      - 8080:80
      - 4433:443
    volumes:
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
      - ./html-master/crm:/var/www/html
    environment:
      - SERVER_NAME=${SERVER_NAME}
      - LD_LIBRARY_PATH=/usr/lib/oracle/12.2/client64/lib
      - ORACLE_HOME=/usr/lib/oracle/12.2/client64
    depends_on:
      - database
    restart: always
networks:
  kggroup_default:
    name: kggroup_default
Containers are not downloaded.
Docker images live on an image registry server. This server can be registry.gitlab.com, Docker Hub, or whatever else.
Containers are then instances of those docker images.
So, when you do docker-compose up -d, you're automatically downloading those images and creating containers on your local machine.
In order to install the project on other servers, you just have to execute the deploy command again (docker-compose up -d with whatever parameters you need, such as environment settings) on those servers.
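Concretely, on the new server that amounts to something like this sketch (assuming you have credentials for the registry.gitlab.com registry referenced in the compose file above):
$ docker login registry.gitlab.com
$ docker-compose pull      # download the images named in docker-compose.yml
$ docker-compose up -d     # create and start the containers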
You can export and import your docker image.
First, stop your container, then find the name of the image that you would like to move:
$ docker ps -a
then save it to a tar archive (note that docker save operates on images, not containers):
$ docker save myimagename > /path/to/folder/myimagename.tar
Copy myimagename.tar to your new location, then load it there:
$ docker load < /path/to/folder/myimagename.tar
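If you want to capture a running container's current filesystem first, one option (an aside, not part of the answer above) is docker commit, which turns the container into an image you can then save:
$ docker commit mycontainername mysnapshotimage
$ docker save mysnapshotimage > /path/to/folder/mysnapshotimage.tar
Keep in mind that data stored in volumes (such as /var/lib/mysql above) is not captured by commit or save and has to be moved separately.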

Docker shared volume is not readable for a container after changing volume contents

I have the following compose file, where I'm sharing some generated HTML data from a Jenkins container to the host drive, and reading this data from the host drive with an Nginx container. I'm using Ubuntu Server 18.04 on AWS.
The problem is that I can read the contents of jenkins/workspace/allure-report only once. After the HTML data is updated, it becomes inaccessible to Nginx, which then throws a 403 status code.
I tried all the possible solutions, but nothing works. The only ugly workaround is to restart the Nginx container after every update of the HTML data. I don't like this approach and am looking for some built-in Docker feature to resolve it.
What didn't help: sharing a volume directly between the containers without going through the docker host drive, using the rslave option, and using a separate docker volume as a buffer between the two containers... I believe it should be much easier!
version: '2'
services:
  jenkins:
    container_name: jenkins
    image: "jenkins/jenkins"
    ports:
      - "8088:8080"
      - "50000:50000"
    env_file:
      - variables.env
    volumes:
      - ./jenkins:/var/jenkins_home
  selenoid:
    container_name: selenoid
    network_mode: bridge
    image: "aerokube/selenoid"
    # default directory for browsers.json is /etc/selenoid/
    command: -listen :4444 -conf /etc/selenoid/browsers.json -video-output-dir /opt/selenoid/video/ -timeout 3m
    ports:
      - "4444:4444"
    env_file:
      - variables.env
    volumes:
      - $PWD:/etc/selenoid/ # assumed current dir contains browsers.json
      - /var/run/docker.sock:/var/run/docker.sock
  selenoid-ui:
    container_name: selenoid-ui
    network_mode: bridge
    image: "aerokube/selenoid-ui"
    links:
      - selenoid
    ports:
      - "8080:8080"
    env_file:
      - variables.env
    command: ["--selenoid-uri", "http://selenoid:4444"]
  nginx:
    container_name: nginx
    image: "nginx"
    ports:
      - "80:80"
    volumes:
      - ./jenkins/workspace/allure-report:/usr/share/nginx/html:ro,rslave
Found the solution: the easiest way to get access to the dynamic data is to use volumes_from in the container you want to read from.
When I configured my compose file like that, I faced another issue: the 403 status was gone, but the data was static. That was my own fault, though; I didn't use the cp -r command correctly, so my data had been copied only once.
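For reference, a minimal sketch of that volumes_from approach (supported in compose file format 2, which this file already uses; the path Nginx serves from is an assumption):
version: '2'
services:
  jenkins:
    image: "jenkins/jenkins"
    volumes:
      - ./jenkins:/var/jenkins_home
  nginx:
    image: "nginx"
    ports:
      - "80:80"
    # inherit every volume of the jenkins service, read-only;
    # point the nginx config at /var/jenkins_home/workspace/allure-report
    volumes_from:
      - jenkins:ro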

Docker-volume of webroot not editable on host machine

I have a docker-compose LAMP stack composed of three services: a web server, PHP and MySQL.
The apache2 webroot inside the container is shared with my local machine using a volume, like so:
volumes:
  - ./public_html:/usr/local/apache2/htdocs
When the stack is running, though, I can't edit files inside the shared volume, since my local user is different from the user inside the apache2 container. Additionally, the installer of my CMS (ProcessWire) is unable to acquire permissions to the required install directories.
The Apache container is based on the Alpine 2.4.35 image.
I've built my docker-compose file according to this tutorial:
https://medium.com/@thivi/creating-a-lamp-stack-using-docker-compose-13ca4e3950e1
Below I have attached my docker-compose.yml.
version: '3.7'
services:
  apache:
    build: './apache'
    restart: always
    ports:
      - 80:80
      - 443:443
    networks:
      - frontend
      - backend
    volumes:
      - ./public_html:/usr/local/apache2/htdocs
      - ./cert/:/usr/local/apache2/cert/
    depends_on:
      - php
      - mysql
  php:
    build: './php'
    restart: always
    networks:
      - backend
    volumes:
      - ./public_html:/usr/local/apache2/htdocs
      - ./tmp:/usr/local/tmp
  mysql:
    build: './mysql'
    restart: always
    ports:
      - 3306:3306
    expose:
      - 3306
    networks:
      - backend
    volumes:
      - ./database:/var/lib/mysql
networks:
  backend:
  frontend:
Is there any way to fix this issue? I'd be grateful for answers; I've been dealing with this issue for the past 2 days without getting anywhere, and I'm also kind of surprised that such an essential feature as directory sharing is so complicated.
/edit:
I've also noticed something interesting: when I execute a bash inside the apache container, the ownership of Apache's document root is set to nobody:nobody, which probably also isn't right.
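For what it's worth, a quick diagnostic sketch (the service name is taken from the compose file above) to compare the numeric UID/GID of your local user with the owner of the webroot inside the container:
$ id -u && id -g
$ docker-compose exec apache ls -ln /usr/local/apache2/htdocs
If the numbers don't match, that mismatch is the usual reason files written by one side aren't editable by the other.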

How do I retain my content types within a dockerized strapi

I've been using Strapi for Docker (https://github.com/strapi/strapi-docker), but whenever I rebuild my container all the data disappears. I can still see it in the database, but the admin isn't recognizing it.
I tried recreating the content type, and then the records from the database appeared; but when I rebuilt the container again, the content type disappeared.
Where are content definitions stored? Is this a bug in the app? (I think strapi-docker is using an alpha release.)
How do I get Strapi to retain my content definitions in the database, so I can use a stateless container?
UPDATE
I tried looking at the attached volume:
api:
  build: .
  env_file: './dev.env'
  ports:
    - 1337:1337
  volumes:
    - ./strapi-app:/usr/src/api/strapi-app
    #- /usr/src/api/strapi-app/node_modules
  restart: always
But there's nothing in it -
Aidans-MacBook:strapi-docker aidan$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
02b098286ada strapi-docker_api "docker-entrypoint.s…" 24 minutes ago Up 5 minutes (healthy) 0.0.0.0:1337->1337/tcp strapi-docker_api_1
Aidans-MacBook:strapi-docker aidan$ docker inspect -f "{{.Mounts}}" 02b098286ada
[{bind /Users/aidan/Documents/Code/beefbook/strapi-docker/strapi-app /usr/src/api/strapi-app rw true rprivate}]
Aidans-MacBook:strapi-docker aidan$ ls /Users/aidan/Documents/Code/beefbook/strapi-docker/strapi-app
Aidans-MacBook:strapi-docker aidan$
You need to mount the directories to keep the files persistent:
- ./strapi-app:/usr/src/api/strapi-app for the application
- ./db:/data/db for the DB
version: '3'
services:
  api:
    build: .
    image: strapi/strapi
    environment:
      - APP_NAME=strapi-app
      - DATABASE_CLIENT=mongo
      - DATABASE_HOST=db
      - DATABASE_PORT=27017
      - DATABASE_NAME=strapi
      - DATABASE_USERNAME=
      - DATABASE_PASSWORD=
      - DATABASE_SSL=false
      - DATABASE_AUTHENTICATION_DATABASE=strapi
      - HOST=localhost
    ports:
      - 1337:1337
    volumes:
      - ./strapi-app:/usr/src/api/strapi-app
      #- /usr/src/api/strapi-app/node_modules
    depends_on:
      - db
    restart: always
  db:
    image: mongo
    environment:
      - MONGO_INITDB_DATABASE=strapi
    ports:
      - 27017:27017
    volumes:
      - ./db:/data/db
    restart: always
Run docker-compose up and you will see that the data is now persistent.
Updated:
After investigation by @Aidan: strapi-docker uses the APP_NAME env var (the default is "strapi-app"), so the correct mount is /usr/src/api/beef-content (since, in my case, APP_NAME is beef-content). I'll use that to mount my volume.
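In other words, the host directory you mount has to line up with /usr/src/api/<APP_NAME> inside the container. A sketch with the APP_NAME from this case:
    environment:
      - APP_NAME=beef-content
    volumes:
      # the container path must match the APP_NAME value above
      - ./beef-content:/usr/src/api/beef-content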
I may have a solution with my project "strapidocker-tools":
https://github.com/OliCpg/strapidocker-tools
This will let you back up, move and restore a full dockerised Strapi project.
Be aware that it works with two docker containers named strapi and strapi_db. You should not rename them (I'll change that later on). They get recreated upon restore.
It's still a work in progress and not very elegant at the present time, but it works for me.
Feedback is welcome.

Join docker images in a single container [duplicate]

This question already has an answer here:
Build a single image based on docker compose containers
(1 answer)
Closed 9 months ago.
I have an application composed of a front end, a back end and a MongoDB database, each dockerized in its own container. When I build them with docker-compose I have as many images as parts in my application (3).
Is there any way to build a single container from these 3 images, and therefore a single image?
Thanks
You can write a Dockerfile if you want to run your application as a single container; it will give you a single image as well.
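A minimal sketch of that approach (all paths and the entry point are assumptions; the database is still best kept as a separate container or external service):
# one image serving both the Node backend and the pre-built frontend assets
FROM node:16
WORKDIR /app
COPY backend/ ./backend/
# serve the built frontend as static files from the backend
COPY frontend/dist/ ./backend/public/
RUN cd backend && npm install --production
EXPOSE 8000
CMD ["node", "backend/server.js"]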
I guess you could do this if you really wanted to, but the preferred way is to use docker-compose for this. I would suggest that you create a docker-compose.yml file that sets up this chain:
nginx -> frontend (possibly with server-side rendering) -> backend -> mongodb
The idea behind docker-compose is to easily get that multi-container application up and running using a docker-compose.yml file; then you can just bring up the application with:
$ docker-compose up
You could set it up with something like this (a hypothetical docker-compose.yml file, but with your correct values it should work; let me know if you have any questions):
version: '2'
services:
  frontend-container:
    image: frontend:latest
    links:
      - backend-container
    environment:
      - DEBUG=True
      - BASE_HOST=http://backend-container:8000/
    restart: always
  backend-container:
    image: nodejs-backend:latest
    links:
      - mongodb
    environment:
      - NODE_ENV=production
      - BASE_HOST=http://django-container:8000/
    restart: always
  mongodb:
    image: mongo:latest
    container_name: "mongodb"
    environment:
      - MONGO_DATA_DIR=/data/db
      - MONGO_LOG_DIR=/dev/null
    volumes:
      - ./data/db:/data/db
    command: mongod --smallfiles --logpath=/dev/null
  nginx-container:
    image: nginx-container-custom-config:latest
    links:
      - frontend-container
    ports:
      - "80:80"
