I have this docker-compose file:
services:
  db:
    # We use a mariadb image which supports both amd64 & arm64 architecture
    image: mariadb:10.6.4-focal
    # If you really want to use MySQL, uncomment the following line
    #image: mysql:8.0.27
    command: '--default-authentication-plugin=mysql_native_password'
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=somewordpress
      - MYSQL_DATABASE=wordpress
      - MYSQL_USER=wordpress
      - MYSQL_PASSWORD=wordpress
    expose:
      - 3306
      - 33060
  wordpress:
    image: wordpress:latest
    volumes:
      - wp_data:/var/www/html
    ports:
      - 80:80
    restart: always
    environment:
      - WORDPRESS_DB_HOST=db
      - WORDPRESS_DB_USER=wordpress
      - WORDPRESS_DB_PASSWORD=wordpress
      - WORDPRESS_DB_NAME=wordpress
volumes:
  db_data:
  wp_data:
I run this and install WordPress, but I want to learn to make templates and plugins, so I need to edit the WordPress files. How can I do that?
After starting your containers using the compose file:
List the running containers: docker ps
Check which of the running containers uses the wordpress:latest image and copy its container ID.
Enter the container by running docker exec -it <your-container-id> /bin/sh
Now you have a shell session inside the container and can edit the files with vi (not ideal).
Look up Docker volumes (bind mounts in particular) if you want to edit the files locally and have them mapped into the container, as sketched below.
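For example, a minimal sketch of the wordpress service with a bind mount instead of the wp_data named volume (the ./wordpress host path is an assumption; any local directory works):

services:
  wordpress:
    image: wordpress:latest
    volumes:
      # bind-mount a host directory over the WordPress docroot;
      # the image populates it on first start, and edits made on
      # the host are immediately visible inside the container
      - ./wordpress:/var/www/html

After docker-compose up, you can edit themes and plugins under ./wordpress/wp-content with any local editor.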
I have successfully containerized my basic Yii2 application with Docker and it runs on localhost:8000. However, I cannot use the app effectively, as most of its data are stored in migration files. Is there a way I could run the migrations inside Docker after starting it (or during execution)?
This is my docker-compose file:
version: '2'
services:
  php:
    image: yiisoftware/yii2-php:7.1-apache
    volumes:
      - ~/.composer-docker/cache:/root/.composer/cache:delegated
      - ./:/app:delegated
    ports:
      - '8000:80'
    networks:
      - my-network
  db:
    image: mysql:5.7
    restart: always
    environment:
      - MYSQL_DATABASE=my-db
      - MYSQL_PASSWORD=password
      - MYSQL_ROOT_PASSWORD=password
    ports:
      - '3306:3306'
    expose:
      - '3306'
    volumes:
      - mydb:/var/lib/mysql
    networks:
      - my-network
  memcached:
    container_name: memcached
    image: memcached:latest
    ports:
      - "0.0.0.0:11211:11211"
volumes:
  restatdb:
networks:
  my-network:
    driver: bridge
and my Dockerfile:
FROM alpine:3.4
ADD . /
COPY ./config/web.php ./config/web.php
COPY . /var/www/html
# Let docker create a volume for the session dir.
# This keeps the session files even if the container is rebuilt.
VOLUME /var/www/html/var/sessions
It is possible to run yii commands in Docker. First, let the yii2 container run in the background or in another terminal tab. The yii commands can then be run using docker exec, which lets us interact with the running container:
sudo docker exec -i <container-ID> php yii migrate/up
You can get the container ID using
sudo docker ps
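If you are in the directory that holds the compose file, a docker-compose equivalent is a sketch like this (assuming the service is named php, as in the file above):

# address the service by name instead of looking up the container ID
sudo docker-compose exec php php yii migrate/up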
I have a Docker image on a GitLab registry.
When I run (after logging in on the target machine)
docker run -d -p 8081:8080/tcp gitlab.somedomain.com:5050/root/app
the Laravel app is available, running, and reachable. Things like php artisan config:clear work, and when I enter the container everything looks fine.
But I don't have any services running alongside it. So I had the idea to create a YAML file to run with docker-compose to set things up, docker-compose-gitlab.yml:
version: '3'
services:
  mysql:
    image: mysql:5.7
    container_name: my-mysql
    environment:
      - MYSQL_ROOT_PASSWORD=***
      - MYSQL_DATABASE=dbname
      - MYSQL_USER=username
      - MYSQL_PASSWORD=***
    volumes:
      - ./data/mysql:/var/lib/mysql
    ports:
      - "3307:3306"
  application:
    image: gitlab.somedomain.com:5050/root/app:latest
    build:
      context: .
      dockerfile: ./Dockerfile
    container_name: my-app
    ports:
      - "8081:8080"
    volumes:
      - .:/application
    env_file: .env.docker
    working_dir: /application
    depends_on:
      - mysql
    links:
      - mysql
Calling docker-compose --verbose -f docker-compose-gitlab.yml up shows me that the mysql service is created and working; the app also seems to be created but then fails, exiting with code 0 and no further message.
If I add commands to my YAML like php artisan config:clear, the error gets even less clear: it says it cannot find artisan, as if the command were executed outside the container, exiting with code 1. (artisan is a helper executed via php.)
When I call docker-compose with -d and then run docker ps, I can only see mysql running, but not the app.
When I use both strategies, the problem is that the two containers do not share a common network and so cannot work together.
What did I miss? Is this the wrong strategy?
The problem is that I had left over a volume directive which overwrites my entire application with an empty directory.
You can just leave that out.
version: '3'
services:
  mysql:
    image: mysql:5.7
    container_name: my-mysql
    environment:
      - MYSQL_ROOT_PASSWORD=***
      - MYSQL_DATABASE=dbname
      - MYSQL_USER=username
      - MYSQL_PASSWORD=***
    volumes:
      - ./data/mysql:/var/lib/mysql
    ports:
      - "3307:3306"
  application:
    image: gitlab.somedomain.com:5050/root/app:latest
    build:
      context: .
      dockerfile: ./Dockerfile
    container_name: my-app
    ports:
      - "8081:8080"
    ## volumes:
    ##   - .:/application  ## this would overwrite the app
    env_file: .env.docker
    working_dir: /application
    depends_on:
      - mysql
    links:
      - mysql
You can debug the containers' networking by listing the networks with docker network ls,
then inspecting the compose network with docker network inspect <ComposeNetworkID>.
Once you are sure that your services are not in the same network, remove your containers and recreate them with docker-compose -f docker-compose-gitlab.yml up.
If you notice they are in the same network, try using the container name instead of localhost to reach the other container.
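For example, a quick sketch for checking which containers are attached to a network (the network name is a placeholder; compose derives it from the project directory name):

docker network ls
# print the names of all containers attached to a given network
docker network inspect <network-name> --format '{{range .Containers}}{{.Name}} {{end}}'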
I have a project which runs on OroCommerce within 2 Docker containers:
web
database
I tried to launch the project without containers on Apache, but I always get problems with extensions and other stuff, so now I have 2 tasks.
I need to launch the project locally and migrate it to another live server. What are my options? (I really don't know much about Docker.) Is it possible to download ready-made containers or images and run them locally? Where should I look to build a picture of the steps for this task?
I managed to pull the project from Git and tried to docker-compose it, but it seems it is loading images from GitLab or something.
I will leave the docker-compose file below:
version: '3.6'
services:
  database:
    image: registry.gitlab.com/ubiedigital/kauno-grudai/server/database:latest
    container_name: database
    networks:
      - kggroup_default
    ports:
      - 3306:3306
    volumes:
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
      - /var/lib/mysql:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_USER: ${MYSQL_USER}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
      MYSQL_DATABASE: ${MYSQL_DATABASE}
    restart: always
  web-stage:
    image: registry.gitlab.com/ubiedigital/kauno-grudai/server/web:latest
    container_name: web-stage
    networks:
      - kggroup_default
    ports:
      - 8000:80
      - 4434:443
    volumes:
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
      - ./html-stage/crm:/var/www/html
    environment:
      - SERVER_NAME=${SERVER_NAME}
      - LD_LIBRARY_PATH=/usr/lib/oracle/12.2/client64/lib
      - ORACLE_HOME=/usr/lib/oracle/12.2/client64
    depends_on:
      - database
    restart: always
  web-master:
    image: registry.gitlab.com/ubiedigital/kauno-grudai/server/web:latest
    container_name: web-master
    networks:
      - kggroup_default
    ports:
      - 8080:80
      - 4433:443
    volumes:
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
      - ./html-master/crm:/var/www/html
    environment:
      - SERVER_NAME=${SERVER_NAME}
      - LD_LIBRARY_PATH=/usr/lib/oracle/12.2/client64/lib
      - ORACLE_HOME=/usr/lib/oracle/12.2/client64
    depends_on:
      - database
    restart: always
networks:
  kggroup_default:
    name: kggroup_default
Containers are not downloaded.
Docker images live on an image registry server. This server can be registry.gitlab.com, Docker Hub, or whatever.
Containers are then instances of those Docker images.
So, when you run docker-compose up -d, you automatically download those images and create containers locally.
To install it on other servers, you just have to execute the deploy command again (docker-compose up -d with whatever parameters you need, such as environment settings) on those servers.
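A sketch of such a deploy on a fresh server, assuming the compose file plus a .env file with the referenced variables (MYSQL_*, SERVER_NAME) sit in the current directory:

docker login registry.gitlab.com   # authenticate against the private registry
docker-compose pull                # download the images named in the compose file
docker-compose up -d               # create and start the containers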
You can export and import your Docker image.
First, stop your container, then find the image of the container that you would like to move:
$ docker ps -a
then save that image to a tar archive (docker save works on images, not containers):
$ docker save myimagename > /path/to/folder/myimagename.tar
Copy myimagename.tar to your new location,
then load it:
$ docker load < /path/to/folder/myimagename.tar
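If you instead want to snapshot a specific container's filesystem, including changes made since it started, the analogous pair is docker export / docker import (the container and tag names here are placeholders):

$ docker export mycontainer > /path/to/folder/mycontainer.tar
$ docker import /path/to/folder/mycontainer.tar myimage:imported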
I'm running this on Debian 9.
I'm using sudo docker volume create db to create a volume used in my docker-compose.yml, but I still get the error db_1_d89b59353579 | mkdir: cannot create directory '/var/lib/mysql': Permission denied.
How can I set permissions for the user using that volume? And how do I find out which user that is?
Docker-Compose:
version: '2'
volumes:
  nextcloud:
  db:
services:
  db:
    image: mariadb
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
    restart: always
    volumes:
      - db:/var/lib/mysql:z
    environment:
      - MYSQL_ROOT_PASSWORD=***
      - MYSQL_PASSWORD=***
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
  app:
    image: nextcloud
    ports:
      - 8080:80
    links:
      - db
    volumes:
      - nextcloud:/var/www/html
    restart: always
I have an install.sh file where I run:
...
sudo docker volume create db
sudo docker-compose build
docker-compose up -d
Try to first change the mounts to local folders and see if that fixes your issue:
version: '2'
volumes:
  nextcloud:
  db:
services:
  db:
    ...
    volumes:
      - ./db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=***
      - MYSQL_PASSWORD=***
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
  app:
    ...
    volumes:
      - ./nextcloud:/var/www/html
    restart: always
If that fixes it, then check that the volumes are correctly removed by docker-compose down. Run docker volume ls. If they still persist, remove them by hand and rerun your containers with the volumes, as sketched below.
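A sketch of the manual cleanup (the exact volume names are an assumption; docker-compose usually prefixes them with the project name, so check the listing first):

docker volume ls
docker volume rm <project>_db <project>_nextcloud
# or tear the stack down together with its named volumes in one go:
docker-compose down -v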
Regarding the difference between mounting to a volume (db:/var/lib/mysql) and mounting to a host path (./db:/var/lib/mysql):
In the first case it is a volume managed by Docker. It is meant for persistence, but getting to the files is a bit trickier. In the second case it is a path on the host, which makes it a lot easier to retrieve persisted files. I recommend running docker-compose config for both situations and seeing the difference in how docker-compose internally transforms the statement.
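If you do need to get at the files of a Docker-managed volume, you can look up where they live on disk (the volume name is a placeholder):

docker volume inspect db --format '{{.Mountpoint}}'
# typically prints something like /var/lib/docker/volumes/db/_data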
I have successfully created Docker containers and they work when started using:
sudo docker-compose up -d
The yml is as follows:
services:
  nginx:
    build: ./nginx
    restart: always
    ports:
      - "80:80"
    volumes:
      - ./static:/static
    links:
      - node:node
  node:
    build: ./node
    restart: always
    ports:
      - "8080:8080"
    volumes:
      - ./node:/usr/src/app
      - /usr/src/app/node_modules
Am I supposed to create a service for this? Reading the documentation, I thought that the containers would reload if restart was set to always.
FYI: the yml is inside a projects directory in the home of the base user, ubuntu.
I tried checking for solutions on Stack Overflow but could not find anything appropriate. Thanks.