I have successfully containerized my basic Yii2 application with Docker, and it runs on localhost:8000. However, I cannot use the app effectively because most of its data is stored in migration files. Is there a way to run the migrations inside the container after it starts (or during execution)?
This is my docker-compose file:
version: '2'
services:
  php:
    image: yiisoftware/yii2-php:7.1-apache
    volumes:
      - ~/.composer-docker/cache:/root/.composer/cache:delegated
      - ./:/app:delegated
    ports:
      - '8000:80'
    networks:
      - my-network
  db:
    image: mysql:5.7
    restart: always
    environment:
      - MYSQL_DATABASE=my-db
      - MYSQL_PASSWORD=password
      - MYSQL_ROOT_PASSWORD=password
    ports:
      - '3306:3306'
    expose:
      - '3306'
    volumes:
      - mydb:/var/lib/mysql
    networks:
      - my-network
  memcached:
    container_name: memcached
    image: memcached:latest
    ports:
      - "0.0.0.0:11211:11211"
volumes:
  mydb:
networks:
  my-network:
    driver: bridge
and my Dockerfile:
FROM alpine:3.4
ADD . /
COPY ./config/web.php ./config/web.php
COPY . /var/www/html
# Let docker create a volume for the session dir.
# This keeps the session files even if the container is rebuilt.
VOLUME /var/www/html/var/sessions
It is possible to run Yii commands in Docker. First, let the Yii2 container run in the background or in another terminal tab. Yii commands can then be run via docker exec, which executes them inside the running container:
sudo docker exec -i <container-ID> php yii migrate/up
You can get the container ID using
sudo docker ps
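If you want the migrations applied automatically whenever the stack comes up, one option (a sketch, not part of the original answer) is to override the php service's command so it runs the migrations non-interactively before starting Apache:

services:
  php:
    image: yiisoftware/yii2-php:7.1-apache
    # Assumes the app is mounted at /app (as in the compose file above);
    # apache2-foreground is the default command of the php:*-apache base images.
    command: sh -c "php /app/yii migrate --interactive=0 && apache2-foreground"

Note that the db service may take a few seconds to accept connections on first start, so in practice you may want a retry loop or a restart policy around the migrate call.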
Related
I have this docker-compose file:
services:
  db:
    # We use a mariadb image which supports both amd64 & arm64 architecture
    image: mariadb:10.6.4-focal
    # If you really want to use MySQL, uncomment the following line
    #image: mysql:8.0.27
    command: '--default-authentication-plugin=mysql_native_password'
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=somewordpress
      - MYSQL_DATABASE=wordpress
      - MYSQL_USER=wordpress
      - MYSQL_PASSWORD=wordpress
    expose:
      - 3306
      - 33060
  wordpress:
    image: wordpress:latest
    volumes:
      - wp_data:/var/www/html
    ports:
      - 80:80
    restart: always
    environment:
      - WORDPRESS_DB_HOST=db
      - WORDPRESS_DB_USER=wordpress
      - WORDPRESS_DB_PASSWORD=wordpress
      - WORDPRESS_DB_NAME=wordpress
volumes:
  db_data:
  wp_data:
I run this and install WordPress, but I want to learn to make templates and plugins, so I need to edit the WordPress files. How can I do that?
After starting your containers using the compose file:
List the running containers: docker ps
Check which of the running containers uses the image wordpress:latest and copy its container ID.
Enter the container by running docker exec -it <your-container-id> /bin/sh
Now you have a shell session inside the container and can edit the files with vi (not the most ideal workflow).
Look up Docker volumes if you want to edit the files locally and have them mapped into the container; a sketch follows below.
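A minimal sketch of that volume approach (the local paths ./my-theme and ./my-plugin are assumptions): bind-mount only the pieces you are developing, so the rest of the WordPress install stays in the named volume:

services:
  wordpress:
    image: wordpress:latest
    volumes:
      - wp_data:/var/www/html
      # Hypothetical local directories; edits on the host show up
      # immediately inside the container.
      - ./my-theme:/var/www/html/wp-content/themes/my-theme
      - ./my-plugin:/var/www/html/wp-content/plugins/my-plugin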
I am using docker-compose and here is my docker-compose.yaml file:
version: "3.7"
services:
node:
container_name: my-app
image: my-app
build:
context: ./my-app-directoty
dockerfile: Dockerfile
command: npm run dev
environment:
MONGO_URL: my-database
port: 3000
volumes:
- ./my-app-directory/src:/app/src
- ./my-app-directory/node_modules:/app/node_modules
ports:
- "3000:3000"
networks:
- my-app-network
depends_on:
- my-database
my-database:
container_name: my-database
image: mongo
ports:
- "27017:27017"
networks:
- my-app-network
networks:
my-app-network:
driver: bridge
I expect to find a clean, newly created database each time I run the following commands:
docker-compose build
docker-compose up
But this is not the case. When I bring the containers up with docker-compose up, my database is in the exact state it was in the last time I shut it down with docker-compose down. Since I have not specified a volumes property for my-database, is this normal behaviour? Does it mean that no further action is required to persist the database state? And can I use this in production, if I ever choose to use docker-compose there?
The mongo image defines the following volumes:
/data/configdb
/data/db
So Docker will create and use an anonymous (unnamed) volume for /data/db. If you want a fresh one, use:
docker-compose down -v
docker-compose up -d --build
Or use a bind mount at the volume's location, like:
volumes:
  - ./db:/data/db:rw
and drop your local db directory whenever you want to start over.
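If you instead want the persistence to be explicit (for example before relying on this in production), declaring named volumes makes the behaviour visible in the compose file rather than depending on the image's anonymous volumes. A minimal sketch using the service name from the question:

services:
  my-database:
    image: mongo
    volumes:
      # Named volumes survive docker-compose down (without -v)
      # and are listed by docker volume ls.
      - mongo-data:/data/db
      - mongo-config:/data/configdb
volumes:
  mongo-data:
  mongo-config: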
I am trying to learn Kong using docker-compose. I am able to run kong+konga and create services, but whenever I do docker-compose down and then up again, I lose all my data:
kong:
  container_name: kong
  image: kong:2.1.4-alpine
  restart: unless-stopped
  networks:
    kong-net:
      ipv4_address: 172.1.1.40
  volumes:
    - kong_data:/usr/local/kong/declarative
  environment:
    KONG_DATABASE: postgres
    KONG_PG_HOST: kong-database
    KONG_PG_USER: kong
    KONG_PG_PASSWORD: password
    KONG_ADMIN_LISTEN: "0.0.0.0:8001, 0.0.0.0:8444 ssl"
    KONG_DB_UPDATE_FREQUENCY: 1m
    KONG_PROXY_ACCESS_LOG: /dev/stdout
    KONG_ADMIN_ACCESS_LOG: /dev/stdout
    KONG_PROXY_ERROR_LOG: /dev/stderr
    KONG_ADMIN_ERROR_LOG: /dev/stderr
  depends_on:
    - kong-migration
  ports:
    - "8001:8001"
    - "8444:8444"
    - "8000:8000"
    - "8443:8443"
It looks like the volume mapping is not working. Please help.
If you want to keep data after your Kong docker-compose stack goes down, it is better to run Kong in database mode: create a persistent volume for the database and it will keep your changes.
In the Kong manual you will find that two databases are supported: PostgreSQL and Cassandra.
PostgreSQL is my choice for small projects, as I'm not planning for huge horizontal scaling with a Cassandra database.
As you will find in the manual, starting your project with Docker and a database is very simple, but remember to add a volume to your database service; the sample in the manual has no volume.
For PostgreSQL you can add -v /custom/mount:/var/lib/postgresql/data to the docker run command, or in compose:
volumes:
  postgres-data:
    driver: local

services:
  postgres:
    restart: unless-stopped
    image: postgres:latest
    environment:
      - POSTGRES_USER=your_db_user
      - POSTGRES_DB=kong
      - POSTGRES_PASSWORD=your_db_password
    volumes:
      - postgres-data:/var/lib/postgresql/data
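One thing to double-check: KONG_PG_HOST in the kong service must match the database service's name (the question uses kong-database). A sketch of the relevant environment, assuming the postgres service above:

services:
  kong:
    image: kong:2.1.4-alpine
    environment:
      KONG_DATABASE: postgres
      # Must equal the database service name in this compose file
      KONG_PG_HOST: postgres
      KONG_PG_USER: your_db_user
      KONG_PG_PASSWORD: your_db_password
    depends_on:
      - postgres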
Answer: you should use a Docker volume to persist the data.
As the reference says:
Volumes are the preferred mechanism for persisting data generated by and used by Docker containers.
The first step is to create the volume that your host and Docker container will share:
docker volume create new-volume
The second step is to use that volume in your docker-compose file (in your case).
A single docker compose service with a volume looks like this:
version: "3.9"
services:
frontend:
image: node:lts
volumes:
- myapp:/home/node/app
volumes:
myapp:
On the first invocation of docker-compose up the volume will be created. The same volume will be reused on following invocations.
A volume may be created directly outside of compose with docker volume create and then referenced inside docker-compose.yml as follows:
version: "3.9"
services:
frontend:
image: node:lts
volumes:
- myapp:/home/node/app
volumes:
myapp:
external: true
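With external: true, compose will not create the volume itself; it must already exist, or docker-compose up will fail with an error that the volume was not found. So the sequence is:

docker volume create myapp
docker-compose up -d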
I have a Docker image on a GitLab registry. When I run (after logging in on the target machine):
docker run -d -p 8081:8080/tcp gitlab.somedomain.com:5050/root/app
the Laravel app is available, running, and reachable. Things like php artisan config:clear work, and when I enter the container everything looks fine.
But I don't have any services running, so I had the idea to create a YAML file to run with docker-compose to set things up, in docker-compose-gitlab.yml:
version: '3'
services:
  mysql:
    image: mysql:5.7
    container_name: my-mysql
    environment:
      - MYSQL_ROOT_PASSWORD=***
      - MYSQL_DATABASE=dbname
      - MYSQL_USER=username
      - MYSQL_PASSWORD=***
    volumes:
      - ./data/mysql:/var/lib/mysql
    ports:
      - "3307:3306"
  application:
    image: gitlab.somedomain.com:5050/root/app:latest
    build:
      context: .
      dockerfile: ./Dockerfile
    container_name: my-app
    ports:
      - "8081:8080"
    volumes:
      - .:/application
    env_file: .env.docker
    working_dir: /application
    depends_on:
      - mysql
    links:
      - mysql
Calling docker-compose --verbose -f docker-compose-gitlab.yml up shows me that the mysql service is created and working; the app also seems to be created, but then it fails, exiting with code 0 and no further message.
If I add commands to my YAML like php artisan config:clear, the error gets even less clear: it says it cannot find artisan, and it seems as if the command is executed outside the container, exiting with code 1. (artisan is a helper executed via PHP.)
When I run docker-compose with -d and then do docker ps, I can only see mysql running, but not the app.
When I use both strategies, the problem is that the two containers do not share a common network and so cannot work together.
What did I miss? Is this the wrong strategy?
The problem was that I left over a volume directive, which overwrote my entire application with an empty directory. You can just leave that out:
version: '3'
services:
  mysql:
    image: mysql:5.7
    container_name: my-mysql
    environment:
      - MYSQL_ROOT_PASSWORD=***
      - MYSQL_DATABASE=dbname
      - MYSQL_USER=username
      - MYSQL_PASSWORD=***
    volumes:
      - ./data/mysql:/var/lib/mysql
    ports:
      - "3307:3306"
  application:
    image: gitlab.somedomain.com:5050/root/app:latest
    build:
      context: .
      dockerfile: ./Dockerfile
    container_name: my-app
    ports:
      - "8081:8080"
    ## volumes:
    ##   - .:/application  ## this would overwrite the app
    env_file: .env.docker
    working_dir: /application
    depends_on:
      - mysql
    links:
      - mysql
You can debug the containers' networking by listing the networks with docker network ls, then inspecting the compose network with docker network inspect <ComposeNetworkID>.
Once you are sure that your services are not in the same network, remove your containers and recreate them with docker-compose -f docker-compose-gitlab.yml up.
If you find they are in the same network, try using the container name instead of localhost to reach the other service.
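Once the containers do share a network, the application reaches the database via the service name, not localhost. A sketch of the relevant .env.docker entries (the keys are Laravel's defaults; the values are taken from the compose file above):

DB_CONNECTION=mysql
DB_HOST=mysql
DB_PORT=3306
DB_DATABASE=dbname
DB_USERNAME=username
DB_PASSWORD=***

Note DB_PORT is the container-side port 3306, not the published 3307, because the containers talk to each other over the compose network.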
I have a compose file as follows:
redis:
  image: redis
  ports:
    - "6379:6379"
php:
  build: .
  image: php:fpm
  volumes:
    - ./code:/var/www/html
  links:
    - redis:redis
  networks:
    - code-network
I'm entering the php container with the following command:
docker exec -it php_id /bin/bash
but I can't run the redis-cli command in this container. What do I need to do to run it?
I added the links parameter to the compose file, but it didn't help.
You are putting the php-fpm container in a network of its own. Here is a fixed compose file:
version: "3"
services:
redis:
image: redis
ports:
- "6379:6379"
php:
build: .
image: php:fpm
volumes:
- ./code:/var/www/html
networks:
- code-network
- default
networks:
code-network:
See this for more info on compose networking.
About the redis-cli issue: you'd need to add the appropriate repository in the php-fpm container and then install it. As you are using the php:fpm image, you probably want to use Redis from a PHP application, so you don't need Debian's redis-cli package, but rather the PHP extension.
See this post for more info.
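For the extension route, a minimal Dockerfile sketch for the php service (an assumption, not taken from the linked post), using PECL to install phpredis:

FROM php:fpm
# Install and enable the redis PHP extension (phpredis)
RUN pecl install redis && docker-php-ext-enable redis

If you just want the redis-cli binary for debugging inside the container, Debian's redis-tools package provides it (apt-get install redis-tools).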