More question detail
So, where can I find the data of the Ghost blog?
If we take a look at the description of the ghost image on Docker Hub, we find this:
This Docker image for Ghost uses SQLite. There is nothing special to configure.
Moreover, if we open the latest Dockerfile (which at the moment is 2.9.1, 2.9, 2, latest, i.e. 2/debian/Dockerfile), we can see the following line:
gosu node yarn add "sqlite3#$sqlite3Version" --force --build-from-source; \
This installs an SQLite database inside the Ghost container. This database will contain all your posts, users, etc.
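If you want to see where that file lives, a quick check from the host (a sketch; <ghost-container> is a placeholder for your container's name, and the exact database file name can vary with Ghost's configuration):
docker exec <ghost-container> ls /var/lib/ghost/content/data
The image's default content path is /var/lib/ghost/content, so mounting that directory, as the stack below does, is what persists the SQLite data.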
The same applies to the ghost:2.9 image.
The guide from the docker-ghost page says to run it with:
docker-compose -f stack.yml up
Then I can get the data from the MySQL Docker container.
So why does it not work this way for me (see the question detail)? This confuses me ~/(ToT)/~~
version: '3.1'
services:
  ghost:
    image: ghost:2.9
    ports:
      - 2368:2368
    volumes:
      - $PWD/content:/var/lib/ghost/content
    environment:
      # see https://docs.ghost.org/docs/config#section-running-ghost-with-config-env-variables
      database__client: mysql
      database__connection__host: localhost
      database__connection__user: root
      database__connection__password: anywhere
      database__connection__database: ghost
  db:
    image: mysql:5.7
    volumes:
      - $PWD/data:/var/lib/mysql
    ports:
      - 3307:3306
    environment:
      MYSQL_ROOT_PASSWORD: anywhere
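As an aside, one likely source of the confusion above: inside the ghost container, localhost refers to the ghost container itself, not to the db service. A sketch of the usual compose-networking fix (the service name db and the container port 3306 are taken from the stack above):
ghost:
  environment:
    database__client: mysql
    database__connection__host: db # the compose service name, not localhost
    database__connection__port: 3306 # the container port, not the host-mapped 3307
  depends_on:
    - db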
Related
I am currently aware of two ways to get an existing MySQL database into a Docker database container: docker-entrypoint-initdb.d and source /dumps/dump.sql. I am new to Docker and would like to know whether there are any differences between the two approaches, or whether there are special use cases where one or the other is used. Thank you!
Update
How I use source:
In my docker-compose.yml file I have these few lines:
mysql:
  image: mysql:5.7
  container_name: laravel-2021-mysql
  volumes:
    - db_data:/var/lib/mysql
    - ./logs/mysql:/var/log/mysql
    - ./dumps/:/home/dumps # <--- this is for the dump
docker exec -it laravel-2021-mysql bash # container_name from the compose file above
mysql -uroot -p
CREATE DATABASE newDB;
USE newDB;
SOURCE /home/dumps/dump.sql;
How I use docker-entrypoint-initdb.d (but this does not work):
On my host I create the folder dumps and put dump.sql in it.
My docker-compose.yml file:
mysql:
  image: mysql:5.7
  container_name: laravel-2021-mysql
  volumes:
    - db_data:/var/lib/mysql
    - ./logs/mysql:/var/log/mysql
    - ./dumps/:/docker-entrypoint-initdb.d
Then: docker-compose up. But I can't find the dump in my database. I must be doing something wrong.
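The likely cause, going by how the official mysql image behaves: scripts in /docker-entrypoint-initdb.d are executed only on the very first start, when the data directory is empty. Since the db_data volume already contains an initialized database, the dump is skipped. A sketch of a clean re-run (warning: the -v flag deletes the volume and all data in it):
docker-compose down -v   # removes the containers AND the db_data volume
docker-compose up        # the data directory is empty again, so dump.sql runs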
I have docker-compose.yml on my local machine like below:
version: "3.3"
services:
  api:
    build: ./api
    volumes:
      - ./api:/api
    ports:
      - 3000:3000
    links:
      - mysql
    depends_on:
      - mysql
  app:
    build: ./app
    volumes:
      - ./app:/app
    ports:
      - 80:80
  mysql:
    image: mysql:8.0.27
    volumes:
      - ./mysql:/var/lib/mysql
    tty: true
    restart: always
    environment:
      MYSQL_DATABASE: db
      MYSQL_ROOT_PASSWORD: qwerty
      MYSQL_USER: db
      MYSQL_PASSWORD: qwerty
    ports:
      - '3306:3306'
The api service is a NestJS app; app and mysql are Angular and MySQL respectively. I need to work on this locally.
How can I make it so that my changes are applied without rebuilding the containers every time?
You don't have to build an image for a development environment with your sources in it. For NestJS, and since you're using Docker (I point this out deliberately because other container runtimes exist), you can simply run a Node.js image from the main Docker registry: https://hub.docker.com/_/node.
You could run it with:
docker run -d -v "$PWD/app":/app node:12-alpine node /app/index.js
N.B.: I chose 12-alpine for the example, and I assume the file that starts your app is index.js; replace them with yours. Note that docker run needs an absolute host path for a bind mount, hence $PWD.
Keep in mind that you must install the Node dependencies yourself, and they must be in the ./app directory.
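If you don't have Node installed locally, you can install those dependencies with the same image (a sketch; yarn ships with the official node images):
docker run --rm -v "$PWD/app":/app -w /app node:12-alpine yarn install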
For docker-compose, it could look like this:
version: "3.3"
services:
  app:
    image: node:12-alpine
    command: node /app/index.js
    volumes:
      - ./app:/app
    ports:
      - "80:80"
Same way for your API project.
For a production image, it is still suggested to build the image with the sources in it.
Say you're working on your front-end application (app). It needs to make calls out to the other components, especially api. So you can start the things it depends on, but not the application itself:
docker-compose up -d api
Update your application configuration for this different environment; if you would have proxied to http://api:3000 before, for example, you need to change this to http://localhost:3000 to connect to the container's published ports:.
Now you can develop your application totally normally, without doing anything Docker-specific.
# outside Docker, on your normal development workstation
yarn run dev
$EDITOR src/components/Foo.tsx
You might find it convenient to use environment variables for these settings that will, well, differ per environment. If you're developing the back-end code but want to attach a live UI to it, you'll either need to rebuild the container or update the front-end's back-end URL to point at the host system.
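For example, something like this (a sketch; API_URL is a hypothetical variable name, and how it gets read depends entirely on your dev tooling):
# hypothetical variable; points the dev server at the published container port
API_URL=http://localhost:3000 yarn run dev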
This approach also means you do not need to bind-mount your application's code into the container, and I'd recommend removing those volumes: blocks.
I am using MariaDB as my MySQL Docker container and am having trouble importing the data from the Docker volume.
My database dockerfile is similar to the one posted at this link. https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html/getting_started_with_containers/install_and_deploy_a_mariadb_container
Instead of importing the data as shown in the example in the above link, I would like to import it from a docker volume.
I did try the docker-entrypoint.sh example, where I loop through the files in docker-entrypoint-initdb.d, but that gets me a mysql.sock error, probably because the database has already been shut down by the Dockerfile's RUN command.
database:
  build:
    context: ./database
  ports:
    - "3306:3306"
  volumes:
    - ./data/data.sql:/docker-entrypoint-initdb.d/data.sql
  environment:
    - MYSQL_DATABASE=hell
    - MYSQL_USER=test
    - MYSQL_PASSWORD=secret
    - MYSQL_ROOT_PASSWORD=secret
I think your problem is this:
You want to load your schema through the mounted volume, but you don't see it in the database, right?
First of all, if your intention is to use MariaDB, you can use the official MariaDB Docker image.
That way you avoid Red Hat images and custom builds.
Second, you're copying the SQL file in, but you never run a mysql import, dump, or anything like that. So what you can do is put a command in your docker-compose file (for example):
database:
  build:
    context: ./database
  ports:
    - "3306:3306"
  volumes:
    - ./data/data.sql:/docker-entrypoint-initdb.d/data.sql
  environment:
    - MYSQL_DATABASE=hell
    - MYSQL_USER=test
    - MYSQL_PASSWORD=secret
    - MYSQL_ROOT_PASSWORD=secret
  # note: the shell is needed for the < redirection, and this replaces
  # the image's default startup command
  command: sh -c "mysql -u username -p database_name < file.sql"
But this is not the best way.
On the other hand, you can follow the MariaDB documentation: set up a mounted volume and import the data on the first run. Thanks to the volume, you don't have to run the imports again.
This is the mount that you have to put:
/my/own/datadir:/var/lib/mysql
Alternatively, you can simply mount your SQL file into a /tmp/ folder, open a shell in the container with docker exec, and run the import from inside.
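That manual import could look like this (a sketch; <database-container> is a placeholder, the database name hell comes from the compose file above, and it assumes data.sql was mounted at /tmp/data.sql):
docker exec -it <database-container> sh -c 'mysql -u root -p"$MYSQL_ROOT_PASSWORD" hell < /tmp/data.sql'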
Here's my docker-compose.yml file, adapted from here:
version: '3.1'
services:
  mysql:
    image: mariadb
    environment:
      MYSQL_DATABASE: drupal8
      MYSQL_USER: drupal8
      MYSQL_PASSWORD: drupal8
      MYSQL_ROOT_PASSWORD: admin
    volumes:
      - /var/lib/mysql
    restart: always
  drupal:
    image: drupal:8.2-apache
    ports:
      - 8080:80
    volumes:
      - /var/www/html/modules
      - /var/www/html/profiles
      - /var/www/html/themes
      # this takes advantage of the feature in Docker that a new anonymous
      # volume (which is what we're creating here) will be initialized with the
      # existing content of the image at the same location
      - /var/www/html/sites
    restart: always
    links:
      - mysql
Now on running this and opening up localhost:8080 in my browser, I'm presented with Drupal's configuration setup, which I duly follow and presto, my first Drupal page is created. What I ultimately need to do is:
Save the configuration somehow, so that the settings persist
Be able to push these two containers to a single repository in Docker Hub
The end goal is to be able to issue docker run myDockerHubUsername/myRepo, which would pull these two containers and Drupal would be preconfigured.
Your docker-compose setup is already saving all the data/configuration you made. Even if you destroy the containers, the data persists.
You need to keep your mounted volumes!
If you want to run these somewhere else, you need to carry your data/volumes with you (see the sketch below). Remember to check or change the paths.
As for the second point, it is not advisable to pack multiple services into one image. If you still want to, you need to prepare a Dockerfile and build a single image out of that.
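Regarding carrying the volumes with you: since the compose file above uses anonymous volumes, one way to move the data is to archive it from a throwaway helper container. A sketch (the container and archive names are placeholders):
# archive the mysql service's data directory into the current host directory
docker run --rm --volumes-from <mysql-container> -v "$PWD":/backup alpine tar czf /backup/mysql-data.tgz /var/lib/mysql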
A few days ago, I installed docker on my new laptop. I've used docker for a while and know the basics pretty well. Yet, for some reason I keep bumping into the same problem and I hope someone here can help me.
After installing the Docker Toolbox on my Windows 10 Home laptop, I tried to run some images that I've created using a docker-compose.yml. Since my user directory on Windows has my exact name in it (C:/Users/Nick van der Meij) and that name contains spaces, I added an extra shared folder from C:/code to /mnt/code on the Docker host (this works). I've used this guide to do so.
However, when I try to run my docker-compose.yml (included below), I get the following error:
ERROR: for php Cannot create container for service php: create \mnt\code\basic_php\api: "\\mnt\\code\\basic_php\\api" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed
ERROR: Encountered errors while bringing up the project.
As far as I can see, everything is correct according to the official Docker docs about volumes. I've spent many hours trying to fix this and tried multiple "formats" for the volumes tag, without any success.
Does anyone know what the problem might be?
Thanks in Advance!
docker-compose.yml
version: '2'
services:
  mysql:
    image: mysql:5.7
    ports:
      - 3306
    volumes:
      - /var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: password
      MYSQL_USER: user
      MYSQL_PASSWORD: password
      MYSQL_DATABASE: database
  nginx:
    image: nginx:1.10.2
    ports:
      - 80:80
      - 443:443
    restart: always
    volumes:
      - /mnt/code/basic_php/nginx/conf:/etc/nginx/conf.d
      - /mnt/code/basic_php/api:/code/api
      - /mnt/code/basic_php/nginx:/code/nginx
    links:
      - php
      - site
    depends_on:
      - php
      - site
  php:
    build: php
    expose:
      - 9000
    restart: always
    volumes:
      - /mnt/code/basic_php/php/conf/php.ini:/usr/local/etc/php/conf.d/custom.ini
      - /mnt/code/basic_php/api:/code/api
    links:
      - mysql
  site:
    restart: always
    build: site
    ports:
      - 80
    container_name: site
After a few hours searching the web, I finally found what I was looking for. As Wolfgang Blessen said in the comments below my question, the problem was indeed a Windows path problem.
If you don't want Docker to automatically convert paths between Windows and Unix style, you need to set the COMPOSE_CONVERT_WINDOWS_PATHS environment variable to 0, as explained here: link
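In Git Bash that would look like this (a sketch mirroring the setting above):
export COMPOSE_CONVERT_WINDOWS_PATHS=0
docker-compose up -d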
Alternatively, use Git Bash and run:
export COMPOSE_CONVERT_WINDOWS_PATHS=1
then execute
docker-compose up -d
Or simply use double backslashes:
winpty docker run -it -v C:\\path\\to\\folder:/mount