Docker: invalid characters for local volume name

A few days ago, I installed Docker on my new laptop. I've used Docker for a while and know the basics pretty well. Yet, for some reason I keep bumping into the same problem, and I hope someone here can help me.
After installing the Docker Toolbox on my Windows 10 Home laptop, I tried to run some images that I've created using a docker-compose.yml. Since my user directory on Windows contains my exact name (C:/Users/Nick van der Meij), and that name contains spaces, I added an extra shared folder from C:/code to /mnt/code on the Docker host (this works). I used this guide to do so.
However, when I try to run my docker-compose.yml (included below), I get the following error:
ERROR: for php Cannot create container for service php: create \mnt\code\basic_php\api: "\\mnt\\code\\basic_php\\api" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed
ERROR: Encountered errors while bringing up the project.
As far as I can see, everything is correct according to the official Docker docs about volumes. I've spent many hours trying to fix this and tried out multiple "formats" for the volumes tag, without any success.
Does anyone know what the problem might be?
Thanks in advance!
docker-compose.yml
version: '2'
services:
  mysql:
    image: mysql:5.7
    ports:
      - 3306
    volumes:
      - /var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: password
      MYSQL_USER: user
      MYSQL_PASSWORD: password
      MYSQL_DATABASE: database
  nginx:
    image: nginx:1.10.2
    ports:
      - 80:80
      - 443:443
    restart: always
    volumes:
      - /mnt/code/basic_php/nginx/conf:/etc/nginx/conf.d
      - /mnt/code/basic_php/api:/code/api
      - /mnt/code/basic_php/nginx:/code/nginx
    links:
      - php
      - site
    depends_on:
      - php
      - site
  php:
    build: php
    expose:
      - 9000
    restart: always
    volumes:
      - /mnt/code/basic_php/php/conf/php.ini:/usr/local/etc/php/conf.d/custom.ini
      - /mnt/code/basic_php/api:/code/api
    links:
      - mysql
  site:
    restart: always
    build: site
    ports:
      - 80
    container_name: site

After a few hours of searching the web, I finally found what I was looking for. As Wolfgang Blessen said in the comments below my question, the problem was indeed a Windows path problem.
If you don't want Docker to automatically convert Windows paths to Unix paths, you need to add the COMPOSE_CONVERT_WINDOWS_PATHS environment variable with a value of 0, as explained here: link
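A minimal sketch of where that variable can live, assuming you keep it in a .env file next to docker-compose.yml (Compose picks up COMPOSE_* variables from there automatically):

# .env, in the same directory as docker-compose.yml
# 0 disables Windows-to-Unix path conversion, 1 enables it
COMPOSE_CONVERT_WINDOWS_PATHS=0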

Use Git Bash and run
export COMPOSE_CONVERT_WINDOWS_PATHS=1
then execute
docker-compose up -d

Or simply use double backslashes:
winpty docker run -it -v C:\\path\\to\\folder:/mount
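As a side note (this is a Git for Windows feature, not a Docker one, so treat it as an assumption about your shell setup): you can also suppress Git Bash's MSYS path mangling per command, keeping single backslashes intact:

# disable MSYS path conversion for this one command; <image> is a placeholder
MSYS_NO_PATHCONV=1 winpty docker run -it -v "C:\path\to\folder:/mount" <image>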

Related

How to bind folders inside docker containers?

I have a docker-compose.yml on my local machine, shown below:
version: "3.3"
services:
api:
build: ./api
volumes:
- ./api:/api
ports:
- 3000:3000
links:
- mysql
depends_on:
- mysql
app:
build: ./app
volumes:
- ./app:/app
ports:
- 80:80
mysql:
image: mysql:8.0.27
volumes:
- ./mysql:/var/lib/mysql
tty: true
restart: always
environment:
MYSQL_DATABASE: db
MYSQL_ROOT_PASSWORD: qwerty
MYSQL_USER: db
MYSQL_PASSWORD: qwerty
ports:
- '3306:3306'
The api service is a NestJS app; app and mysql are Angular and MySQL respectively. I need to work with this setup locally.
How can I make it so that any of my changes are applied without rebuilding the containers every time?
You don't have to build an image with your sources in it for a development environment. For NestJS, and since you're using Docker (I deliberately point this out because other container runtimes exist), you can simply run a Node.js image from the main Docker registry: https://hub.docker.com/_/node.
You could run it with:
docker run -d -v "$(pwd)/app:/app" node:12-alpine node /app/index.js
N.B.: I chose 12-alpine for the example, and I imagine the file that starts your app is index.js; replace it with yours. Note that, unlike Compose, plain docker run needs an absolute host path for the bind mount, hence $(pwd).
You must take care of installing the node dependencies yourself, and they must be in the ./app directory.
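A hedged one-liner for that, running npm inside the same image so the installed dependencies match the container's platform (this assumes a package.json exists in ./app):

# install dependencies into ./app/node_modules using the container's Node
docker run --rm -v "$(pwd)/app:/app" -w /app node:12-alpine npm install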
For docker-compose, it could look like this:
version: "3.3"
services:
app:
image: node:12-alpine
command: /app/index.js
volumes:
- ./app:/app
ports:
- "80:80"
Do the same for your API project.
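For completeness, a minimal sketch of such an api service (dist/main.js is a hypothetical NestJS build output; substitute your actual start file):

  api:
    image: node:12-alpine
    command: node /api/dist/main.js
    volumes:
      - ./api:/api
    ports:
      - "3000:3000"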
For a production image, it is still suggested to build the image with the sources in it.
Say you're working on your front-end application (app). It needs to make calls out to the other components, especially api. So you can start the things it depends on, but not the application itself:
docker-compose up -d api
Update your application configuration for this different environment: if you proxied to http://api:3000 before, for example, you need to change this to http://localhost:3000 to connect via the container's published ports.
Now you can develop your application totally normally, without doing anything Docker-specific.
# outside Docker, on your normal development workstation
yarn run dev
$EDITOR src/components/Foo.tsx
You might find it convenient to use environment variables for these settings that will, well, differ per environment. If you're developing the back-end code but want to attach a live UI to it, you'll either need to rebuild the container or update the front-end's back-end URL to point at the host system.
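For instance (API_URL is a hypothetical variable name here; use whatever setting your framework actually reads):

# outside Docker: point the front-end at the API container's published port
API_URL=http://localhost:3000 yarn run dev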
This approach also means you do not need to bind-mount your application's code into the container, and I'd recommend removing those volumes: blocks.

Cannot exec into container using GitBash when using Docker Compose

I'm new to Docker Compose, but have used Docker for years. The screenshots below are of PowerShell and of Git Bash. If I run containers without docker-compose, I can docker exec -it <container_ref> /bin/bash with no problems from either of these shells.
However, when running with docker-compose up, both shells give no error when attempting to use docker-compose exec. They both just hang a few seconds and return to the prompt.
Lastly, for some reason I do get an error in Git Bash when using what I know: docker exec.... I've used this for years, so I'm perplexed and posting a question. What does Docker Compose do that messes with Git Bash's docker ability, but not with PowerShell's? And why the hang when using docker-compose exec..., but no error?
I am using tty: true in the docker-compose.yml, but that honestly doesn't seem to make a difference. Not to throw a bunch of questions into one post, but could whatever is going on also be the reason I can't hit my web server in the browser, only when using Docker Compose to run it?
version: '3.8'
volumes:
  pgdata:
    external: true
services:
  db:
    image: postgres
    container_name: trac-db
    tty: true
    restart: 'unless-stopped'
    environment:
      POSTGRES_PASSWORD: postgres
      POSTGRES_USER: postgres
      POSTGRES_DB: iol
    volumes:
      - pgdata:/var/lib/postgresql/data
    network_mode: 'host'
    expose:
      - 5432
  web:
    image: lindben/trac-server
    container_name: trac-server
    tty: true
    restart: 'unless-stopped'
    environment:
      ADDRESS: localhost
      PORT: 3000
      NODE_ENV: development
    depends_on:
      - db
    network_mode: 'host'
    privileged: true
    expose:
      - 1234
      - 3000
I'm going to assume you're using Docker Desktop. The reason you can docker exec just fine using PowerShell is that on Windows docker is a native program/command, whereas Git Bash, being based on bash (bash = Bourne-Again SHell), is a Linux-style shell, where that's not the case.
So when using a Windows command that needs a TTY, you need some sort of "adapter", such as winpty, to bridge the gap between docker's interface and Git Bash's.
Here's a more detailed explanation of winpty.
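For example, a sketch using the container_name from the compose file above (this assumes the image ships bash; fall back to sh otherwise):

# prefix docker with winpty so Git Bash can allocate a proper TTY
winpty docker exec -it trac-server bash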
Putting all of this aside, if you're trying to use only the Compose options, it may be better to consult this question.
Now, regarding your web service issue: I think you're not actually publishing your application with the expose tag. Take a look at the docker-compose expose reference. What you need is to add a ports tag, like so, as referenced here:
db:
  ports:
    - "5432:5432"
web:
  ports:
    - "1234:1234"
    - "3000:3000"
Hope this solves your pickle ;)

Having permissions issues with Grafana 7.3.0 on Docker

I'm using docker-compose to create a Docker network of containers with InfluxDB, a Python script and Grafana, to harvest and visualize response codes, query times and other stats of different websites.
I am using the Grafana 7.3.0 image with a volume, and I have modified the paths environment variables so that I only have to use one volume to save all the data.
When I start the Grafana container it logs:
GF_PATHS_CONFIG='/etc/grafana/grafana.ini' is not readable.
GF_PATHS_DATA='/etc/grafana/data' is not writable.
GF_PATHS_HOME='/etc/grafana/home' is not readable.
You may have issues with file permissions, more information here: http://docs.grafana.org/installation/docker/#migration-from-a-previous-version-of-the-docker-container-to-5-1-or-later
mkdir: can't create directory '/etc/grafana/plugins': Permission denied
But here is the thing: I'm not migrating from below 5.1. I'm not migrating at all!
So I tried to follow their instructions to change the permissions of the files, but it did not work.
I tried to set the user id in the docker-compose file, but it did not help (as said in the docs, 472 == post-5.1 and 104 == pre-5.1, but neither worked).
I can't even change the permissions manually (which would not be a satisfying solution anyway) because the container is crashing.
I normally don't ask questions because they already have answers, but I've seen no one with this trouble using 7.3.0, so I guess it's my time to shine. Haha.
Here is my docker-compose.yml (only the grafana part)
version: '3.3'
services:
  grafana:
    image: grafana/grafana:7.3.0
    ports:
      - '3000:3000'
    volumes:
      - './grafana:/etc/grafana'
    networks:
      - db-to-grafana
    depends_on:
      - db
      - influxdb_cli
    environment:
      - GF_PATHS_CONFIG=/etc/grafana/grafana.ini
      - GF_PATHS_DATA=/etc/grafana/data
      - GF_PATHS_HOME=/etc/grafana/home
      - GF_PATHS_LOGS=/etc/grafana/logs
      - GF_PATHS_PLUGINS=/etc/grafana/plugins
      - GF_PATHS_PROVISIONING=/etc/grafana/provisioning
    user: "472"
Thank you very much for your potential help!
Edit: I've been wondering if there is a grafana user in the latest version (8.0). I think that building a home dir for Grafana using a Dockerfile could be the solution; I just need to find that user.
I'm here to close this subject.
So this was kind of a noob mistake, but I could not have known.
The problem came from the fact that Grafana won't chown and chmod the volume folder itself. No error occurs, but it won't work, because it cannot save its data.
The solution was to remove the env variables and change the permissions of the local './grafana' folder which contained the volume.
So I did
chown -R <personal local user> /path/to/local/volume/folder && \
chmod -R 777 /path/to/local/volume/folder
And now it works normally.
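As a narrower alternative to 777 (an assumption on my part, based on the Grafana Docker docs stating the container runs as uid 472 from 5.1 onwards):

# give the volume folder to the uid the Grafana container runs as
sudo chown -R 472:472 ./grafana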
Here is my new docker compose
docker-compose.yml
grafana:
  image: grafana/grafana
  ports:
    - '3000:3000'
  volumes:
    - './grafana:/var/lib/grafana'
  networks:
    - db-to-grafana
  depends_on:
    - db
    - influxdb_cli
Thanks everybody for your help!
Just replace the user with your user's id, which you can get with the following command:
$ id -u
Running id -u in my terminal returns 1000, so I replaced user: "xxxx" with user: "1000" in docker-compose.yml:
version: '3.3'
services:
  grafana:
    image: grafana/grafana:7.3.0
    ports:
      - '3000:3000'
    volumes:
      - './grafana:/etc/grafana'
    networks:
      - db-to-grafana
    depends_on:
      - db
      - influxdb_cli
    environment:
      - GF_PATHS_CONFIG=/etc/grafana/grafana.ini
      - GF_PATHS_DATA=/etc/grafana/data
      - GF_PATHS_HOME=/etc/grafana/home
      - GF_PATHS_LOGS=/etc/grafana/logs
      - GF_PATHS_PLUGINS=/etc/grafana/plugins
      - GF_PATHS_PROVISIONING=/etc/grafana/provisioning
    user: "1000"

Where can I find the data of Ghost (posts, users, etc.)?

So, where can I find the data of my Ghost blog?
If we take a look at the description of the ghost image on Docker Hub, you can find this:
This Docker image for Ghost uses SQLite. There is nothing special to configure.
Moreover, if we open the latest Dockerfile (which at the moment is 2.9.1, 2.9, 2, latest (2/debian/Dockerfile)), we can see the following line:
gosu node yarn add "sqlite3#$sqlite3Version" --force --build-from-source; \
This installs an SQLite database inside the ghost container. This database will contain all your posts, users, etc.
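If you want to look at the file itself, it should sit under the content directory; the exact path is an assumption based on Ghost's default configuration, so adjust it if you've overridden database__connection__filename:

# list the data directory inside the running container; <ghost-container> is a placeholder
docker exec -it <ghost-container> ls /var/lib/ghost/content/data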
With the same image (ghost:2.9) and the guide from docker-ghost, I run docker-compose -f stack.yml up and can then get the data from the MySQL docker container.
Why can't I do it that way here (see question detail)? That confused me.
version: '3.1'
services:
  ghost:
    image: ghost:2.9
    ports:
      - 2368:2368
    volumes:
      - $PWD/content:/var/lib/ghost/content
    environment:
      # see https://docs.ghost.org/docs/config#section-running-ghost-with-config-env-variables
      database__client: mysql
      database__connection__host: localhost
      database__connection__user: root
      database__connection__password: anywhere
      database__connection__database: ghost
  db:
    image: mysql:5.7
    volumes:
      - $PWD/data:/var/lib/mysql
    ports:
      - 3307:3306
    environment:
      MYSQL_ROOT_PASSWORD: anywhere
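One observation about this file (a general Docker Compose networking fact, not something stated in the thread): containers on a Compose network reach each other by service name, not via localhost, so the ghost service would normally point at the db service:

environment:
  database__client: mysql
  database__connection__host: db   # the Compose service name, not localhost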

How to use IP addresses instead of container names in Docker Compose networking

I'm using Docker Compose for a web application that I'm creating with ASP.NET Core, Postgres and Redis. I have everything set up in Compose to connect to Postgres using the service name I've specified in the docker-compose.yml file. When trying to do the same with Redis, I get an exception. After doing research, it turns out this exception is a known issue, and the workaround is to use the IP address of the machine instead of a host name. However, I cannot figure out how to get the IP address of the redis service from the compose file. Is there a way to do that?
Edit
Here is the compose file
version: "3"
services:
postgres:
image: 'postgres:9.5'
env_file:
- '.env'
volumes:
- 'postgres:/var/lib/postgresql/data'
ports:
- '5433:5432'
redis:
image: 'redis:3.0-alpine'
command: redis-server --requirepass devpassword
volumes:
- 'redis:/var/lib/redis/data'
ports:
- '6378:6379'
web:
build: .
env_file:
- '.env'
ports:
- "8000:80"
volumes:
- './src/edb/Controllers:/app/Controllers'
- './src/edb/Views:/app/Views'
- './src/edb/wwwroot:/app/wwwroot'
- './src/edb/Lib:/app/Lib'
volumes:
postgres:
redis:
OK, I found the answer. It was something I had been trying, but I didn't realize the address may change every time you restart the containers.
Run docker ps to get a list of running containers, then copy the id of your container and run docker inspect {container_id}; that will output the IP address you can use to reach it from within the other running containers.
The reason I was confused is that this address may change when the containers are started, so I had to guess what the IP address was going to be before I started the containers. Luckily, after 5 tries I guessed correctly.
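To pull just the address out of docker inspect's fairly large output, a Go-template filter helps (the -f flag is standard docker; the exact network layout depends on your Compose project):

# print only the container's IP address on its attached network(s)
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' {container_id}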
