From within my Docker pgAdmin container, I want to access a PostgreSQL backup file located in my Windows 10 OS.
So I'm trying to set up a shared directory.
Running this command succeeds, and the directory is linked to the container.
docker run --name=windows10 -d -v C:\Users\johndoe:/windows10 -p 5554:80 dpage/pgadmin4 -e PGADMIN_DEFAULT_EMAIL=john#doe.com -e PGADMIN_DEFAULT_PASSWORD=whatever
However, the container fails on startup, logging this error, so the directory never mounts:
You need to specify PGADMIN_DEFAULT_EMAIL and PGADMIN_DEFAULT_PASSWORD environment variables
What is this sorcery??
Move the environment variables to before the image name
docker run --name=windows10 -d -v C:\Users\johndoe:/windows10 -p 5554:80 -e PGADMIN_DEFAULT_EMAIL=john#doe.com -e PGADMIN_DEFAULT_PASSWORD=whatever dpage/pgadmin4
-e is an option and must be specified between run and IMAGE (see https://docs.docker.com/engine/reference/commandline/run/)
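For illustration: everything after the image name is passed to the container as its COMMAND and arguments, which is why pgAdmin never saw the -e flags. A quick way to see this, using the stock alpine image:
docker run --rm alpine echo hello
Here "echo hello" is the COMMAND handed to the container, not options for docker run; the same thing happened to the -e flags placed after dpage/pgadmin4.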
I hope someone can help me locate the issue. I created a SQL Server 2019 container using this command:
docker run --name SQL19 -p 1433:1433 -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=zzzzz258*" -v /SqlDockerVol/userdatabase:/userdatabase -v /SqlDockerVol/sqlbackups:/sqlbackups -d mcr.microsoft.com/mssql/server:2019-latest
The problem I am having is that the container does not show the files I saved in the /sqlbackups folder.
I am using Ubuntu 20.04.
I logged into the SQL19 container like this:
docker exec -it SQL19 /bin/bash
then issued ls sqlbackups to confirm.
Do I need to set any permissions on the host folder? I am not familiar with Linux.
Thanks
I suspect you need to pass the absolute path to your folder. Is
/SqlDockerVol/userdatabase
the full absolute path? If it is relative, change it to:
$(pwd)/SqlDockerVol/userdatabase
Note that if the host directory passed to -v does not exist, Docker silently creates an empty directory at that path and mounts it, which would also explain files not showing up.
Please check this question: Docker shared folder with Linux
And technically you need something like:
docker run --name SQL19 -p 1433:1433 -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=zzzzz258*" -v $(pwd)/SqlDockerVol/userdatabase:/userdatabase -v $(pwd)/SqlDockerVol/sqlbackups:/sqlbackups -d mcr.microsoft.com/mssql/server:2019-latest
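As a quick sanity check, you can ask Docker what it actually mounted (using the SQL19 container name from the question):
docker inspect -f '{{ json .Mounts }}' SQL19
The Source field of each mount shows the resolved host path; if it points at a directory Docker created fresh, that directory will be empty, which would explain the missing backup files.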
Goal
Action: Run command from my local machine
Result: Docker image deployed on cloud instance
Approach
For remote deployment, I am using gcloud commands.
The command below works; the only problem is that it is not picking up the environment variables file, i.e. .env. I have this .env file placed in the working directory.
Command:
gcloud beta compute ssh --quiet --zone "us-west1-b" "devop-beta-persistent-2" --project "my-project" --command 'sudo docker run -p 8080:8080 -p 8443:8443 -p 50000:50000 -v ~/jenkins_data:/var/jenkins_home -v $FILE_PATH/jenkins.yaml:/var/configurations/jenkins_casc.yml --name jenkins-devkit --env-file $PWD/.env $JENKINS_IMAGE:latest'
Error: docker: open /.env: no such file or directory.
What I already tried
I have tried setting the path to:
.env
/full/path/to/.env
$PWD/.env
but still getting the same error.
If I run this command on my local machine, it works fine, i.e. it picks up the .env file.
sudo docker run -p 8080:8080 -p 8443:8443 -p 50000:50000 -v ~/jenkins_data:/var/jenkins_home -v $FILES_PATH/jenkins.yaml:/var/configurations/jenkins_casc.yml --name jenkins-devkit --env-file $PWD/.env $JENKINS_IMAGE:latest
Can anyone suggest a possible solution?
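One likely factor: the single-quoted --command string is executed on the remote instance, so $PWD and the .env path are resolved there, not on the machine where you typed the command. A minimal sketch of one workaround, assuming the instance, zone, and project names from the question, is to copy the file over first:
gcloud compute scp .env devop-beta-persistent-2:~/.env --zone "us-west1-b" --project "my-project"
and then point --env-file at that path, which actually exists on the instance.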
I used to have a Docker volume for mariadb, which contained my database. As part of migrating from Docker to Podman, I am trying to migrate the db volume as well. The way I tried this is as follows:
1. Copy the contents of the named Docker volume (/var/lib/docker/volumes/mydb_vol) to a new directory I want to use for Podman volumes (/opt/volumes/mydb_vol)
2. Run podman run:
podman run --name mariadb-service -v /opt/volumes/mydb_vol:/var/lib/mysql/data:Z -e MYSQL_USER=wordpress -e MYSQL_PASSWORD=mysecret -e MYSQL_DATABASE=wordpress --net host mariadb
This successfully creates a container and initializes the database with the given environment variables. The problem is that the database in the container is empty! I tried changing the host-mounted volume to /opt/volumes/mydb_vol/_data and the container volume to /var/lib/mysql, both simultaneously and one at a time. Neither worked.
As a matter of fact, when I "podman exec -ti container_digest bash" into the resulting container, I can see that the tables have been mounted successfully in the specified container directories, but the mysql shell says the database is empty!
Any idea how to properly migrate Docker volumes to Podman? Is this even possible?
I solved it by not treating the directory as a Docker volume, but instead bind-mounting it into the container:
podman run \
--name mariadb-service \
--mount type=bind,source=/opt/volumes/mydb_vol/data,destination=/var/lib/mysql \
-e MYSQL_USER=wordpress \
-e MYSQL_PASSWORD=mysecret \
-e MYSQL_DATABASE=wordpress \
mariadb
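As a quick check, assuming the container name from the command above, you can confirm the data files landed where MariaDB expects them:
podman exec -it mariadb-service ls /var/lib/mysql
The wordpress database directory should show up in that listing.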
I would like to run this command:
docker run docker-mup deploy --config .deploy/mup.js
where docker-mup is the name of the image, and deploy, --config, and .deploy/mup.js are arguments
My question: how to mount a volume such that .deploy/mup.js is understood as the relative path on the host from where the docker run command is run?
I tried different things with VOLUME, but it seems that VOLUME does the opposite: it exposes a container directory to the host.
I can't use -v because this container will be used as a build step in a CI/CD pipeline and as I understand it, it is just run as is.
Using -v to expose your current directory is the only way to make that .deploy/mup.js file available inside your container, unless you bake it into the image itself using a COPY directive in your Dockerfile.
Using the -v option to map a host directory might look something like this:
docker run \
-v $PWD/.deploy:/data/.deploy \
-w /data \
docker-mup deploy --config .deploy/mup.js
This would map (using -v ...) the $PWD/.deploy directory onto /data/.deploy in your container, set the current working directory to /data (using -w ...), and then run deploy --config .deploy/mup.js.
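A small addition to the same approach: if the host path can contain spaces, quote the expansion so the mount argument stays intact:
docker run -v "$PWD/.deploy":/data/.deploy -w /data docker-mup deploy --config .deploy/mup.js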
Windows - PowerShell
If you're inside the directory you want to bind mount, use ${pwd}:
docker run -it --rm -d -p 8080:80 --name web -v ${pwd}:/usr/share/nginx/html nginx
or $pwd/. (forward slash dot):
docker run -it --rm -d -p 8080:80 --name web -v $pwd/.:/usr/share/nginx/html nginx
Just $pwd will cause an error:
docker run -it --rm -d -p 8080:80 --name web -v $pwd:/usr/share/nginx/html nginx
Variable reference is not valid. ':' was not followed by a valid variable name character. Consider using ${} to delimit the name.
Mounting a subdirectory underneath your current location, e.g. "site-content", works fine with $pwd/ plus the subdirectory name:
docker run -it --rm -d -p 8080:80 --name web -v $pwd/site-content:/usr/share/nginx/html nginx
In my case there was no need for $pwd; the standard current-folder notation . was enough, since Compose resolves relative host paths against the directory containing the compose file. For reference, I used docker-compose.yml and ran docker-compose up.
Here is the relevant part of docker-compose.yml:
volumes:
  - '.\logs\:/data'
I am new to Docker. I have a small Java application that I am trying to run inside Docker. I have created a Dockerfile to build the image.
My application is reading Environment Variables to know which database to connect to.
When running the command
docker run -d -p 80:80 occm -e "MYSQL_USER=user" -e "MYSQL_PASSWORD=password" -e "MYSQL_PORT=3306" -e "MYSQL_HOST=somehost"
and then enumerating all the variables using System.getenv, I don't see any of them. So I added this to the Dockerfile:
ENV MYSQL_HOST=localhost
Now when I run the container, I see this variable, but with the value localhost instead of somehost.
What am I doing wrong?
The problem is how you are running your docker image.
$ docker run --help
Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
So you are passing -e "..." -e "..." as the COMMAND and ARG... values.
You need to pass -e as part of [OPTIONS], before the image name:
$ docker run -d -p 80:80 -e "MYSQL_USER=user" -e "MYSQL_PASSWORD=password" -e "MYSQL_PORT=3306" -e "MYSQL_HOST=somehost" occm
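To double-check, you can list the environment inside the running container (the container ID here is a placeholder):
docker exec <container-id> env | grep MYSQL_
All four MYSQL_* variables should appear in the output.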