PyCharm Docker run configuration does not accept environment variables - docker

I am trying to set up a Docker run configuration in PyCharm. I am pretty new to this functionality, and I can't get it working.
In Docker I would run the container with the following command:
docker build -t test-container . && docker run --name container-pycharm -t -i --env-file .env -v $(pwd):/srv/app -p 8080:8080 --rm test-container ./serve-app
I set up this in pycharm, by adding the following line
--rm --env-file .env -i -t -p 8080:8080 -v $(pwd):/srv/app
to the Command line options section of the relevant Docker Run/Debug Configuration window in PyCharm. Unfortunately I get:
Failed to deploy 'container-pycharm Dockerfile: Dockerfile': com.github.dockerjava.api.exception.BadRequestException: {"message":"create $(pwd): \"$(pwd)\" includes invalid characters for a local volume name, only \"[a-zA-Z0-9][a-zA-Z0-9_.-]\" are allowed. If you intended to pass a host directory, use absolute path"}
Clearly, I can't use $(pwd) in my command-line options. Any idea how to solve this in PyCharm?

PyCharm doesn't invoke docker directly via the command you see in the command preview; it goes through its own parser and the Docker API, and that parser currently doesn't implement shell substitutions or reading environment variables. Docker therefore receives the literal string $(pwd), hence "If you intended to pass a host directory, use absolute path".
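The underlying issue is easy to see in a plain POSIX shell: $(pwd) is shell command substitution, which only a shell performs, so a tool that bypasses the shell passes the text through literally. A small sketch:

```shell
# In a shell, $(pwd) is command substitution: the shell replaces it
# with the current directory before the command runs.
printf '%s\n' "$(pwd)"    # prints the current working directory

# Single quotes suppress substitution; this literal string is
# effectively what a tool that bypasses the shell sends to Docker.
printf '%s\n' '$(pwd)'    # prints the literal text: $(pwd)
```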
Also, -v is not officially supported in the command-line options in the current version. Ref
Use bind mounts in the run configuration instead.
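In practice that means entering the host path and container path (for example /home/me/app and /srv/app; the host path here is made up, use your own project directory) in the run configuration's Bind mounts field rather than relying on $(pwd). On the plain CLI, the fully resolved equivalent of the original command would look like:

```shell
# Same command as in the question, with $(pwd) already resolved;
# /home/me/app is a placeholder absolute path.
docker build -t test-container . && docker run --name container-pycharm \
  -t -i --env-file .env -v /home/me/app:/srv/app -p 8080:8080 \
  --rm test-container ./serve-app
```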

Related

racadm - ERROR: Specified file <filename> does not exist

I'm trying to run racadm both in Windows Powershell using the official utility and on my Mac using this Docker container. In both instances, I can pull the RAC details, so I know my login and password are valid, but when I try to perform an sslkeyupload, I get the following error:
ERROR: Specified file file.pem does not exist.
The permissions on the file, at least on my Mac, are wide open (chmod 777), and the file is in the same directory I'm trying to run the script from:
docker run stackbot/racadm -r 10.10.1.4 -u root -p calvin sslkeyupload -t 1 -f ./file.pem
Anyone see anything obvious I may be doing wrong?
You're running the command inside a Docker container. It has no visibility into your local filesystem unless you explicitly expose your directory inside the container using the -v command-line option:
docker run -v $PWD:$PWD -w $PWD ...
The -v option creates a bind mount, and the -w option sets the working directory.
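Applied to the racadm command from the question, a sketch might look like this (run from the directory containing file.pem; the $PWD expansion assumes a POSIX shell such as the one on macOS):

```shell
# Mount the current directory into the container at the same path
# and make it the working directory, so ./file.pem resolves.
docker run -v "$PWD:$PWD" -w "$PWD" stackbot/racadm \
  -r 10.10.1.4 -u root -p calvin \
  sslkeyupload -t 1 -f ./file.pem
```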

Docker: Error parsing configuration files, file does not exist

I am following a tutorial and using sqlc in my project. However, it's weird that I seem to be mounting an empty volume. After checking another post about mounting the host directory, I found that Docker creates another empty folder, confirming that I did something wrong. The Docker documentation doesn't help resolve this issue. Currently, my commands, run from a bash terminal, are:
docker run --rm -v $(pwd)://src -w //src kjconroy/sqlc init
docker run --rm -v $(pwd)://src -w //src kjconroy/sqlc generate
The first command runs successfully but creates another empty folder. The built container is running, and its path is \\wsl$\docker-desktop-data\data\docker\volumes on my Windows 10 machine. However, the folder structure is different from the tutorial when I download Docker Desktop, so I'll add extra information about how I constructed the setup. It is built with Make and Docker:
postgres:
	docker run --name postgreslatest -p 5432:5432 -e POSTGRES_USER=root -e POSTGRES_PASSWORD=secret -d postgres
createdb:
	docker exec -it postgreslatest createdb --username=root --owner=root simple_bank
dropdb:
	docker exec -it postgreslatest dropdb simple_bank
migrateup:
	migrate -path db/migration -database "postgresql://root:secret@localhost:5432/simple_bank?sslmode=disable" -verbose up
migratedown:
	migrate -path db/migration -database "postgresql://root:secret@localhost:5432/simple_bank?sslmode=disable" -verbose down
.PHONY: postgres createdb dropdb migrateup migratedown
Any help is appreciated.
I got it working. First of all, I still have no idea why the bash command cannot correctly locate the sqlc.yaml file. However, under Windows 10, I succeeded in locating and generating files with the command provided by the docs.
The command is docker run --rm -v "%cd%:/src" -w /src kjconroy/sqlc generate, run from plain CMD only, and it also works when called from the Makefile.

How to convert docker command to docker-compose

I wanted to convert a docker run command to Docker Compose. Can you please give me a clue?
docker run -dit -h nginx --name=nginx --net=internal -p 8085:80 --restart=always -v /default.conf:/etc/nginx/conf.d/default.conf nginx:latest
Use docker run --help to understand what each of the used options does. Then proceed to the Compose file reference and find there how each one is configured in YAML.
Note that some command-line arguments have no equivalent in the Compose file. That is either because they are not yet implemented or because they are options of the docker-compose command itself rather than of the file. An example of the latter is -d, which runs the container in detached mode; its equivalent for docker-compose is also -d (e.g. docker-compose up -d).
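For the specific command in this question, a sketch of an equivalent docker-compose.yml might be (assuming the internal network was created separately, hence external: true):

```yaml
version: "3"
services:
  nginx:
    image: nginx:latest          # image from the original command
    container_name: nginx        # --name=nginx
    hostname: nginx              # -h nginx
    ports:
      - "8085:80"                # -p 8085:80
    restart: always              # --restart=always
    volumes:
      - /default.conf:/etc/nginx/conf.d/default.conf   # -v ...
    stdin_open: true             # -i
    tty: true                    # -t
    networks:
      - internal                 # --net=internal
networks:
  internal:
    external: true
```

The -d flag has no key in the file; it is covered by running docker-compose up -d.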

Docker bench - How to persist logs or supply log file argument

I am following the tutorial to run docker bench from its GitHub page
I am executing it as follows:
C:/ docker ps
<lists running containers>
C:/ docker run -it --net host --pid host --userns host --cap-add audit_control -e DOCKER_CONTENT_TRUST=$DOCKER_CONTENT_TRUST -v /etc:/etc -v /usr/bin/docker-containerd:/usr/bin/docker-containerd -v /usr/bin/docker-runc:/usr/bin/docker-runc -v /usr/lib/systemd:/usr/lib/systemd -v /var/lib:/var/lib -v /var/run/docker.sock:/var/run/docker.sock --label docker_bench_security docker/docker-bench-security
The docker bench command works fine, and I see the colored output with my PASS/WARNs and my total score out of the total checks at the bottom.
The problem is that docker bench says "By default the Docker Bench for Security script will run all available CIS tests and produce logs in the current directory named docker-bench-security.sh.log.json and docker-bench-security.sh.log"
However in my root (C:) where I executed the commands, I do not see these two files.
I have also tried running the same docker bench command above but with the optional log argument
docker run docker/docker-bench-security..... -l logs.txt
But I do not see any file get created (and if I premake the file it is not populated).
Any ideas on how I can capture my docker bench output in a file?
The file is likely created inside the container. As you noticed, you can set its path using the -l path option, but if you want the file to appear on the host you need to mount that path as a volume. In other words, you need to run the following command:
docker run (...) -v /path/to/my-logs:/tmp/my-logs docker-bench-security (...) -l /tmp/my-logs/log.txt
where (...) are the existing parameters that you use.
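Combining this with the original invocation from the question, a sketch might look like the following (the host directory /tmp/bench-logs is an assumption; choose any path you like, and all other flags are copied from the question):

```shell
docker run -it --net host --pid host --userns host --cap-add audit_control \
  -e DOCKER_CONTENT_TRUST=$DOCKER_CONTENT_TRUST \
  -v /etc:/etc \
  -v /usr/bin/docker-containerd:/usr/bin/docker-containerd \
  -v /usr/bin/docker-runc:/usr/bin/docker-runc \
  -v /usr/lib/systemd:/usr/lib/systemd \
  -v /var/lib:/var/lib \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /tmp/bench-logs:/tmp/bench-logs \
  --label docker_bench_security \
  docker/docker-bench-security -l /tmp/bench-logs/log.txt
```

After the run, log.txt should appear in /tmp/bench-logs on the host.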

Docker how to pass a relative path as an argument

I would like to run this command:
docker run docker-mup deploy --config .deploy/mup.js
where docker-mup is the name the image, and deploy, --config, .deploy/mup.js are arguments
My question: how to mount a volume such that .deploy/mup.js is understood as the relative path on the host from where the docker run command is run?
I tried different things with VOLUME but it seems that VOLUME does the contrary: it exposes a container directory to the host.
I can't use -v because this container will be used as a build step in a CI/CD pipeline and as I understand it, it is just run as is.
Using -v to expose your current directory is the only way to make that .deploy/mup.js file visible inside your container, unless you bake it into the image itself using a COPY directive in your Dockerfile.
Using the -v option to map a host directory might look something like this:
docker run \
-v $PWD/.deploy:/data/.deploy \
-w /data \
docker-mup deploy --config .deploy/mup.js
This would map (using -v ...) the $PWD/.deploy directory onto /data/.deploy in your container, set the current working directory to /data (using -w ...), and then run deploy --config .deploy/mup.js.
Windows - PowerShell
If you're inside the directory you want to bind mount, use ${pwd}:
docker run -it --rm -d -p 8080:80 --name web -v ${pwd}:/usr/share/nginx/html nginx
or $pwd/. (forward slash dot):
docker run -it --rm -d -p 8080:80 --name web -v $pwd/.:/usr/share/nginx/html nginx
Just $pwd will cause an error:
docker run -it --rm -d -p 8080:80 --name web -v $pwd:/usr/share/nginx/html nginx
Variable reference is not valid. ':' was not followed by a valid variable name character. Consider using ${} to delimit the name.
Mounting a subdirectory underneath your current location, e.g. "site-content", with $pwd/ plus the subdirectory name is fine:
docker run -it --rm -d -p 8080:80 --name web -v $pwd/site-content:/usr/share/nginx/html nginx
In my case there was no need for $pwd; using the standard current-folder notation . was enough. For reference, I used a docker-compose.yml and ran docker-compose up. Here is the relevant part of docker-compose.yml:
volumes:
- '.\logs\:/data'
