Using Volumerize to back up my docker volumes with scp?

I have a couple of docker volumes I want to back up onto another server, using scp/sftp. I didn't know how to approach that, so I decided to have a look at the blacklabelops/volumerize GitHub project.
This tool is based on the command line tool Duplicity, dockerized and parameterized for easier use and configuration. The tutorial deals with a Jenkins container, but I don't understand how to specify that I want to use a .pem file.
I've tried different solutions (adding the -i option to the scp command line) without any success so far.
The Duplicity man page mentions the use of CA certificate .pem files (the --ssl-cacert-file option), but I suppose I have to create an env variable when running the container (with the -e option), and I don't know which name to use.
Here's what I have so far; can someone please point me in the right direction?
docker run -d --name volumerize \
-v jenkins_volume:/source:ro \
-v backup_volume:/backup \
-e "VOLUMERIZE_SOURCE=/source" \
-e "VOLUMERIZE_TARGET=scp://me@serverip/home/backup" \
blacklabelops/volumerize

The option --ssl-cacert-file is only for host verification, not for authentication.
I have found this example of how to use a .pem key in an scp command:
scp -i /path/to/your/.pemkey -r /copy/from/path user@server:/copy/to/path
The parameter -i /path/to/your/.pemkey can be passed to blacklabelops/volumerize with the env variable VOLUMERIZE_DUPLICITY_OPTIONS.
Example:
$ docker run -d \
--name volumerize \
-v jenkins_volume:/source:ro \
-v backup_volume:/backup \
-e "VOLUMERIZE_SOURCE=/source" \
-e "VOLUMERIZE_TARGET=scp:///backup" \
-e 'VOLUMERIZE_DUPLICITY_OPTIONS=--ssh-options "-i /path/to/your/.pemkey"' \
blacklabelops/volumerize
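Note that /path/to/your/.pemkey must exist inside the container, so the key file has to be mounted in as well. A sketch, assuming the key sits on the host at ~/.ssh/backup.pem (a hypothetical path) and reusing the target from the question:
$ docker run -d \
--name volumerize \
-v jenkins_volume:/source:ro \
-v backup_volume:/backup \
-v ~/.ssh/backup.pem:/keys/backup.pem:ro \
-e "VOLUMERIZE_SOURCE=/source" \
-e "VOLUMERIZE_TARGET=scp://me@serverip/home/backup" \
-e 'VOLUMERIZE_DUPLICITY_OPTIONS=--ssh-options "-i /keys/backup.pem"' \
blacklabelops/volumerize
Mounting the key read-only (:ro) keeps the container from modifying it.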

Related

Docker openapi client generator can't find "spec file"

I have generated an OpenAPI JSON file, and I wish to create a TypeScript client using Docker.
I have tried to do something similar to what is on the OpenAPI Generator site (https://openapi-generator.tech/ - scroll down to the Docker part), but it doesn't work.
Command from site:
docker run --rm \
-v $PWD:/local openapitools/openapi-generator-cli generate \
-i /local/petstore.yaml \
-g go \
-o /local/out/go
What I have tried:
docker run --rm -v \
$PWD:/local openapitools/openapi-generator-cli generate -i ./openapi.json \
-g typescript-axios
No matter what I do, there is always a problem with the ./openapi.json file. The error which occurs:
[error] The spec file is not found: ./openapi.json
[error] Check the path of the OpenAPI spec and try again.
I have tried the things below:
-i ~/compass_backend/openapi.json
-i openapi.json
-i ./openapi.json
-i $PWD:/openapi.json
cat openapi.json | docker run .... (error, -i is required)
I am out of ideas. The error is always the same. What am I doing wrong?
I was able to solve the problem by switching from bash to PowerShell. Docker uses Windows path notation and I was trying to use bash notation. If you type pwd in bash you get this:
/c/Users/aniemirka/compass_backend
And if you type pwd in PowerShell you get this:
C:\Users\aniemirka\compass_backend
So docker was trying to mount a volume to /c/Users/aniemirka/compass_backend\local, which it couldn't read because that isn't Windows notation, so the volume didn't exist.
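Separately from the path-notation issue, note that the -i argument must be the path inside the container, because $PWD is mounted at /local. A sketch that should work from PowerShell (the -o output directory is my choice):
docker run --rm -v ${PWD}:/local openapitools/openapi-generator-cli generate -i /local/openapi.json -g typescript-axios -o /local/out/typescript-axios
Passing -i ./openapi.json fails even with a correct mount, because that path is resolved inside the container, where the working directory is not /local.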

How to update sonar.properties of a docker based installation

I am pretty new to Docker. Now I want to run a docker version of SonarQube and update the property file (sonar.properties) in order to point my database to MySQL rather than the default H2.
I am able to run the image with the default configuration and have even performed a scan with it. While following the instructions on its official docker page (Sonarqube docker documentation), I am not able to proceed beyond the second point under the "First Installation" heading. The second point is as follows:
Initialize the SONARQUBE_HOME folder tree with --init. This will initialize the default configuration, copy embedded plugins, and prepare the data folder:
$ docker run --rm \
-v $SONARQUBE_HOME/conf:/opt/sonarqube/conf \
-v $SONARQUBE_HOME/extensions:/opt/sonarqube/extensions \
-v $SONARQUBE_HOME/data:/opt/sonarqube/data \
sonarqube --init
which I believe will help me get a custom configuration folder. The following error shows up when running this command:
renju@renju-pc:~$ sudo docker run --rm \
-v $SONARQUBE_HOME/conf:/opt/sonarqube/conf \
-v $SONARQUBE_HOME/extensions:/opt/sonarqube/extensions \
-v $SONARQUBE_HOME/data:/opt/sonarqube/data \
sonarqube --init
tail: cannot open './logs/es.log' for reading: No such file or directory
01:33:11.950 [main] WARN org.sonar.application.config.AppSettingsLoaderImpl - Configuration file not found: /opt/sonarqube/conf/sonar.properties
Exception in thread "main" java.lang.IllegalArgumentException: Command-line argument must start with -D, for example -Dsonar.jdbc.username=sonar. Got: --init
at org.sonar.application.config.CommandLineParser.argumentsToProperties(CommandLineParser.java:56)
at org.sonar.application.config.CommandLineParser.parseArguments(CommandLineParser.java:37)
at org.sonar.application.config.AppSettingsLoaderImpl.load(AppSettingsLoaderImpl.java:66)
at org.sonar.application.App.start(App.java:51)
at org.sonar.application.App.main(App.java:98)
My assumption is that this is because the folder /opt/sonarqube/conf is unavailable.
Why is that folder missing? As per the doc:
Use bind-mounted folders
The images contain the SonarQube installation at /opt/sonarqube. You need to use bind-mounted folders to override selected files or directories:
/opt/sonarqube/conf: configuration files, such as sonar.properties
/opt/sonarqube/data: data files, such as the embedded H2 database and Elasticsearch indexes
/opt/sonarqube/logs: contains SonarQube logs about access, web process, CE process, Elasticsearch logs
/opt/sonarqube/extensions: plugins, such as language analyzers
Am I missing any intermediate steps here?
I work on Ubuntu 19.10.
You are not missing anything.
The current documentation on Docker Hub is for SonarQube 8. They are working on releasing documentation for SonarQube 7.
Please check the link below: https://github.com/SonarSource/docker-sonarqube/issues/340#issuecomment-553397995
Please follow the steps below.
Create volumes sonarqube_conf, sonarqube_data, sonarqube_logs, and sonarqube_extensions, and start the image with the following command. This will populate all the volumes (copying default plugins, creating the Elasticsearch data folder, and creating the sonar.properties configuration file). Watch the logs, and once the container is properly started, you can force-exit (Ctrl+C) and proceed to the next step.
$ docker run --rm \
-p 9000:9000 \
-v sonarqube_conf:/opt/sonarqube/conf \
-v sonarqube_extensions:/opt/sonarqube/extensions \
-v sonarqube_logs:/opt/sonarqube/logs \
-v sonarqube_data:/opt/sonarqube/data \
sonarqube
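The named volumes are created on first use, but you can also create them explicitly beforehand:
$ docker volume create sonarqube_conf
$ docker volume create sonarqube_extensions
$ docker volume create sonarqube_logs
$ docker volume create sonarqube_data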
Run the image with your JDBC username and password:
$ docker run -d --name sonarqube \
-p 9000:9000 \
-e sonar.jdbc.username=sonar \
-e sonar.jdbc.password=sonar \
-v sonarqube_conf:/opt/sonarqube/conf \
-v sonarqube_extensions:/opt/sonarqube/extensions \
-v sonarqube_logs:/opt/sonarqube/logs \
-v sonarqube_data:/opt/sonarqube/data \
sonarqube
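To point the instance at MySQL instead of the embedded H2 (the original goal), you would additionally pass the JDBC URL in the same property style. A sketch, where the host and database name are assumptions; note that MySQL support was removed in SonarQube 7.9, so this only applies to 7.8 and earlier:
$ docker run -d --name sonarqube \
-p 9000:9000 \
-e sonar.jdbc.username=sonar \
-e sonar.jdbc.password=sonar \
-e "sonar.jdbc.url=jdbc:mysql://mysql-host:3306/sonar?useUnicode=true&characterEncoding=utf8" \
-v sonarqube_conf:/opt/sonarqube/conf \
-v sonarqube_extensions:/opt/sonarqube/extensions \
-v sonarqube_logs:/opt/sonarqube/logs \
-v sonarqube_data:/opt/sonarqube/data \
sonarqube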

How to set a docker env file that is inside the image

I am a total docker newb, so sorry for that.
I have a stand-alone docker image (some Node app) that I want to run in different environments.
I want to set up the env file with docker run --env-file <path>.
However, I want to use the env files that are inside the image (so I can use a different file per environment), and not ones on the server, so the path would be inside the image.
Is there any way to do so? Perhaps something like cp (docker cp [OPTIONS] CONTAINER:<path>), but that doesn't seem to work.
What's the best practice here? Am I making sense?
Thanks!!
Docker bind mounts are a fairly effective way to inject configuration files like this into a running container. I would not try to describe every possible configuration in your built image; instead, let that be configuration that's pushed in from the host.
Pick some single specific file to hold the configuration. For the sake of argument, let's say it's /usr/src/app/env. Set up your application however it's built to read that file at startup time. Either make sure the application can still start up if the file is missing, or build your image with some file there with reasonable default settings.
Now when you run your container, it will always read settings from that known file, but you can specify a host file to be mounted there:
docker run -v $PWD/env.development:/usr/src/app/env myimage
Now you can locally have an env.development that specifies extended logging and a local database, and an env.production with minimal logging and pointing at your production database. If you set up a third environment (say a shared test database with some known data in it) you can just run the container with this new configuration, without rebuilding it.
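If the env files really must be baked into the image, a variant of the same idea is to select one at startup with a plain environment variable and source it before the app starts. A sketch, where APP_ENV, the env.* file paths, and server.js are all hypothetical names:
docker run -e APP_ENV=production myimage \
sh -c 'set -a && . /usr/src/app/env.$APP_ENV && exec node server.js'
Here -e picks which in-image file to use, set -a exports every variable the sourced file defines, and exec hands PID 1 over to the app.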
The following is the command to run docker:
docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
Example
docker run --name test -it debian
Focus on the following switches:
--env, -e: Set environment variables
--env-file: Read in a file of environment variables
You can pass environment variables to your containers with the -e flag. An example from a startup script:
sudo docker run -d -t -i -e REDIS_NAMESPACE='staging' \
-e POSTGRES_ENV_POSTGRES_PASSWORD='foo' \
-e POSTGRES_ENV_POSTGRES_USER='bar' \
-e POSTGRES_ENV_DB_NAME='mysite_staging' \
-e POSTGRES_PORT_5432_TCP_ADDR='docker-db-1.hidden.us-east-1.rds.amazonaws.com' \
-e SITE_URL='staging.mysite.com' \
-p 80:80 \
--link redis:redis \
--name container_name dockerhub_id/image_name
In case you have many environment variables, and especially if they're meant to be secret, you can use an env file:
$ docker run --env-file ./env.list ubuntu bash
The --env-file flag takes a filename as an argument and expects each line to be in the VAR=VAL format, mimicking the argument passed to --env. Comment lines need only be prefixed with #.
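For reference, a minimal env.list in that format (variable names borrowed from the startup script above) might look like:
# staging settings
REDIS_NAMESPACE=staging
POSTGRES_ENV_POSTGRES_USER=bar
SITE_URL=staging.mysite.com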

Customizing docker-compose.yml for images from the Docker Store

I'm new to docker and I'm currently experimenting with https://github.com/diginc/docker-pi-hole
It's pretty straightforward if I just imagine it as a light-weight VM. I've pulled the image using docker pull diginc/pi-hole and manually started it by doing:
docker run -d \
--name pi-hole \
-p 53:53/tcp \
-p 53:53/udp \
-p 8053:80 \
-e TZ=SG \
-v "/Users/me/pihole/:/etc/pihole/" \
-v "/Users/me/dnsmasq.d/:/etc/dnsmasq.d/" \
-e ServerIP="192.168.0.25" \
--restart=always \
diginc/pi-hole:alpine
Everything works well, but their documentation mentions using docker_run.sh. I have no idea where/how to execute this, and the authors also suggest using docker-compose, but after pulling the project I can't find the actual directory.
Where is the directory?
What's the typical way of customizing the compose.yml?
How do I run it after I've done my customization?
The docker_run.sh is in the repository:
https://github.com/diginc/docker-pi-hole/blob/master/docker_run.sh
Just use it.
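If you'd rather use docker-compose, a docker-compose.yml mirroring your docker run command above might look like this (a sketch; place it in any directory and run docker-compose up -d from there, so no special project directory is needed):
version: '2'
services:
  pi-hole:
    image: diginc/pi-hole:alpine
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "8053:80"
    environment:
      TZ: SG
      ServerIP: 192.168.0.25
    volumes:
      - /Users/me/pihole/:/etc/pihole/
      - /Users/me/dnsmasq.d/:/etc/dnsmasq.d/
    restart: always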

How to pass docker options --mac-address, -v etc in kubernetes?

I have installed a 50-node Kubernetes cluster in our lab and am beginning to test it. The problem I am facing is that I cannot find a way to pass the docker options needed to run my container in Kubernetes. I have looked at kubectl as well as the GUI. An example docker run command line is below:
sudo docker run -it --mac-address=$MAC_ADDRESS \
-e DISPLAY=$DISPLAY -e UID=$UID -e XAUTHORITY=$XAUTHORITY \
-e XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR \
-v /tmp/.X11-unix:/tmp/.X11-unix:ro \
-v /mnt/lab:/mnt/lab -v /mnt/stor01:/mnt/stor01 \
-v /mnt/stor02:/mnt/stor02 -v /mnt/stor03:/mnt/stor03 \
-v /mnt/scratch01:/mnt/scratch01 \
-v /mnt/scratch02:/mnt/scratch02 \
-v /mnt/scratch03:/mnt/scratch03 \
matlabpipeline $ARGS
My first question is whether we can pass these docker options or not? If there is a way to pass them, how do I do this?
Thanks...
I looked into this as well, and from the sounds of it this is an unsupported use case for Kubernetes. Applying a specific MAC address to a docker container seems to conflict with the overall design goal of easily bringing up replica instances. There are a few workarounds suggested in a Reddit thread on the topic; in particular, the OP finally decided the following:
I ended up adding the NET_ADMIN capability and changing the MAC to an environment variable with "ip link" in my entrypoint.sh.
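For the rest of the flags there are rough pod-spec equivalents: -e maps to env, -v maps to volumes plus volumeMounts (hostPath for host directories), and the NET_ADMIN workaround above maps to a securityContext capability. A sketch, with the image name and paths taken from the question and everything else illustrative (only two of the mounts are shown; the others follow the same pattern):
apiVersion: v1
kind: Pod
metadata:
  name: matlabpipeline
spec:
  containers:
  - name: matlabpipeline
    image: matlabpipeline
    env:
    - name: DISPLAY
      value: ":0"                  # illustrative value
    - name: MAC_ADDRESS
      value: "02:42:ac:11:00:02"   # consumed by entrypoint.sh via ip link, per the workaround
    securityContext:
      capabilities:
        add: ["NET_ADMIN"]
    volumeMounts:
    - name: x11
      mountPath: /tmp/.X11-unix
      readOnly: true
    - name: lab
      mountPath: /mnt/lab
  volumes:
  - name: x11
    hostPath:
      path: /tmp/.X11-unix
  - name: lab
    hostPath:
      path: /mnt/lab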
