I have generated an OpenAPI JSON file, and I wish to create a TypeScript client using Docker.
I have tried to do something similar to what is on the OpenAPI Generator site (https://openapi-generator.tech/ - scroll down to the Docker part), but it doesn't work.
Command from site:
docker run --rm \
-v $PWD:/local openapitools/openapi-generator-cli generate \
-i /local/petstore.yaml \
-g go \
-o /local/out/go
What I have tried:
docker run --rm -v \
$PWD:/local openapitools/openapi-generator-cli generate -i ./openapi.json \
-g typescript-axios
No matter what I do, there is always a problem with the ./openapi.json file. The error that occurs:
[error] The spec file is not found: ./openapi.json
[error] Check the path of the OpenAPI spec and try again.
I have tried the things below:
-i ~/compass_backend/openapi.json
-i openapi.json
-i ./openapi.json
-i $PWD:/openapi.json
cat openapi.json | docker run .... (error, -i is required)
I am out of ideas. The error is always the same. What am I doing wrong?
I was able to solve the problem by switching from Bash to PowerShell. Docker on Windows uses Windows path notation, and I was trying to use Bash notation. If you type pwd in Bash you get this:
/c/Users/aniemirka/compass_backend
And if you type pwd in PowerShell you get this:
C:\Users\aniemirka\compass_backend
So Docker was trying to mount a volume at /c/Users/aniemirka/compass_backend\local, which it couldn't read because it is not Windows notation, so the volume didn't exist.
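For reference, a working invocation from PowerShell might look roughly like this (a sketch; the output directory is my choice, and the backtick is PowerShell's line continuation):
docker run --rm `
-v ${PWD}:/local openapitools/openapi-generator-cli generate `
-i /local/openapi.json `
-g typescript-axios `
-o /local/out/typescript-axios
Note that -i must reference the path inside the container (/local/openapi.json, via the mounted /local volume), not a host path like ./openapi.json, because the generator runs inside the container.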
I am new to Docker and am experimenting with getting a Firefox GUI up and running. The Dockerfile I have is:
FROM ubuntu:21.10
RUN apt-get update
RUN apt-get install -y firefox
RUN groupadd -g GID <USERNAME>
RUN useradd -d /home/<USERNAME> -s /bin/bash \
-m <USERNAME> -u UID -g GID
USER <USERNAME>
ENV HOME /home/<USERNAME>
CMD /usr/bin/firefox
...where UID is the user ID and GID is the group ID.
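For concreteness, here is a sketch of the same Dockerfile with the placeholders expressed as build args (the ARG names and default values are just examples, not from the question):
FROM ubuntu:21.10
RUN apt-get update && apt-get install -y firefox
# Example values; override with --build-arg at build time
ARG USERNAME=dev
ARG UID=1000
ARG GID=1000
RUN groupadd -g ${GID} ${USERNAME}
RUN useradd -d /home/${USERNAME} -s /bin/bash -m ${USERNAME} -u ${UID} -g ${GID}
USER ${USERNAME}
ENV HOME=/home/${USERNAME}
CMD ["/usr/bin/firefox"]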
I then build with:
$> docker build -t gui .
The image build completes successfully. Then I do:
$> docker run -v /tmp/.X11-unix:/tmp/.X11-unix \
-h $HOSTNAME -v $HOME/.Xauthority:/home/$USER/.Xauthority \
-e DISPLAY=$DISPLAY gui
At this point I get the error:
"docker: invalid reference format: repository name must be lowercase."
It's almost as if docker is trying to interpret the X server directory binding and display variable setting as a repository name.
What am I doing wrong?
Thanks in advance...
In fact, the error tells you that Docker cannot understand the reference, i.e. the name of the image you are trying to run, as David Maze explained in his comment. He proposed debugging it with an echo command.
If you follow his advice, you will see the command with the variables expanded and be able to see that it leads to the observed error when launched. To fix it, quote your variables (always good advice to prevent errors) and check that each variable is defined. In your case, a missing $HOSTNAME is my best guess. For example, with $HOSTNAME empty and the other variables expanded, the command becomes:
docker run -v /tmp/.X11-unix:/tmp/.X11-unix -h -v /Users/xxx/.Xauthority:/home/xxx/.Xauthority -e DISPLAY=/mydisplay gui
# docker: invalid reference format: repository name must be lowercase.
David Maze's comment gave the solution:
You're not quoting your environment variable expansions, so if $HOSTNAME is empty or $HOME contains a space you can get errors like this. Try putting the word echo in front of the command and seeing how the shell expands it.
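Applied to the command from the question, a quoted version would look like this (same paths as above):
docker run -v /tmp/.X11-unix:/tmp/.X11-unix \
-h "$HOSTNAME" \
-v "$HOME/.Xauthority:/home/$USER/.Xauthority" \
-e DISPLAY="$DISPLAY" gui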
I'm currently following this tutorial to run a model on Docker that was built using Google Cloud AutoML Vision:
https://cloud.google.com/vision/automl/docs/containers-gcs-tutorial
I'm having trouble running the container, specifically running this command:
sudo docker run --rm --name ${CONTAINER_NAME} -p ${PORT}:8501 -v ${YOUR_MODEL_PATH}:/tmp/mounted_model/0001 -t ${CPU_DOCKER_GCR_PATH}
I have my environment variables set up right (checked with echo $<env_var>). I do not have a /tmp/mounted_model/0001 directory on my local system. My model path is configured to be the model's location in Cloud Storage.
${YOUR_MODEL_PATH} must be a directory on the host on which you're running the container.
Your question suggests that you're using the Cloud Storage bucket path but you cannot do this.
Reviewing the tutorial, I think the instructions are confusing.
You are told to:
gsutil cp \
${YOUR_MODEL_PATH} \
${YOUR_LOCAL_MODEL_PATH}/saved_model.pb
So, your command should probably be:
sudo docker run \
--rm \
--interactive --tty \
--name=${CONTAINER_NAME} \
--publish=${PORT}:8501 \
--volume=${YOUR_LOCAL_MODEL_PATH}:/tmp/mounted_model/0001 \
${CPU_DOCKER_GCR_PATH}
NB I added --interactive --tty to make debugging easier; they're optional.
NB Use ${YOUR_LOCAL_MODEL_PATH}, not ${YOUR_MODEL_PATH}.
NB The last argument should be ${CPU_DOCKER_GCR_PATH} by itself; omit the -t (it's Docker's --tty flag, already written out above).
I've not run through this tutorial.
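For completeness, a sketch of the shell variables the command expects (all values below are placeholders; take the real image path and port from the tutorial):
export CONTAINER_NAME="automl_model"
export PORT=8501
# Local directory that holds the saved_model.pb downloaded with gsutil
export YOUR_LOCAL_MODEL_PATH="${HOME}/automl/model"
# Placeholder; use the CPU serving image path given in the tutorial
export CPU_DOCKER_GCR_PATH="gcr.io/example/automl-vision-cpu:latest"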
I am pretty new to Docker. I want to run a Docker version of SonarQube and update the property file (sonar.properties) in order to point my database to MySQL rather than the default H2.
I am able to run the image with the default configuration and have even performed a scan with it. While following the instructions on its official Docker page (SonarQube Docker documentation), I am not able to proceed past the second point under the "First Installation" heading. The second point is as follows:
Initialize SONARQUBE_HOME folder tree with --init. This will initialize the default configuration, copy embedded plugins, and prepare the data folder:
$ docker run --rm \
-v $SONARQUBE_HOME/conf:/opt/sonarqube/conf \
-v $SONARQUBE_HOME/extensions:/opt/sonarqube/extensions \
-v $SONARQUBE_HOME/data:/opt/sonarqube/data \
sonarqube --init
which I believe will give me a custom configuration folder. The following error shows up while running this command:
renju@renju-pc:~$ sudo docker run --rm \
-v $SONARQUBE_HOME/conf:/opt/sonarqube/conf \
-v $SONARQUBE_HOME/extensions:/opt/sonarqube/extensions \
-v $SONARQUBE_HOME/data:/opt/sonarqube/data \
sonarqube --init
tail: cannot open './logs/es.log' for reading: No such file or directory
01:33:11.950 [main] WARN org.sonar.application.config.AppSettingsLoaderImpl - Configuration file not found: /opt/sonarqube/conf/sonar.properties
Exception in thread "main" java.lang.IllegalArgumentException: Command-line argument must start with -D, for example -Dsonar.jdbc.username=sonar. Got: --init
at org.sonar.application.config.CommandLineParser.argumentsToProperties(CommandLineParser.java:56)
at org.sonar.application.config.CommandLineParser.parseArguments(CommandLineParser.java:37)
at org.sonar.application.config.AppSettingsLoaderImpl.load(AppSettingsLoaderImpl.java:66)
at org.sonar.application.App.start(App.java:51)
at org.sonar.application.App.main(App.java:98)
My assumption is that this is because the folder /opt/sonarqube/conf doesn't exist.
Why is that folder missing? As per the doc:
Use bind-mounted folders
The images contain the SonarQube installation at /opt/sonarqube. You need to use bind-mounted folders to override selected files or directories:
/opt/sonarqube/conf: configuration files, such as sonar.properties
/opt/sonarqube/data: data files, such as the embedded H2 database and Elasticsearch indexes
/opt/sonarqube/logs: contains SonarQube logs about access, web process, CE process, Elasticsearch logs
/opt/sonarqube/extensions: plugins, such as language analyzers
Am I missing any intermediate steps here?
I work on Ubuntu 19.10.
You are not missing anything.
The current documentation on Docker Hub is for SonarQube 8. They are working on releasing documentation for SonarQube 7.
Please check the below link: https://github.com/SonarSource/docker-sonarqube/issues/340#issuecomment-553397995
Please follow the steps below.
Create the volumes sonarqube_conf, sonarqube_data, sonarqube_logs, and sonarqube_extensions, then start the image with the following command. This will populate all the volumes (copying default plugins, creating the Elasticsearch data folder, and creating the sonar.properties configuration file). Watch the logs, and, once the container has started properly, you can force-exit (Ctrl+C) and proceed to the next step.
$ docker run --rm \
-p 9000:9000 \
-v sonarqube_conf:/opt/sonarqube/conf \
-v sonarqube_extensions:/opt/sonarqube/extensions \
-v sonarqube_logs:/opt/sonarqube/logs \
-v sonarqube_data:/opt/sonarqube/data \
sonarqube
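Named volumes are created automatically the first time docker run references them, but you can also create them explicitly beforehand:
docker volume create sonarqube_conf
docker volume create sonarqube_extensions
docker volume create sonarqube_logs
docker volume create sonarqube_data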
Then run the image with your JDBC username and password:
$ docker run -d --name sonarqube \
-p 9000:9000 \
-e sonar.jdbc.username=sonar \
-e sonar.jdbc.password=sonar \
-v sonarqube_conf:/opt/sonarqube/conf \
-v sonarqube_extensions:/opt/sonarqube/extensions \
-v sonarqube_logs:/opt/sonarqube/logs \
-v sonarqube_data:/opt/sonarqube/data \
sonarqube
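Once started, you can follow the logs to confirm that SonarQube comes up and connects to your database:
docker logs -f sonarqube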
I've been given a Docker container which is run via a bash script. The container should set up a PHP web app; it then goes on to call other scripts and containers. It seems to work fine for others, but for me it's throwing an error.
This is the code
sudo docker run -d \
--name eluci \
-v ./config/eluci.settings:/mnt/eluci.settings \
-v ./config/elucid.log4j.settings.xml:/mnt/eluci.log4j.settings.xml \
--link eluci-database:eluci-database \
/opt/eluci/run_eluci.sh
This is the error
docker: Error response from daemon: create ./config/eluci.settings: "./config/eluci.settings" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If you intended to pass a host directory, use absolute path.
I'm running Docker on a CentOS VM using VirtualBox on a Windows 7 host.
From googling, it seems to be something to do with the mount; however, I don't want to change it in case the setting breaks something or is relied upon by another Docker container. I still have a few more bash scripts to run, which should orchestrate the rest of the build process. As a complete newb to Docker, this has got me stumped.
The -v option of docker run does not accept relative paths; you must provide an absolute path. The command can be rewritten as:
sudo docker run -d \
--name eluci \
-v "/$(pwd)/config/eluci.settings:/mnt/eluci.settings" \
-v "/$(pwd)/config/elucid.log4j.settings.xml:/mnt/eluci.log4j.settings.xml" \
--link eluci-database:eluci-database \
/opt/eluci/run_eluci.sh
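To verify that the bind mounts resolved to absolute host paths, you can inspect the running container:
docker inspect eluci --format '{{ json .Mounts }}'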
I have a couple of Docker volumes I want to back up onto another server using scp/sftp. I didn't know how to deal with that, so I decided to have a look at the blacklabelops/volumerize GitHub project.
This tool is based on the command-line tool Duplicity, dockerized and parameterized for easier use and configuration. The tutorial deals with a Jenkins container, but I don't understand how to specify that I want to use a .pem file.
I've tried different solutions (adding the -i option to the scp command line) without any success so far.
The Duplicity man page mentions the use of CA certificate .pem files (the --ssl-cacert-file option), but I suppose I have to create an env variable when running the container (with the -e option), and I don't know which name to use.
Here is what I have so far; can someone please point me in the right direction?
docker run -d --name volumerize \
-v jenkins_volume:/source:ro \
-v backup_volume:/backup \
-e "VOLUMERIZE_SOURCE=/source" \
-e "VOLUMERIZE_TARGET=scp://me@serverip/home/backup" \
blacklabelops/volumerize
The option --ssl-cacert-file is only for host verification, not for authentication.
I have found this example of how to add a .pem key to an scp command:
scp -i /path/to/your/.pemkey -r /copy/from/path user@server:/copy/to/path
The parameter -i /path/to/your/.pemkey can be passed to blacklabelops/volumerize with the env variable VOLUMERIZE_DUPLICITY_OPTIONS.
Example:
$ docker run -d \
--name volumerize \
-v jenkins_volume:/source:ro \
-v backup_volume:/backup \
-e "VOLUMERIZE_SOURCE=/source" \
-e "VOLUMERIZE_TARGET=scp:///backup" \
-e 'VOLUMERIZE_DUPLICITY_OPTIONS=--ssh-options "-i /path/to/your/.pemkey"' \
blacklabelops/volumerize
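If your image version provides the manual trigger described in the volumerize README (worth verifying for your tag), you can then test the configuration with a one-off backup:
docker exec volumerize backup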