I have limited knowledge of Docker, but this is what I have done: I installed Docker Desktop, pulled images for influxdb 1.8, grafana, and loadimpact/k6, and created containers for influxdb and grafana, which are running fine.
http://localhost:3000/ -> working
http://localhost:8086/ -> gives 404 page not found
I want to run my k6 script in the docker, save result in the influxdb and then use grafana to create custom dashboards based on data in influxdb.
When I give the following command from the command prompt, in the folder in which the K6 script is present:
docker run -v /k6 -i loadimpact/k6 run --out influxdb=http://localhost:8086/myk6db - <K6-script.js
I get the following error.
time="2021-10-16T10:09:58Z" level=error msg="The moduleSpecifier \"./libs/shim/core.js\" couldn't be found on local disk. Make sure that you've specified the right path to the file. If you're running k6 using the Docker image make sure you have mounted the local directory (-v /local/path/:/inside/docker/path) containing your script and modules so that they're accessible by k6 from inside of the container, see https://k6.io/docs/using-k6/modules#using-local-modules-with-docker.\n\tat reflect.methodValueCall (native)\n\tat file:///-:205:34(24)\n" hint="script exception"
In the folder in which K6-script.js is present, there are two more folders, K6 and libs, which are imported by K6-script.js.
Then I referred to https://k6.io/docs/using-k6/modules/#local-filesystem-modules and gave the following command:
docker run -v //c/loadtesting:/src -i loadimpact/k6 run --out influxdb=http://localhost:8086/myk6db K6-script.js
which gives me the following error.
level=error msg="The moduleSpecifier \"K6-script.js\" couldn't be found on local disk. Make sure that you've specified the right path to the file. If you're running k6 using the Docker image make sure you have mounted the local directory (-v /local/path/:/inside/docker/path) containing your script and modules so that they're accessible by k6 from inside of the container, see https://k6.io/docs/using-k6/modules#using-local-modules-with-docker. Additionally it was tried to be loaded as remote module by prepending \"https://\" to it, which also didn't work. Remote resolution error: \"Get \"https://K6-script.js\": dial tcp: lookup K6-script.js on 192.168.65.5:53: no such host\""
How do I resolve this error and run K6 script in the docker using influxdb?
After much trial and error, when I gave the following command, the test ran. It couldn't connect to the InfluxDB database, but that is another issue I need to resolve. Otherwise, the test ran.
docker run -v //c/loadtesting:/src -i loadimpact/k6 run --out influxdb=http://localhost:8086/myk6db /src/K6-script.js
I think it needed the path of the script as it appears inside the container (/src/K6-script.js) in order to run it.
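As for the remaining InfluxDB connection issue: inside the k6 container, localhost refers to the container itself, not to the machine running Docker Desktop, so the output URL most likely has to point at the host instead. A minimal sketch, assuming Docker Desktop (where the host is typically reachable as host.docker.internal) and the same C:\loadtesting layout as above:

docker run -v //c/loadtesting:/src -i loadimpact/k6 run --out influxdb=http://host.docker.internal:8086/myk6db /src/K6-script.js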
I have a linux vm on which I installed docker. I have several docker containers with the different programs I have to use. Here's my architecture:
Everything is working fine except for the red box.
What I am trying to do is dynamically provision a Jenkins docker-in-docker agent via the Docker cloud functionality, in order to build my docker images and push them to the docker registry I set up.
I have been looking for documentation to create a docker in docker container and I found this:
https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/
This article states that in order to avoid problems with my main docker installation I have to create a volume:
-v /var/run/docker.sock:/var/run/docker.sock
I tested my image locally and I have no problem running
docker run -d --name test -v /var/run/docker.sock:/var/run/docker.sock
docker exec -it test /bin/bash
docker run hello-world
The container is using the linux vm docker installation to build and run the docker images so everything is fine.
However, I face problems when it comes to the jenkins docker cloud configuration.
From what I gather, since the #826 build, the Docker Jenkins plugin has changed its syntax for volumes.
This is the configuration I tried:
And the error message I have when trying to launch the agent:
Reason: Template provisioning failed.
com.github.dockerjava.api.exception.BadRequestException: {"message":"create
/var/run/docker.sock: \"/var/run/docker.sock\" includes invalid characters for a local
volume name, only \"[a-zA-Z0-9][a-zA-Z0-9_.-]\" are allowed. If you intended to pass a
host directory, use absolute path"}
I also tried that configuration:
Reason: Template provisioning failed.
com.github.dockerjava.api.exception.BadRequestException: {"message":"invalid mount config for type \"volume\": invalid mount path: './var/run/docker.sock' mount path must be absolute"}
I do not get what that means, as on my linux vm the absolute path of docker.sock is /var/run/docker.sock, and it is the same path inside the docker-in-docker container I ran locally...
I tried to check the source code to find what I did wrong, but it's unclear to me what the code is doing (https://github.com/jenkinsci/docker-plugin/blob/master/src/main/java/com/nirima/jenkins/plugins/docker/DockerTemplateBase.java, from line 884 onward). I also tried with backslashes, etc. Nothing worked.
Has anyone any idea what is the expected syntax in that configuration panel for setting up a simple volume?
Change the configuration to this:
type=bind,source=/var/run/docker.sock,destination=/var/run/docker.sock
It is not a volume; it is a bind mount.
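For context, that mount string follows the same syntax as docker run --mount, so a rough CLI equivalent would be (the agent image name here is only a placeholder):

docker run --mount type=bind,source=/var/run/docker.sock,destination=/var/run/docker.sock jenkins/inbound-agent

which is what the plugin ends up asking the Docker daemon to do, instead of trying to create a named volume called "/var/run/docker.sock".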
This worked for me
type=bind,source=/sys/fs/cgroup,target=/sys/fs/cgroup,readonly
I've been trying to run a nextflow pipeline with a Docker image I've created on a server. I've tested this pipeline on my local client and it works fine, but trying to run it on a server (ArchLinux, docker version 18.09.6) gives me many different errors. The problem is that the pipeline requires a huge database (NCBI:nt, ~120 GB) as an "input" (it is only read, never modified). On the local client, I've used the temp flag for nextflow, which is equivalent to the --mount type=volume,src=<src_path>,target=/tmp flag, and that works perfectly. Once I've uploaded everything to the server, I get different problems. I've been accessing the server using ssh (Windows PowerShell and WSL2). I've tried using the following options:
Using --mount type=bind,src=<src_path>,target=/output:
I get the following error:
docker: Error response from daemon: invalid mount config for type "bind": bind source path does not exist: <src_path>/. The same occurs if many different flags (e.g. readonly) or different propagation forms are used.
Using -v <src_path>:/output: A different error is given:
docker: Error response from daemon: error while creating mount source path '<src_path>': mkdir /share/library: permission denied. I find this error quite unusual, since my user does have the permissions to create files and directories in the src_path. Is there any way of forcing docker to use the permissions of my user?
Is --mount or -v even the right way of accessing this database from within the container? Any help or idea is welcome, since nothing I've found so far has gotten me any further...
EDIT:
Rather than a "nextflow" question, it is more of a docker question, since running docker run <any_option_mentioned_above> <img_name>
returns the same errors.
This is the setup I've used to run nextflow processes inside docker containers:
In the main script:
process mount_example {
    label 'dockerised'
    containerOptions "-v /path/to/source:/path/to/target/inside/docker/"

    input:
    file foo from bar

    script:
    """
    ls /path/to/target/inside/docker/
    """
}
and in config:
docker {
    enabled = true
    registry = 'yourdockerregistry'
}

process {
    withLabel: dockerised {
        container = 'yourdockerimage'
    }
}
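With that in place, running the pipeline as usual should start the labelled process inside the container with the extra -v mount applied. Assuming the process above lives in main.nf and the settings in nextflow.config (the default file names), that is simply:

nextflow run main.nf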
One of the features of a -v host mount is that docker will create the source folder for you if it doesn't already exist. By contrast, when doing a bind mount, which is what this option is doing:
--mount type=bind,src=<src_path>,target=/output
docker will not create the directory for you and the mount will fail. To resolve this you can switch back to a -v host mount, or create the directory in advance.
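A minimal sketch of both workarounds; the path and image name here are placeholders for your actual source directory and pipeline image:

# create the source directory on the host first, then bind-mount it
mkdir -p /data/nt
docker run --mount type=bind,src=/data/nt,target=/output my-pipeline-image

# or use -v, which creates a missing source directory automatically
docker run -v /data/nt:/output my-pipeline-image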
Complete Docker noob here. I installed Docker Desktop on Windows and am trying to follow the commands on this link to set up the OSRM backend on my machine. I've downloaded the dataset for India (india-latest.osm.pbf) to D:/docker and am running the commands from that location.
docker run -t -v "${PWD}:/data" osrm/osrm-backend osrm-extract -p /opt/car.lua /data/india-latest.osm.pbf
fails with
[error] Input file /data/india-latest.osm.pbf not found!
I just don't understand WHY it doesn't work. According to the OSRM documentation of the docker command:
The file /data/india-latest.osm.pbf inside the container is referring
to "${PWD}/india-latest.osm.pbf" on the host.
But that's not the case. I am running from D:/docker, so it should find india-latest.osm.pbf with no problem. This is really confusing to me, even though it must be something basic.
It was due to a bug in Docker: https://github.com/docker/for-win/issues/1712
When you change your Windows password, commands that access the host filesystem silently fail until you reauthenticate.
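A quick way to confirm that drive sharing works again after reauthenticating is to list the mounted folder from a throwaway container (alpine is used here purely as an example image):

docker run --rm -v "${PWD}:/data" alpine ls /data

If the files show up there, the osrm-extract command above should be able to see india-latest.osm.pbf as well.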
I have two simple containers, web and db. I built and can successfully bring up the containers via docker-compose on both Windows and Ubuntu. However, when I attempt to bring them up on Photon OS, I get the following error for my web container.
Handler for POST /v1.21/containers/.../start returned error: Container command 'apache2-foreground' not found or does not exist.
But when I build the image based on the Dockerfile, and docker run web, it launches and runs fine. Any ideas about this error?
apache2-foreground is a command (script) that calls apache2 -DFOREGROUND (see the httpd/php repos/containers). It's the command automatically run by php/httpd containers.
If you run into a problem running a command from docker-compose that ordinarily runs fine with docker, then it could be a bug - see this for instance.
It could also be that you have bad paths in your docker-compose.yml volume mappings.
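One way to check (assuming a standard compose setup) is to print the fully resolved compose file and make sure the command the container needs actually exists in the image:

# show the resolved compose file, including absolute host paths for the volumes
docker-compose config
# check that the start command is present inside the image used by the web service
docker-compose run --rm web which apache2-foreground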
To create a Docker container in Bluemix, we need to install the container plug-in and container extension. After installing the container extension, Docker should be running, but it shows the following error:
[root@oc0608248400 Desktop]# cf ic login
** Retrieving client certificates from IBM Containers
** Storing client certificates in /root/.ice/certs
Successfully retrieved client certificates
** Checking local docker configuration
Not OK
Docker local daemon may not be running. You can still run IBM Containers on the cloud
There are two ways to use the CLI with IBM Containers:
Option 1) This option allows you to use `cf ic` for managing containers on IBM Containers while still using the docker CLI directly to manage your local docker host.
Leverage this Cloud Foundry IBM Containers plugin without affecting the local docker environment:
Example Usage:
cf ic ps
cf ic images
Option 2) Leverage the docker CLI directly. In this shell, override local docker environment to connect to IBM Containers by setting these variables, copy and paste the following:
Notice: only commands with an asterisk(*) are supported within this option
export DOCKER_HOST=tcp://containers-api.ng.bluemix.net:8443
export DOCKER_CERT_PATH=/root/.ice/certs
export DOCKER_TLS_VERIFY=1
Example Usage:
docker ps
docker images
exec: "docker": executable file not found in $PATH
Please suggest what I should do next.
The error is already telling you what to do:
exec: "docker": executable file not found in $PATH
means that it cannot find the docker executable.
Thus the following should tell you where it is located; that directory then needs to be appended to the PATH environment variable.
dockerpath=$(dirname `find / -name docker -type f -perm /a+x 2>/dev/null`)
export PATH="$PATH:$dockerpath"
What this does is search the filesystem from the root for a file named 'docker' that has the executable bit set (ignoring error messages), take the directory containing it as $dockerpath, and append that directory to PATH for the current shell session.
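To confirm the change took effect, something like this should now print the full path to the binary and the client version:

command -v docker
docker --version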
The problem seems to be that your docker daemon isn't running.
Try running:
sudo service docker restart
If you've just installed docker you may need to reboot your machine first.
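If it still doesn't start, checking the daemon's status first can help narrow things down; the exact command depends on the init system in use:

# SysV-style init
sudo service docker status
# systemd hosts
sudo systemctl status docker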