Jenkins not recognizing "docker" command on Windows 7

I have installed Jenkins and Docker Toolbox on the same machine, running Windows 7.
While running a Jenkins build, all the commands work fine except docker.
When I try to run the docker command in a build step, Jenkins gives me this error:
E:\Jenkins\workspace\docker-app>docker build -t docker-app.
'docker' is not recognized as an internal or external command,
operable program or batch file.
But the same command works fine from the Windows command prompt.
Any help would be much appreciated.

I had exactly the same issue until I added the Docker path to the system PATH variable.

Add a PATH command to your Jenkins job and make sure it includes the Docker directory.
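For example, in an "Execute Windows batch command" build step (the Docker Toolbox install path below is an assumption; adjust it to your machine):

REM Prepend the Docker Toolbox folder to PATH for this build only
set PATH=C:\Program Files\Docker Toolbox;%PATH%
docker build -t docker-app .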

As per your description it seems that:
You have a Windows 7 machine with Docker Toolbox installed.
You are running Jenkins inside one of the containers?
If yes, then you won't be able to run docker commands from the Jenkins box.
That is because you are running Jenkins inside a Docker container and Docker is not installed in that container; that is why it throws 'docker' is not recognized as an internal or external command, operable program or batch file, and the error is correct.
To get this working you need to install Docker inside your Docker container; that concept is called "Docker-in-Docker".
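As a lighter-weight alternative to full Docker-in-Docker, a common pattern on Linux hosts is to mount the host's Docker socket into the Jenkins container instead of running a second daemon. A minimal sketch (the image tag is an assumption, and the image must still contain the docker client binary):

docker run -d --name jenkins -v /var/run/docker.sock:/var/run/docker.sock jenkins/jenkins:lts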
If you need any help/clarification regarding this then please let me know.

I came across the same issue some time back; hope this helps anyone down the line.
Even adding Docker Toolbox to the environment variables didn't work for me.
This is what I did:
1) go to Jenkins --> Manage Jenkins --> Configure System
2) go to the Global properties section
3) add the following environment variables
a) DOCKER_CERT_PATH = C:\Users\%USER%\.docker\machines\default
b) DOCKER_HOST = tcp://192.168.99.XX:2376 ( it might be different in your case )
c) DOCKER_MACHINE_NAME = default
d) DOCKER_TLS_VERIFY = 1
if the problem still persists after the above changes
4) add the Git binary path to the system PATH environment variable
a) in my case it was C:\Program Files\Git\usr\bin
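If you are unsure which values to use, Docker Toolbox's docker-machine prints exactly these variables; running it in an "Execute Windows batch command" build step is a quick sanity check (machine name "default" assumed):

docker-machine env default
docker version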

Related

How to solve "Can't separate key from value" in docker

After upgrading Docker Desktop, I get an error when running docker-compose up. My usual setup consists of microservices controlled with the docker-compose command. When I run docker-compose up, all the containers are started. After updating Docker Desktop, I get:
Can't separate key from value
while running docker-compose up. How do I fix this?
Check the version number of Docker Desktop; if it's 3.4+, then Docker Compose V2 is enabled by default. To disable it, go to Docker Desktop > Preferences > Experimental Features > uncheck the "Use Docker Compose V2" option. This is a move by Docker to incorporate docker-compose as docker compose and may cause problems for your usual workflow. Enjoy :)
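To confirm which Compose you are actually invoking, check the version string; a 2.x number means Compose V2 is active:

docker-compose version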
Just in case anyone else (like me) runs into this issue:
Check the .env file local to your docker-compose.yaml: is there a VARIABLE without a mapping? If so, remove it or give it a value.
More specifically:
MY_VAR= // works fine
MY_VAR2 // fails
; MY VAR // also fails
; MY_VAR= // works, but fails later with an actually useful msg
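A quick way to spot offending lines, assuming Git Bash or any POSIX shell, is to list every line of .env that has no '=' (lines starting with # in the output are just comments and are harmless):

grep -n -v '=' .env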

"Building Image" task hangs in VS Code Dev Container when using a large directory

I'm using Visual Studio Code on a Windows machine. I'm trying to set up a Python Dev Container using a directory that contains a large set of CSV files (about 200GB). When I click to launch the remote container in Visual Studio Code, the application hangs with the message "Starting Dev Container (show log): Building image".
I've been looking through the docs, and having read the Advanced Container Configuration I've tried modifying the devcontainer.json file by adding workspaceMount and workspaceFolder entries:
"workspaceMount" : "source=//c/path/to/folder,target=/workspace,type=bind,consistency=delegated"
"workspaceFolder" : "/workspace"
But to no avail. Is there a solution to launching Dev Containers on Windows using folders which contain large files?
I had a slightly different problem, but the solution might help you or someone else. I was trying to run docker-compose inside a docker-in-docker image (provided by vscode). In my case, my container was able to start, but nothing inside the container was able to run.
To solve my issue, I updated vscode, and now there is a new option Remote-Containers: Clone Repository in Container Volume.... If your code is a git repo, you can do this:
(Steps #1 through #3: the original answer illustrated these with screenshots of the Remote-Containers: Clone Repository in Container Volume... command.)
Follow the steps vscode gives you and you should have your repository in the container as a volume. It reduced my build times from about 30 minutes to 3 minutes (within the running container), because I brought stuff into the container after it was up and running.
Assuming the 200GB is ignored by your .gitignore, what you could try is this: once the container has started, copy the 200GB worth of CSV files into the container. I thought this would help because I did a similar thing by bringing in all my node_modules after running the container.
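For reference, copying a folder into a running container can be done with docker cp; the container name and paths below are placeholders:

docker cp C:\data\csv-files my-dev-container:/workspace/data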

Running Actinia on docker container

I have recently heard about Actinia and I would like to try it out (I am a remote sensing analyst; I am not used to using the command line).
I use Windows 10. I have cloned Actinia from GitHub and am trying to run it in a Docker container. I switched my Windows containers to Linux containers. Once I type the following into Git Bash:
docker-compose build --pull
it stops at step 16/49 while trying to connect to GRASS GIS. It keeps hitting the same problem:
GRASS GIS libgis version and date number not available
ERROR: Cannot open URL:
followed by the URL it is trying to connect to.
So I wonder if there is a configuration step I am missing.
Source: https://github.com/mundialis/actinia_core/tree/master/docker

How to execute a jmeter jmx file from a standard docker container?

I'd like to pull down a standard Docker container and then issue it a command that will read and execute a .jmx test file from the current folder (or a specified path) and drop the results into the same folder (or another specified path/filename). Bonus points if the stdout from JMeter's console app comes through from the docker run command.
I've been looking into this for quite some time, and the solutions I've found are way more complex than I'd like. Some require that I create my own Dockerfile and build my own image. Others require that I first set up a Docker volume on my machine and then use that as part of the command. Still others want to run fairly lengthy bash shell scripts. I'm running on Windows and would prefer something that just works with the standard Docker CLI in any Windows prompt (it should work from cmd, PowerShell, or bash, not just one of them).
My end goal is to test some APIs using JMeter tests that already exist. The APIs are running in another locally running container that I can reach at a given path and port. I want to be able to run these tests from any machine without first having to install Java and JMeter.
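For anyone sketching this out: with a community-built image such as justb4/jmeter (the image choice and file names here are assumptions, not a vetted recommendation), the shape of the command would be roughly:

docker run --rm -v "%CD%":/test -w /test justb4/jmeter -n -t test.jmx -l results.jtl

Replace %CD% with ${PWD} in PowerShell or $(pwd) in bash. The -n, -t, and -l flags are standard JMeter non-GUI options, and docker run streams JMeter's console output to stdout.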

Docker-Compose volume mounting from Windows to a Linux container makes everything executable

I'm working on some Ansible stuff that we have set up in a Docker container. When run from a Linux system it works great. When run from a Windows system I get the following error:
ERROR! Problem running vault password script /etc/ansible-deployment/secrets/vault-dev.txt ([Errno 8] Exec format error). If this is not a script, remove the executable bit from the file.
Basically, what this is saying is that the file is marked as executable. What I've noticed (and it hasn't been a huge problem until now) is that all files mounted into a Linux container from Windows are ALWAYS tagged with the executable attribute.
Is there any way to control/prevent this?
Did you try adding :ro at the end of the mounted path?
Something like this:
HOST:CONTAINER:ro
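In a docker-compose.yml that would look something like this (paths borrowed from the error message above):

volumes:
  - ./secrets/vault-dev.txt:/etc/ansible-deployment/secrets/vault-dev.txt:ro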
This is a limitation of the SMB-based approach that Docker for Windows uses for making host-mounted volumes work, see here
To solve the executable bit error, I ended up passing Ansible a python script as the --vault-password-file argument as a workaround, see here.
#!/usr/bin/env python
# Print the vault password so Ansible can read it from this script's stdout.
with open('PATH_TO_YOUR_VAULT_PASSWORD_FILE') as vault_password:
    print(vault_password.read())
Since the python script is executed in the container, the vault password file path needs to be accessible in the container - I'm mounting it as a volume, but you can also build it into your image. The latter is a security risk and is not recommended.
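Then point Ansible at the script instead of the plain-text file; the playbook name and script path here are placeholders:

ansible-playbook --vault-password-file /etc/ansible-deployment/vault_pass.py playbook.yml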
