Cloc doesn't do a recursive search in Docker container

When I run cloc inside a Docker container, it does not seem to search the given directories recursively, unlike when I run it stand-alone.
Dockerfile:
FROM python:3.6.2-alpine3.6
VOLUME "/data"
WORKDIR /data
RUN apk --no-cache add cloc=1.72-r2
Running cloc without the docker container I get the following:
cloc src\main\java\ --by-file --unix --report-file=temp.csv
19 text files.
19 unique files.
12 files ignored.
Wrote temp.csv
When running it with the docker container the following happens:
docker run --rm -it -v C:\repos\code-repository\:/data cloc-image cloc src/main/java --by-file --unix --report-file=/data/temp2.csv
0 text files.
0 unique files.
2 files ignored.
Any ideas? I have:
Checked the rights of the user in the docker container (root).
Using ash, I checked inside the container that the volume was correctly mapped; all files were present.
Checked if the version of cloc inside the container was indeed the same as the local installation (both 1.72).
EDIT 1:
Interesting finding: this behaviour only shows up on Windows; the same Dockerfile/container works fine on a Linux machine.

I encountered the same problem in a debian:stretch-based container running on Docker for Windows, where the volume cloc was scanning was a bind-mounted local directory. The solution was to add --follow-links to cloc:
$ cloc .
# Only returned results of top-level directory
Fix:
$ cloc --follow-links .
# Included nested files
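One plausible explanation (an assumption, not something confirmed in this thread) is that Docker for Windows can expose bind-mounted files and directories as symbolic links, which cloc ignores by default. You can check from inside the container:
ls -l /data/src/main
# entries printed as "lrwxrwxrwx ... name -> target" are symlinks;
# cloc only descends into them when run with --follow-links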

Related

Cypress in docker can't find cypress.json file

I'm struggling to test my app with Cypress in Docker. I use the dedicated Docker image with this command: docker run -it -v $PWD:/e2e -w /e2e cypress/included:8.7.0
I always get this error when I launch it: `Could not find a Cypress configuration file, exiting.
We looked but did not find a default config file in this folder: /e2e`
Meaning that Cypress can't find the cypress.json, but it is precisely in the dedicated folder. Here is my directory/file tree:
pace
└── front
    ├── cypress
    └── cypress.json
So this is a standard file tree for e2e testing, and despite all of my tricks (not using $PWD but the full directory path, reinstalling Docker, trying the colima engine, etc.) nothing works. If I run npm run cypress locally, everything works just fine!
Needless to say, I am in the /pace/front directory when I'm trying these commands.
Can you help me please?
The -v $PWD:/e2e is a Docker instruction to mount a volume (a bind mount). It mounts the current directory to /e2e inside the Docker container at runtime.
The docs mention a structure where the cypress.json file is expected to end up directly under /e2e. To get it to be like that, you have to either:
use -v $PWD/pace/front:/e2e, or
run the command from inside the pace/front directory.
Since the CMD and ENTRYPOINT commands in Docker run from the WORKDIR, you could also try running it from where you were but changing the workdir with:
-w /e2e/pace/front
I have not seen their Dockerfile, but my assumption is that this would work.
My personal choice would be to just run it from pace/front
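Putting the first option together, a sketch of the full command (assuming it is run from the directory that contains pace):
docker run -it -v $PWD/pace/front:/e2e -w /e2e cypress/included:8.7.0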

Docker -v command wipes the container

I am creating a Docker container that will run a Minecraft server. (Yes, I know these already exist.) And of course I want the world to be saved when the container is turned off.
This is my dockerfile:
FROM anapsix/alpine-java
COPY ./ /home
CMD ["java","-jar","/home/main.jar"]
EXPOSE 25565
Then I build the image:
docker build -t minecraftdev .
Run the container:
docker run -dp 25565:25565 -v C:/Users/user/server:/home minecraftdev
And then the files in the image (server.properties, the server jar file, and EULA.txt) are wiped.
Is there another way I don't know of to get the container to store data? And this is without placing the files in the server folder.
Thank you for your answers. I was able to fix it with -v C:/Users/user/server/world:/home/world, since the world files are stored in that folder, instead of swapping out all the files in the folder, which I didn't know -v did.
Minecraft creates the server.jar file, and I don't know how to change it so it stores all the files in another place.
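For reference, a bind mount hides whatever the image has at the target path, which is why mounting C:/Users/user/server over /home wiped the copied files. Mounting only the world subdirectory, as in the fix above, keeps the image's server files visible:
docker run -dp 25565:25565 -v C:/Users/user/server/world:/home/world minecraftdev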

LUIS mount points

I am trying to use a custom Dockerfile to build the LUIS container and copy the app file (app exported from the Luis portal) into the container. For this reason, I really don't need the mount points, since the .gz file will already live in the container. Is this possible? It seems that the mount points are always required...
I have to copy the files into the container and then move them to the input location at runtime (using an init.sh script). But even then, the container did not seem to load the app correctly. It behaves differently in that scenario compared to just putting the file in a host folder and mounting that to the container.
UPDATE: When I try to move the files around internally (at the start of the container), LUIS gives this output:
Using '/input' for reading models and other read-only data.
Using '/output/luis/fbfb798892fd' for writing logs and other output data.
Logging to console.
Submitting metering to 'https://southcentralus.api.cognitive.microsoft.com/'.
warn: Microsoft.AspNetCore.Server.Kestrel[0]
Overriding address(es) 'http://+:80'. Binding to endpoints defined in UseKestrel() instead.
Hosting environment: Production
Content root path: /app
Now listening on: http://0.0.0.0:5000
Application started. Press Ctrl+C to shut down.
fail: Luis[0]
Failed while prefetching App: AppId: d6fa5fd3-c32a-44d5-bb7f-d563775cf6ee - Slot: PRODUCTION Could not find file '/input/d6fa5fd3-c32a-44d5-bb7f-d563775cf6ee_PRODUCTION.gz'.
fail: Luis[0]
Failed while getting response for AppId: d6fa5fd3-c32a-44d5-bb7f-d563775cf6ee - Slot: PRODUCTION. Error: Could not find file '/input/d6fa5fd3-c32a-44d5-bb7f-d563775cf6ee_PRODUCTION.gz'.
warn: Microsoft.CloudAI.Containers.Controllers.LuisControllerV3[0]
Response status code: 404
Exception: Could not find file '/input/d6fa5fd3-c32a-44d5-bb7f-d563775cf6ee_PRODUCTION.gz'. SubscriptionId='' RequestId='d7dfee25-05d9-4af6-804d-58558f55465e' Timestamp=''
^C
nuc@nuc-NUC8i7BEK:/tmp/input$ sudo docker exec -it luis bash
root@fbfb798892fd:/app# cd /input
root@fbfb798892fd:/input# ls
d6fa5fd3-c32a-44d5-bb7f-d563775cf6ee_production.gz
root@fbfb798892fd:/input# ls -l
total 8
-rwxrwxrwx 1 root root 4960 Dec 2 17:35 d6fa5fd3-c32a-44d5-bb7f-d563775cf6ee_production.gz
root@fbfb798892fd:/input#
Notice that even though I can log into the container and see that the model files are present at the expected location, LUIS cannot load/find them.
UPDATE #2 - posting my Dockerfile:
FROM mcr.microsoft.com/azure-cognitive-services/luis:latest
ENV Eula=accept
ENV Billing=https://southcentralus.api.cognitive.microsoft.com/
ENV ApiKey=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
ENV Logging:Console:LogLevel:Default=Debug
RUN mkdir /app/inputfiles/
RUN chmod 777 /app/inputfiles/
COPY *.gz /app/inputfiles/
WORKDIR /app
COPY init.sh .
RUN chmod 777 /app/init.sh
ENTRYPOINT /app/init.sh && dotnet Microsoft.CloudAI.Containers.Luis.dll
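The init.sh above is not shown in the question; a minimal hypothetical version, assuming it only stages the baked-in models where LUIS reads them, might look like this:
#!/bin/sh
# hypothetical init.sh: stage the baked-in models in the default input location
mkdir -p /input
cp /app/inputfiles/*.gz /input/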
Option 1
The models can be COPY'd directly into /input/.
e.g.
FROM mcr.microsoft.com/azure-cognitive-services/luis:latest
COPY *.gz /input/
This will work, but requires that you don't mount to /input at runtime as it will squash the COPY'd files. The message "A folder must be mounted" is only logged if the /input directory does not exist.
> docker build . -t luis --no-cache
Sending build context to Docker daemon 40.43MB
Step 1/2 : FROM aicpppe.azurecr.io/microsoft/cognitive-services-luis
---> df4e32e45b1e
Step 2/2 : COPY ./*.gz /input/
---> c5f41a9d8522
Successfully built c5f41a9d8522
Successfully tagged luis:latest
> docker run --rm -it -p 5000:5000 luis eula=accept billing=*** apikey=***
...
Using '/input' for reading models and other read-only data.
...
Application started. Press Ctrl+C to shut down.
Option 2
The configuration value Mounts:Input can be set to configure the input location.
This might be useful if you need your models to live in /app/inputfiles or if you need to mount to /input for another reason at runtime.
e.g.
FROM aicpppe.azurecr.io/microsoft/cognitive-services-luis
ENV Mounts:Input=/app/inputfiles
COPY ./*.gz /app/inputfiles/
This results in:
> docker build . -t luis --no-cache
Sending build context to Docker daemon 40.43MB
Step 1/3 : FROM aicpppe.azurecr.io/microsoft/cognitive-services-luis
---> df4e32e45b1e
Step 2/3 : ENV Mounts:Input=/app/inputfiles
---> Running in b6029a2b54d0
Removing intermediate container b6029a2b54d0
---> cb9a4e06463b
Step 3/3 : COPY ./*.gz /app/inputfiles/
---> 9ab1dfaa36e7
Successfully built 9ab1dfaa36e7
Successfully tagged luis:latest
> docker run --rm -it -p 5000:5000 luis eula=accept billing=*** apikey=***
...
Using '/app/inputfiles' for reading models and other read-only data.
...
Application started. Press Ctrl+C to shut down.
It's true that the input mount won't be necessary if your .gz file is already in the image, but the output mount is used for logging and you may still want that for active learning purposes.
To build your desired image, create a text file named Dockerfile (no extension) and populate it with the following lines:
FROM mcr.microsoft.com/azure-cognitive-services/luis:latest
ENV Eula=accept
ENV Billing={ENDPOINT_URI}
ENV ApiKey={API_KEY}
COPY ./{luisAppId}_PRODUCTION.gz /input/{luisAppId}_PRODUCTION.gz
You can find your {ENDPOINT_URI} and {API_KEY} using the normal LUIS container instructions, and {luisAppId} is part of the name of your .gz file. Once your Dockerfile is ready, run it from the same folder that contains your .gz file with this command:
docker build -t luis .
Your image will now be ready. All your teammate has to do is run this command:
docker run --rm -it -p 5000:5000 \
  --memory 4g \
  --cpus 2 \
  --mount type=bind,src={OUTPUT_FOLDER},target=/output luis
{OUTPUT_FOLDER} can be any local absolute path you want as long as it exists. You may also omit the output mount if you don't want any logging:
docker run --rm -it -p 5000:5000 --memory 4g --cpus 2 luis

Copy entire directory from container to host

I'm trying to copy an entire directory from my Docker container to my local machine.
The image is a keycloak image, and I'd like to copy the themes folder so I can work on a custom theme.
I am running the following command -
docker cp 143v73628670f:keycloak/themes ~/Development/Code/Git/keycloak-recognition-login-branding
However I am getting the following response -
Error response from daemon: Could not find the file keycloak/themes in container 143v73628670f
When I connect to my container using -
docker exec -t -i 143v73628670f /bin/bash
I can navigate to the themes by using -
cd keycloak/themes/
I can see it is located there and the files are as expected in the terminal.
I'm running the instance locally on a Mac.
How do I copy that entire themes folder to my local machine? What am I doing wrong please?
EDIT
Based on the result of running pwd, you should run the docker cp command as follows:
docker cp 143v73628670f:/opt/jboss/keycloak/themes ~/Development/Code/Git/keycloak-recognition-login-branding
You are forgetting the trailing ' / '. Therefore your command should look like this:
docker cp 143v73628670f:/keycloak/themes/ ~/Development/Code/Git/keycloak-recognition-login-branding
Also, you could make use of Docker volumes, which allow you to pass a local directory into the container when you run it.
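For example, a sketch of theme development with a bind mount, assuming the /opt/jboss/keycloak/themes path from the edit above and the jboss/keycloak image (adjust both to your setup):
docker run -p 8080:8080 \
  -v ~/Development/Code/Git/keycloak-recognition-login-branding:/opt/jboss/keycloak/themes/custom \
  jboss/keycloak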

docker: executable file not found in $PATH

I have a docker image which installs grunt, but when I try to run it, I get an error:
Error response from daemon: Cannot start container foo_1: \
exec: "grunt serve": executable file not found in $PATH
If I run bash in interactive mode, grunt is available.
What am I doing wrong?
Here is my Dockerfile:
# https://registry.hub.docker.com/u/dockerfile/nodejs/ (builds on ubuntu:14.04)
FROM dockerfile/nodejs
MAINTAINER My Name, me@email.com
ENV HOME /home/web
WORKDIR /home/web/site
RUN useradd web -d /home/web -s /bin/bash -m
RUN npm install -g grunt-cli
RUN npm install -g bower
RUN chown -R web:web /home/web
USER web
RUN git clone https://github.com/repo/site /home/web/site
RUN npm install
RUN bower install --config.interactive=false --allow-root
ENV NODE_ENV development
# Port 9000 for server
# Port 35729 for livereload
EXPOSE 9000 35729
CMD ["grunt"]
This was the first result on Google when I pasted my error message, and it's because my arguments were out of order.
The image name has to come after all of the flags.
Bad:
docker run <image_name> -v $(pwd):/src -it
Good:
docker run -v $(pwd):/src -it <image_name>
When you use the exec format for a command (e.g. CMD ["grunt"], a JSON array with double quotes), it will be executed without a shell. This means that shell processing, such as environment variable expansion, will not happen.
If you specify your command as a regular string (e.g. CMD grunt) then the string after CMD will be executed with /bin/sh -c.
More info on this is available in the CMD section of the Dockerfile reference.
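A minimal illustration of the two forms:
# exec form: no shell is involved, so arguments must be split explicitly
CMD ["grunt", "serve"]
# shell form: runs as /bin/sh -c "grunt serve", so shell processing applies
CMD grunt serve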
I found the same problem. I did the following:
docker run -ti devops -v /tmp:/tmp /bin/bash
When I change it to
docker run -ti -v /tmp:/tmp devops /bin/bash
it works fine.
For some reason, I get that error unless I add the "bash" clarifier. Even adding "#!/bin/bash" to the top of my entrypoint file didn't help.
ENTRYPOINT [ "bash", "entrypoint.sh" ]
There are several possible reasons for an error like this.
In my case, it was due to the executable file (docker-entrypoint.sh from the Ghost blog Dockerfile) lacking the executable file mode after I'd downloaded it.
Solution: chmod +x docker-entrypoint.sh
I had the same problem. After lots of googling, I couldn't find out how to fix it.
Suddenly I noticed my stupid mistake :)
As mentioned in the docs, the last part of docker run is the command you want to run and its arguments after loading up the container.
NOT THE CONTAINER NAME !!!
That was my embarrassing mistake.
A Docker container might be built without a shell (e.g. https://github.com/fluent/fluent-bit-docker-image/issues/19).
In this case, you can copy-in a statically compiled shell and execute it, e.g.
docker create --name temp-busybox busybox:1.31.0
docker cp temp-busybox:/bin/busybox busybox
docker cp busybox mycontainerid:/busybox
docker exec -it mycontainerid /bin/busybox sh
In the error message shown:
Error response from daemon: Cannot start container foo_1: \
exec: "grunt serve": executable file not found in $PATH
It is complaining that it cannot find the executable grunt serve, not that it could not find the executable grunt with the argument serve. The most likely explanation for that specific error is running the command with the json syntax:
[ "grunt serve" ]
in something like your compose file. That's invalid since the json syntax requires you to split up each parameter that would normally be split by the shell on each space for you. E.g.:
[ "grunt", "serve" ]
The other possible way you can get both of those into a single parameter is if you were to quote them into a single arg in your docker run command, e.g.
docker run your_image_name "grunt serve"
and in that case, you need to remove the quotes so it gets passed as separate args to the run command:
docker run your_image_name grunt serve
For others seeing this, the executable file not found means that Linux does not see the binary you are trying to run inside your container with the default $PATH value. That could have many possible causes; here are a few:
Did you remember to include the binary inside your image? If you use a multi-stage build, make sure the binary install runs in the final stage. Run your image with an interactive shell and verify it exists:
docker run -it --rm your_image_name /bin/sh
Your path when shelling into the container may be modified for the interactive shell, particularly if you use bash, so you may need to specify the full path to the binary inside the container, or you may need to update the path in your Dockerfile with:
ENV PATH=$PATH:/custom/dir/bin
The binary may not have execute bits set on it, so you may need to make it executable. Do that with chmod:
RUN chmod 755 /custom/dir/bin/executable
The binary may include dynamically linked libraries that do not exist inside the image. You can use ldd to see the list of dynamically linked libraries. A common reason for this is compiling with glibc (most Linux environments) and running with musl (provided by Alpine):
ldd /path/to/executable
If you run the image with a volume, that volume can overlay the directory where the executable exists in your image. Volumes do not merge with the image; they get mounted in the filesystem tree the same as any other Linux filesystem mount. That means files from the parent filesystem at the mount point are no longer visible. (Note that named volumes are initialized by Docker from the image content, but this only happens when the named volume is empty.) So the fix is to not mount volumes on top of paths where you have executables you want to run from the image (see the volume-mount sketch after this list).
If you run a binary for a different platform, and haven't configured binfmt_misc with the --fix-binary option, qemu will be looking for the interpreter inside the container filesystem namespace instead of the host filesystem. See this Ubuntu bug report for more details on this issue.
If the error is from a shell script, the issue is often with the first line of that script (e.g. the #!/bin/bash). Either the command doesn't exist inside the image for a reason above, or the file is not saved as ascii or utf8 with Linux linefeeds. You can attempt dos2unix to fix the linefeeds, or check your git and editor settings.
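A quick way to check for Windows linefeeds (assuming the script is at /app/entrypoint.sh):
file /app/entrypoint.sh     # reports "with CRLF line terminators" for Windows linefeeds
dos2unix /app/entrypoint.sh # rewrites the file with Unix linefeeds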
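The volume-mount sketch referenced above, with hypothetical paths and image name: mounting over the directory that holds your executable hides it, while mounting a subdirectory beside it does not:
# hides everything the image put in /app, including /app/run.sh ("not found")
docker run -v "$(pwd)/data:/app" my_image /app/run.sh
# mounts beside the executable instead, so /app/run.sh is still visible
docker run -v "$(pwd)/data:/app/data" my_image /app/run.sh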
In my case, I ordered the parameters wrong: move all switches before the image name.
I got this error message when I was building an Alpine-based image:
ERROR: for web Cannot start service web: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "bash": executable file not found in $PATH: unknown
In my docker-compose file, I had a command directive that executed the command using bash, and bash does not come with the Alpine base image:
command: bash -c "python manage.py runserver 0.0.0.0:8000"
Then I realized it and executed the command using sh (shell) instead:
command: sh -c "python manage.py runserver 0.0.0.0:8000"
It worked for me.
The problem was glibc, which is not part of the Alpine base image. After adding it, everything worked for me :)
Here are the steps to get glibc:
apk --no-cache add ca-certificates wget
wget -q -O /etc/apk/keys/sgerrand.rsa.pub https://alpine-pkgs.sgerrand.com/sgerrand.rsa.pub
wget https://github.com/sgerrand/alpine-pkg-glibc/releases/download/2.28-r0/glibc-2.28-r0.apk
apk add glibc-2.28-r0.apk
Referring to the title:
My mistake was to pass variables via --env-file during docker run. Among others, the file contained a PATH extension: PATH=$PATH:something, which made the PATH variable literally PATH=$PATH:something (no variable resolution was performed) instead of PATH=/usr/bin...:something.
I couldn't make the resolution work through --env-file, so the only way I see this working is by using ENV in the Dockerfile.
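A minimal illustration of the difference (the /custom/bin path is hypothetical):
# in an env-file the value is taken literally; no expansion is performed:
# PATH=$PATH:/custom/bin   -> PATH becomes the string "$PATH:/custom/bin"
# in a Dockerfile, ENV expands variables known at build time:
ENV PATH=$PATH:/custom/bin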
I ran into this issue using docker-compose. None of the solutions here resolved my issue. Ultimately what worked for me was clearing all cached Docker artifacts with docker system prune -a and restarting Docker.
To make it work, add symbolic links in /usr/bin:
ln -s $(which node) /usr/bin/node
ln -s $(which npm) /usr/bin/npm
