Docker Tutorial Unclear: "Persisting our DB" and "Using Bind Mounts" - docker

I have only started using Docker and was trying to follow the documentation on the official website... Everything was going smoothly until I got to this point.
In step 3, upon running the command, I get this error:
ls: cannot access 'C:/Program Files/Git/': No such file or directory
I thought it was not that big of a deal so I went ahead and skipped to the following parts of the tutorial.
Then I came across the same error in this part:
I tried to locate the directory on my PC manually and found a remote git repository, but the commands still don't work for me. These are the commands I tried and their corresponding errors:
docker run -it ubuntu ls / - No such file or directory
cd /path/to/getting-started/app - No such file or directory
docker run -dp 3000:3000 `
    -w /app -v "$(pwd):/app" `
    node:12-alpine `
    sh -c "yarn install && yarn run dev"
docker: Error response from daemon: the working directory 'C:/Program Files/Git/app' is invalid, it needs to be an absolute path.
See 'docker run --help'. (This error appeared after changing to the directory I had found manually on my PC.)
I'm unsure if I have to set a PATH. I don't think I have missed any of the steps in the earlier parts of the tutorial.

Thanks, guys! I was indeed using Git Bash in VS Code. I tried running it in my Windows terminal via Ubuntu and now everything's working fine. Thanks, Max and Spears; that was exactly what I was having issues with.
These comments helped me resolve the issue:
Maybe this is your problem: github.com/docker-archive/toolbox/issues/673 – Max
Sounds like you are using the Git Bash which comes packaged with Git SCM for Windows. I strongly recommend avoiding this and switching to WSL2. Git Bash is NOT the kind of shell you are looking for when using Docker, due to missing libs and nasty side effects which are mostly very hard to debug. – Spears
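If you do have to stay in Git Bash, its automatic Windows path conversion can usually be switched off per command with the MSYS_NO_PATHCONV variable; a rough sketch of the tutorial command with that workaround (the rest of the command is unchanged from the question):
# Sketch only: MSYS_NO_PATHCONV=1 disables Git Bash's path mangling for this one command
MSYS_NO_PATHCONV=1 docker run -dp 3000:3000 \
    -w /app -v "$(pwd):/app" \
    node:12-alpine \
    sh -c "yarn install && yarn run dev"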

Related

WARNING: Ignoring http://dl-cdn.alpinelinux.org/alpine/v3.9/main: No such file or directory

I configured Deis Workflow in an AWS EKS cluster. After that I created Deis apps and deployed them to the Deis local repository with
git push test test:master
When deploying, the Dockerfile is executed. Here is my Dockerfile:
FROM mhart/alpine-node:12
#FROM ubuntu:18.04
ARG SOURCE_VERSION=na
ENV SOURCE_VERSION=$SOURCE_VERSION
RUN apk add --no-cache -X http://dl-cdn.alpinelinux.org/alpine/v3.9 --update bash && rm -rf /var/cache/apk/*
#apt-get update &&\
#apt-get install -y make gcc wget
WORKDIR /app
ADD . .
RUN npm install
EXPOSE 3200
CMD ["node", "app.js"]
This results in an error like:
ERROR: http://dl-cdn.alpinelinux.org/alpine/v3.9/main: temporary error (try again later)
WARNING: Ignoring http://dl-cdn.alpinelinux.org/alpine/v3.9/main: No such file or directory
ERROR: http://dl-cdn.alpinelinux.org/alpine/v3.9/community: temporary error (try again later)
WARNING: Ignoring http://dl-cdn.alpinelinux.org/alpine/v3.9/community: No such file or directory
ERROR: unable to select packages:
bash (no such package):
required by: world[bash]
The command '/bin/sh -c apk add --update bash && rm -rf /var/cache/apk/*' returned a non-zero code: 1
remote: 2021-11-15 13:30:22.569253 I | Error running git receive hook [Build pod exited with code 1, stopping build]
To ssh://deis-builder.app-test.paceup.io:2222/pu-api-gateway.git
! [remote rejected] test -> master (pre-receive hook declined)
error: failed to push some refs to 'ssh://git@deis-builder.app-test.paceup.io:2222/pu-api-gateway.git'
I am totally new to Docker, Deis and EKS. If anyone can help, I would be grateful.
Finally found the answer: we had configured the node group on Amazon Linux, which didn't support this deployment. We changed the node group to EKS-optimized Ubuntu, deployed the app using Docker, and it is working fine.
Edit:
This is working in some of the Linux versions. In my case it's working on EKS version 1.9 but not working in EKS version 2.0 and above.
This error may also be caused by a DNS issue. While building the Docker image, pass the DNS flag and specify Google's DNS server 8.8.8.8, or edit resolv.conf and add nameserver 8.8.8.8 in the container.
I hope this helps.
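A sketch of how that is typically applied (the classic docker build has no DNS flag of its own and takes its DNS from the daemon, so the daemon config is the usual place; the paths assume a systemd-based Linux host, and merge rather than overwrite if you already have a daemon.json):
# Point the daemon at Google DNS so RUN steps during docker build can resolve hosts:
echo '{ "dns": ["8.8.8.8"] }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker
# For a running container, DNS can instead be set per container:
docker run --dns 8.8.8.8 myimage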
I had this problem when my machine had many symptoms of a network configuration problem:
A Dockerfile that had to download zip files from the net could no longer do so and threw the warning in question, which stopped the build. I could still download the zip files by entering the URLs in the browser, so it was a problem of the container. I checked the same Dockerfile on another, healthy machine and the build ran through.
I had lost the connection to the internal DNS server: I could no longer ping another machine by its name, but had to use its internal IP, although the ping had worked the day before.
I could only see GCP project items in Firefox incognito mode.
My answer so far is: change the machine and test whether the problem occurs only on your machine. If so, the workaround is already done. As the next step, try to fix any other network problems; it is likely that this will also get rid of the warning.
UPDATE: The problem was a running container that gave my machine its own network. When I ran docker-compose down, the network worked again. When I removed the network from the docker-compose file, the download from inside the container worked again and the warning in question was gone.
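The same recovery steps as shell commands (standard Docker commands, nothing specific to my compose file):
docker-compose down      # stops the stack and removes the network it created
docker network ls        # check for leftover user-defined networks
docker network prune     # remove unused networks if any remain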

Run docker load inside RPM file

I'm trying to do an offline deployment of a docker image with RPM on CentOS.
My spec file is pretty simple:
Source1: myimage.tar.gz
...
%install
cp %{SOURCE1} ...
...
%post
docker load -i myimage.tar.gz
docker-compose up -d
docker image prune -af
I compress my image using docker save and gzip. Then, on another machine, I just load the image with docker and use docker-compose to run my service.
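For reference, the archive referenced by Source1 can be produced roughly like this (the image name is illustrative):
docker save myimage:latest | gzip > myimage.tar.gz   # what the spec packages as Source1
docker load -i myimage.tar.gz                        # docker load reads gzipped archives directly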
When executing the commands docker load and docker-compose up, I get this error:
sudo: unable to execute /bin/docker: Permission denied
sudo: unable to execute /bin/docker-compose: Permission denied
sudo: unable to execute /bin/docker: Permission denied
My user is part of the docker group, and I checked that the RPM scriptlets are executed as root; they are.
If I run the RPM on my dev machine, it works; if I execute the commands in a script that is not part of the RPM, it works.
Any ideas?
Thanks in advance.
You're probably being blocked by SELinux. You can temporarily disable it to check with setenforce 0.
If that is the problem (it is; this is a comment turned into an answer), some possible solutions:
You might be able to use audit2allow to change the denials into new rules to import.
Maybe udica will help. I don't know enough about it to tell.
I tried the first solution and it worked!
grep rpm_script_t /var/log/audit/audit.log | audit2allow -m mypolicy > mypolicy.te
The problem came from the fact that the RPM scripts didn't have access to the container_runtime_exec_t:file entrypoint, which I suppose is what allows them to run containers like Docker does.
Thanks a lot for the tip!
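For completeness, the usual way to turn those denials into an installed policy module (standard audit2allow/semodule usage, not quoted from the thread):
grep rpm_script_t /var/log/audit/audit.log | audit2allow -M mypolicy   # builds mypolicy.pp from the denials
sudo semodule -i mypolicy.pp                                           # install the policy module
sudo setenforce 1                                                      # re-enable enforcing mode afterwards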

No container found in path - DOCKER

I am trying to download a Docker image called anchore-engine, found at the following link: https://hub.docker.com/r/anchore/anchore-engine/
For ease, I will post a copy of the code used to create the image and get it running as they have specified.
Here is a link to the image, I tried posting the image, but it requires reputation 10.
The issue I am having is specifically on this line of the download:
docker cp ae:/docker-compose.yaml ~/aevolume/docker-compose.yaml
I get the following error message from my terminal:
Error: No such container:path: ae:/docker-compose.yaml
My question is how do I fix this?
I am not good with $PATH.
After echoing $PATH with echo "$PATH", I just see a very messy blob of paths, and I do not really know how to create the necessary container in the directory specified, which was created by the first line, mkdir ~/aevolume.
The thing is, it specifically requires a container, and when I type ls, the output is blank.
Please help and thanks.
The lines I have been able to run are:
mkdir ~/aevolume
cd ~/aevolume
docker pull docker.io/anchore/anchore-engine:latest
docker create --name ae docker.io/anchore/anchore-engine:latest
but when I try running
docker cp ae:/docker-compose.yaml ~/aevolume/docker-compose.yaml
as I have done in the following line:
aevolume admin$ docker cp ae:/docker-compose.yaml ~/aevolume/docker-compose.yaml
I get this response:
Error: No such container:path: ae:/docker-compose.yaml
It seems like there's an issue with the latest version of the Anchore image that was updated 14 days ago. I've been experiencing the same issue. I went ahead and used version 0.7.0 for the commands and it worked fine:
mkdir ~/aevolume
cd ~/aevolume
docker pull anchore/anchore-engine:v0.7.0
docker create --name ae anchore/anchore-engine:v0.7.0
docker cp ae:/docker-compose.yaml ~/aevolume/docker-compose.yaml
Hopefully this helps, I was stuck on it for a bit haha.
@Sandeep Kumar There is a correction in the last command.
docker cp ae:/docker-compose.yaml ~/aevolume/docker-compose.yaml
If you are unable to get docker-compose.yaml for anchore, try this:
curl https://docs.anchore.com/current/docs/engine/quickstart/docker-compose.yaml > docker-compose.yaml
Here is the reference:
https://docs.anchore.com/current/docs/engine/quickstart/
It worked well for me after I ran into the same issue described above.
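Once docker-compose.yaml is in place by either method, the quickstart continues by bringing the stack up from that directory, roughly:
cd ~/aevolume
docker-compose up -d
docker-compose ps    # verify the anchore-engine services are running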

dpkg not working the same way when invoked from Dockerfile or within the container

I have a Dockerfile describing a container used to build some libs.
Basically, it looks like this:
FROM debian:stretch-slim
COPY somedebianrepo/*.deb \
/basedir/
RUN dpkg -i /basedir/*.deb
When I build the image, I get:
dpkg: dependency problems prevent configuration of [one of my libs]: ... depends on [some other lib]; however [some other lib] is not installed
Which may sound obvious... but: when I comment out the RUN line:
# RUN dpkg -i /basedir/*.deb
then build the image, start the container, and connect to it, I expected the dpkg command to act the same. But actually, when I launch the command directly from inside the container, it works fine with no such error.
root@host$ docker exec -it -u root <mycontainer> bash
root@mycontainer $ dpkg -i /basedir/*.deb
root@mycontainer $ (no error)
I also tried with apt-get install, and encountered the same kind of different behavior.
Since I am quite a newbie with Docker, the answer may be obvious... but still, it is not to me! I expected the commands executed through RUN to act the same way as if executed from within the container.
So if anyone could point out where I am wrong, she/he is welcome!
EDIT 1: I have tried running apt-get update before the dpkg command, though I did not expect it to work: no success.
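As general background (this is not an explanation of the RUN-vs-interactive difference above): when local .deb files also depend on packages from the Debian archive, a common Dockerfile pattern is to let apt resolve those dependencies instead of dpkg, for example:
RUN apt-get update && \
    apt-get install -y /basedir/*.deb && \
    rm -rf /var/lib/apt/lists/*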

How can you cache gradle inside docker?

I'm trying to cache the things that my Gradle build currently downloads each time. For that I try to mount a volume with the -v option, like -v gradle_cache:/root/.gradle
The thing is, each time I rerun the build with the exact same command, it still downloads everything again. The full command I use to run the image is
sudo docker run --rm -v gradle_cache:/root/.gradle -v "$PWD":/home/gradle/project -w /home/gradle/project gradle:jdk8-alpine gradle jar
I also checked the directory where Docker saves the volume's contents, /var/lib/docker/volumes/gradle_cache/_data, but that is also empty.
my console log
What am I missing to make this working?
Edit: As per request I reran the command with the --scan option.
And also with a different Gradle home:
$ sudo docker run --rm -v gradle_cache:/root/.gradle -v "$PWD":/home/gradle/project -w /home/gradle/project gradle:jdk8-alpine gradle jar --gradle-user-home /root/.gradle
FAILURE: Build failed with an exception.
* What went wrong:
Failed to load native library 'libnative-platform.so' for Linux amd64.
After looking at the Dockerfile of the container I'm using, I found out that the right option to use is -v gradle_cache:/home/gradle/.gradle.
What made me think the files were cached in /root/.gradle is that the Dockerfile also sets that path up as a symlink to /home/gradle/.gradle:
ln -s /home/gradle/.gradle /root/.gradle
So inspecting the filesystem after a build made it look like the files were stored there.
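With that correction, the command from the question becomes:
sudo docker run --rm \
    -v gradle_cache:/home/gradle/.gradle \
    -v "$PWD":/home/gradle/project \
    -w /home/gradle/project \
    gradle:jdk8-alpine gradle jar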
Since 6.2.1, Gradle now supports a shared, read-only dependency cache for this scenario:
It’s a common practice to run builds in ephemeral containers. A container is typically spawned to only execute a single build before it is destroyed. This can become a practical problem when a build depends on a lot of dependencies which each container has to re-download. To help with this scenario, Gradle provides a couple of options:
copying the dependency cache into each container
sharing a read-only dependency cache between multiple containers
https://docs.gradle.org/current/userguide/dependency_resolution.html#sub:ephemeral-ci-cache describes the steps to create and use the shared cache.
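A rough sketch of those steps with Docker (the volume name, paths, and image tag are illustrative, and GRADLE_RO_DEP_CACHE needs Gradle 6.2+, so a newer image than jdk8-alpine may be required):
# 1. Seed a dependency cache once:
docker run --rm -v gradle_seed:/home/gradle/.gradle \
    -v "$PWD":/home/gradle/project -w /home/gradle/project \
    gradle:6.8-jdk8 gradle jar
# 2. Reuse it read-only in later containers; GRADLE_RO_DEP_CACHE must point at the
#    directory containing modules-2 (here, the seeded "caches" directory):
docker run --rm -v gradle_seed:/dep-cache:ro \
    -e GRADLE_RO_DEP_CACHE=/dep-cache/caches \
    -v "$PWD":/home/gradle/project -w /home/gradle/project \
    gradle:6.8-jdk8 gradle jar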
Alternatively, to have more control over the cache directory, you can use this:
ENV GRADLE_USER_HOME /path/to/custom/cache/dir
VOLUME $GRADLE_USER_HOME
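A usage sketch for that snippet: mount a named volume at the custom directory so the cache survives the --rm'd containers (the path and image name stand in for whatever your Dockerfile uses):
docker run --rm \
    -v my_gradle_cache:/path/to/custom/cache/dir \
    -v "$PWD":/home/gradle/project -w /home/gradle/project \
    my-gradle-image gradle jar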
