I would really appreciate some help, since I have been struggling with this for days...
I am trying to build a Java application using a Docker image on a VirtualBox guest (CentOS 7). After doing a git checkout into a directory on the guest, I run the Docker image with the following command:
docker run -it --rm --volume $(pwd):/usr/build-app build-monster
As soon as the container starts, it executes a shell script that changes directory into /usr/build-app. This is where the script fails with the following error:
/usr/scripts/docker_start.sh: 2: cd: can't cd to /usr/build-app
Everything else that references this directory then fails as well.
Most of the how-tos I have read deal with sharing a directory from the VirtualBox host, which is not what I am trying to do. When I run docker inspect, I can see the mount is there:
{
    "Type": "bind",
    "Source": "/root/build-agent-home/xml-data/build-dir/JOB1",
    "Destination": "/usr/build-app",
    "Mode": "",
    "RW": true,
    "Propagation": "rprivate"
}
Inside the image's Dockerfile I also reference the directory:
...
WORKDIR /usr/build-app/
COPY docker_start.sh /usr/scripts/docker_start.sh
RUN ["chmod", "+x", "/usr/scripts/docker_start.sh"]
ENTRYPOINT ["/usr/scripts/docker_start.sh"]
The original idea was to set this up in Bamboo, but after build failures I have isolated the issue to Docker itself by running the commands above. Please help!
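One thing worth checking (an assumption on my part, since the question doesn't mention it): CentOS 7 ships with SELinux enforcing, and SELinux can let the bind mount appear in docker inspect while still denying the container access to the host directory, which produces exactly this kind of "can't cd" error. Appending the :z option asks Docker to relabel the mounted content for container access:

```
docker run -it --rm --volume "$(pwd)":/usr/build-app:z build-monster
```

If SELinux is the culprit, temporarily running sudo setenforce 0 on the host and retrying (for diagnosis only, not as a fix) should make the error disappear.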
I have a simple devcontainer config:
devcontainer.json
{
    "name": "devcontainer",
    "build": {
        "dockerfile": "Dockerfile"
    }
}
Dockerfile
FROM python:3.11
WORKDIR /usr/src
ENTRYPOINT sleep infinity
I build the image (using the devcontainer CLI) with: devcontainer build --workspace-folder . --image-name devcontainer
Next I create a container with docker create --name container-a and attach to it in VS Code.
All file changes persist after a container restart, so some volume should exist (I think), but docker inspect returns an empty array for container-a.
So how can I find these volumes?
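Since a devcontainer workspace is typically attached as a bind mount rather than a named volume, it may not appear under docker volume ls at all. One way to check (container-a being the name used above) is to query the Mounts section of docker inspect directly:

```
docker inspect --format '{{json .Mounts}}' container-a
docker volume ls          # named volumes, if any, are listed here
```

If Mounts also comes back empty, the persistence you are seeing is probably just the container's own writable layer, which survives restarts (though not docker rm).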
I'm struggling to test my app with Cypress in Docker. I use the dedicated Docker image with this command: docker run -it -v $PWD:/e2e -w /e2e cypress/included:8.7.0
I ALWAYS get this error when I launch it:
Could not find a Cypress configuration file, exiting.
We looked but did not find a default config file in this folder: /e2e
Meaning that Cypress can't find cypress.json, even though it is right there in the dedicated folder. Here is my directory/file tree:
pace
└── front
    ├── cypress
    └── cypress.json
So this is a standard file tree for e2e testing, and despite all of my tricks (using the full directory path instead of $PWD, reinstalling Docker, trying the Colima engine, etc.) nothing works. Yet if I run npm run cypress locally, everything works just fine!
Needless to say, I am in the pace/front directory when I try these commands.
Can you help me please?
The -v $PWD:/e2e part is a Docker option that mounts a volume (a bind mount): it mounts the current directory to /e2e inside the container at runtime.
The docs describe a structure where the cypress.json file is expected to end up directly under /e2e. To get that layout you have to either:
-v $PWD/pace/front:/e2e
run the command from inside the pace/front directory
Since the CMD and ENTRYPOINT commands run from the WORKDIR, you could also try running it from where you were but changing the workdir with:
-w /e2e/pace/front
I have not seen their Dockerfile, but my assumption is that this would work.
My personal choice would be to just run it from pace/front
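Putting the two suggestions together, the full command might look like this (untested sketch; it assumes you run it from the directory that contains pace, keep the mount at /e2e, and only move the working directory):

```
docker run -it -v "$PWD":/e2e -w /e2e/pace/front cypress/included:8.7.0
```

Quoting $PWD also avoids surprises if the path contains spaces.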
I am using Docker Desktop to develop my application. I have created an "uploadedfiles" Docker volume and I am trying to save files to it from my dockerized application.
When I save a file from my application, I see that the file is saved to the "uploadedfiles" folder inside the container itself. I am therefore assuming that my application container is not bound to the volume I created in my Dockerfile. Is my assumption correct?
How can I bind my application container to the created volume in my Dockerfile?
Dockerfile:
FROM mcr.microsoft.com/dotnet/aspnet:5.0 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /src
COPY ["InstaTranscribeServerSide/InstaTranscribeServerSide.csproj", "InstaTranscribeServerSide/"]
COPY ["Services/Services.csproj", "Services/"]
COPY ["DataAccess/DataAccess.csproj", "DataAccess/"]
RUN dotnet restore "InstaTranscribeServerSide/InstaTranscribeServerSide.csproj"
COPY . .
WORKDIR "/src/InstaTranscribeServerSide"
RUN dotnet build "InstaTranscribeServerSide.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "InstaTranscribeServerSide.csproj" -c Release -o /app/publish
FROM base AS final
VOLUME CREATE uploadedfiles
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "InstaTranscribeServerSide.dll"]
Docker Volume does not show uploaded files
Container shows that the volume was not "bound"
Container shows file was uploaded to "uploadedfiles" folder on container:
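For what it's worth, VOLUME CREATE uploadedfiles is not valid Dockerfile syntax: the VOLUME instruction only takes a path inside the image, and attaching a specific named volume to that path happens at run time, not build time. A sketch, under the assumption that the app writes to /app/uploadedfiles (the path is hypothetical, taken from the WORKDIR above):

```
# Dockerfile: declare the mount point only (no volume name here)
VOLUME /app/uploadedfiles
```

The named volume is then bound when the container starts, e.g. docker run -v uploadedfiles:/app/uploadedfiles <image>, or via the equivalent volume mapping in Docker Desktop / docker-compose.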
I have successfully mounted the /home/ folder of my host to the /mnt folder of an existing (not running) container. You can do it in the following way:
Open the configuration file corresponding to the stopped container, which can be found at /var/lib/docker/containers/99d...1fb/config.v2.json (it may be config.json for older versions of Docker).
Find the MountPoints section, which was empty in my case: "MountPoints":{}. Replace its contents with something like this (you can copy the proper contents from another container that has the right settings):
"MountPoints":{"/mnt":{"Source":"/home/<user-name>","Destination":"/mnt","RW":true,"Name":"","Driver":"","Type":"bind","Propagation":"rprivate","Spec":{"Type":"bind","Source":"/home/<user-name>","Target":"/mnt"},"SkipMountpointCreation":false}}
or the same (formatted):
"MountPoints": {
"/mnt": {
"Source": "/home/<user-name>",
"Destination": "/mnt",
"RW": true,
"Name": "",
"Driver": "",
"Type": "bind",
"Propagation": "rprivate",
"Spec": {
"Type": "bind",
"Source": "/home/<user-name>",
"Target": "/mnt"
},
"SkipMountpointCreation": false
}
}
Restart the docker service: service docker restart
This worked for me with Ubuntu 18.04.1 and Docker 18.09.0.
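One caveat (based on commonly reported behavior, so verify on your setup): the daemon tends to rewrite these config files from its in-memory state when it shuts down, so edits made while Docker is running can be lost. It is safer to stop the service before editing:

```
systemctl stop docker
# edit /var/lib/docker/containers/<container-id>/config.v2.json as described above
systemctl start docker
```

After the daemon is back up, start the container normally with docker start.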
I'm having trouble trying to mount a volume into my Docker container.
Here is my project structure:
cloudRun
├── distService/
│   └── index.js
├── Dockerfile
└── package.json   // THIS IS THE package.json FOR THE DOCKER IMAGE
package.json       // THIS IS MY MAIN PROJECT package.json
In my main project package.json I have the following scripts:
"docker-run": "./scripts/docker/docker-run.sh",
"docker-inspect": "docker exec -ti hello-world sh" // THIS IS USED TO INSPECT THE RUNNING CONTAINER
docker-run.sh
# STOPS ALL CONTAINERS
# REMOVES ALL CONTAINERS
# REMOVES ALL IMAGES
# BUILDS A NEW IMAGE FROM SCRATCH
docker build --tag hello-world:latest ./cloudRun
# TRIES TO RUN THE CONTAINER WITH A MOUNTED /distService VOLUME
docker run --name hello-world -p 3000:3000 -v //distService:/distService hello-world:latest
This is the Dockerfile:
FROM node:12-slim
WORKDIR /
COPY ./package.json ./package.json
RUN npm install
ENTRYPOINT npm start
It's all working, except that the container sees /distService as an empty folder.
I know this because when I open a new terminal window and run:
npm run docker-inspect
I can enter the folders and ls them, and this is what I get: there is a distService folder, but the ls command comes back empty. PS: I did an ls on node_modules just to show that it works and that distService is indeed empty.
QUESTION
When I pass -v //folder:/folder, what is the source folder relative to? How can I be sure that I'm picking the right folder?
Environment:
Windows 10.
Docker for Windows installed
Docker Engine v19.03.13
UPDATE
I ran the container and inspected it with docker inspect <CONTAINER_ID> to see which folder is being mounted, but it didn't help much. This is what came back:
"HostConfig": {
"Binds": [
"//distService/:/distService/"
],
// OTHER STUFF
"Mounts": [
{
"Type": "bind",
"Source": "/distService", // WHAT IS THIS PATH RELATIVE TO ?
"Destination": "/distService",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
}
],
With docker run, host folder paths must be absolute, not relative. To refer to the current directory, use the pwd command (lowercase: $(PWD) would try to execute a command named PWD) or the $PWD variable:
docker run -v $(pwd)/folder:/folder ...
Alternatively, docker-compose does support relative paths if they start with a period:
version: '3.8'
services:
  nginx:
    image: nginx
    volumes:
      - ./folder:/www
This is what I had to do to make it work.
Basically, I had to use the full absolute path, starting from c:/.
-v c:/Users/my-user/my-project/distService:/distService
Full command:
docker run --name hello-world -p 3000:3000 -v c:/Users/USER/my-project/distService:/distService hello-world:latest
I got the idea after finding this in the official docs:
When creating a container using docker run, is there a way to automatically copy files from a docker volume to the host directory it is mounted on?
When running
docker run -d -v /localpath:/containerpath image
the files found in containerpath are not copied to my /localpath directory.
Is there a way to achieve this? The image contains a directory that needs to be accessible on the host machine for local development.
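The copy-on-first-use behavior you are after does exist, but only for named volumes, not for bind mounts: when an empty named volume is first mounted over a non-empty directory in the image, Docker copies the image's files into the volume. A sketch (mydata and /containerpath are placeholders):

```
docker volume create mydata
docker run -d -v mydata:/containerpath image
```

The files then live on the host under the volume's storage area (with the default local driver, typically /var/lib/docker/volumes/mydata/_data), which is less convenient for local editing than a bind mount but does give you the pre-populated contents.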
What I did not know was that when you add a file to the volume from the container's shell, it is also created in the host directory. So after a lot of debugging and testing, I have managed to achieve what I wanted.
For clarification, in case anyone needs this in the future: the goal was to automatically build a Docker container that clones a git repo and exposes the host public_html directory as a volume, with the files already copied to the host and ready for editing.
# Create Volume for the directory
VOLUME /var/www/html
COPY scripts/start.sh /start.sh
RUN chmod -v +x /start.sh
CMD ["/start.sh"]
start.sh contains the code to clone the repo if the directory is empty
#!/bin/bash
if [ "$(ls -A /var/www/html)" ]; then
    echo "Directory already cloned"
else
    echo "Repo files do not exist"
    git clone ...
fi
Thanks for the help