Docker on Windows getting "Could not locate Gemfile"

I'm trying to learn Docker, using Windows as the host OS, by creating a container from the Rails image on Docker Hub.
I've created a Dockerfile with the content below and an empty Gemfile, but I'm still getting the error "Could not locate Gemfile".
Dockerfile
FROM rails:4.2.6
MAINTAINER Juliano Nunes
The commands I used are the following (though I don't yet understand what they actually do):
ju.oliveira@br-54 MINGW64 /d/Juliano/ddoc
$ docker build -t ddoc .
Sending build context to Docker daemon 4.608 kB
Step 1 : FROM rails:4.2.6
---> 3fc52e59c752
Step 2 : MAINTAINER Juliano Nunes
---> Using cache
---> d3ab93260f0f
Successfully built d3ab93260f0f
SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host. All files and directories added to build context will have '-rwxr-xr-x' permissions. It is recommended to double check and reset permissions for sensitive files and directories.
$ docker run --rm -v "$PWD":/usr/src/app -w /usr/src/app ruby:2.1 bundle install
Unable to find image 'ruby:2.1' locally
2.1: Pulling from library/ruby
fdd5d7827f33: Already exists
a3ed95caeb02: Pull complete
0f35d0fe50cc: Already exists
627b6479c8f7: Already exists
67c44324f4e3: Already exists
1429c50af3b7: Already exists
f4f9e6a0d68b: Pull complete
eada5eb51f5d: Pull complete
19aeb2fc6eae: Pull complete
Digest: sha256:efc655def76e69e7443aa0629846c2dd650a953298134a6f35ec32ecee444688
Status: Downloaded newer image for ruby:2.1
Could not locate Gemfile
So, my questions are:
Why can't it find the Gemfile if it's in the same directory as the Dockerfile?
What does the command docker run --rm -v "$PWD":/usr/src/app -w /usr/src/app ruby:2.1 bundle install do?
How do I set a folder on my host file system to be synced to the container? (I'm trying to create a development environment for Rails projects using Docker on Windows.)
I don't know if this makes any difference, but I'm running these commands from the bash session opened by the "Docker Quickstart Terminal" shortcut. I think all it does is run the commands against a default VM, though I could create a new one (I don't know whether I should).
Thank you, and sorry for all these questions, but right now Docker seems very confusing to me.

You must mount a host directory that lives somewhere inside your home directory (e.g. C:/Users/john/*).

$PWD will give you a Unix-like path. If your shell is Cygwin-like, it will look like /cygdrive/c/Users/... or similar. Docker and VirtualBox, however, are Windows executables, so they expect a plain Windows path. Yet Docker does not seem to accept a Windows path in the -v option, so the path gets converted to the /c/Users/... form. The other answers may be right that you cannot access a directory outside your home directory (though I don't know why). To solve your problem, create a junction within your home directory that points to the path you want, then mount that junction path instead.
>mklink /j \users\chloe\workspace\juliano \temp
Junction created for \users\chloe\workspace\juliano <<===>> \temp
>docker run -v /c/Users/Chloe/workspace/juliano:/app IMAGE-NAME ls
007.jpg
...
In your case that would be
mklink /j C:\Users\Juliano\project D:\Juliano\ddoc
docker run -v /c/Users/Juliano/project:/usr/src/app -w /usr/src/app ruby:2.1 bundle install
--rm removes the container automatically once it exits. -w sets the working directory inside the container. -v sets the volume mount, mapping the host path to the container path. ruby:2.1 is the official Ruby 2.1 image from Docker Hub, and bundle install runs Bundler inside the container.
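For reference, here is the same command annotated flag by flag (same behavior, using the junction path from above):
# --rm             remove the container automatically when it exits
# -v host:ctr      bind-mount the host path at the container path
# -w /usr/src/app  set the working directory inside the container
# ruby:2.1         the official Ruby 2.1 image from Docker Hub
# bundle install   the command executed inside the container
docker run --rm -v /c/Users/Juliano/project:/usr/src/app -w /usr/src/app ruby:2.1 bundle install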

Related

Mounted Docker Volumes Are Empty

Problem
The $(pwd)/app directory on my machine contains many subdirectories and files, and so does the /usr/local/app directory in my Docker image. When I run the following
docker run -it --rm -v $(pwd)/app:/usr/local/app my_image bash
the command succeeds, but the /usr/local/app directory is now empty. It seems something has been mounted over this location, and that something is empty.
Is there something that I'm missing here that I should be aware of?
Context
This exact command used to succeed. The only recent change is that I uninstalled Docker Desktop in favor of just using the Docker Engine running on a local minikube instance. I'm using Docker version 20.10.22, build 3a2c30b63a, on macOS Ventura.
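If the Docker CLI is now pointed at the daemon inside the minikube VM (an assumption based on the context above, not something stated in the question), then $(pwd)/app gets resolved on the VM's filesystem, where that path is empty. A quick way to check, and one way to expose a host directory to the VM:
docker context ls                     # shows which daemon the CLI currently targets
minikube docker-env                   # prints the env vars that redirect the CLI to the VM's daemon
minikube mount "$(pwd)/app:/hostapp"  # exposes the host dir inside the VM at /hostapp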

code changes in the docker container as a root user

I'm experimenting with the docker concept for C++ development. I wrote a Dockerfile that includes instructions for installing all of the necessary libraries and tools. CMake is being used to build C++ code and install binaries. Because of a specific use case, the binaries should be executed as root user. I'm running the docker container with the following command.
docker run -it --rm -u 0 -v $(pwd):/mnt buildmcu:focal
The issue now is that I am the root user inside the Docker container, so if I make any code changes or create a new file inside the container, I get a permission error on the host machine when I try to access it. I need to run sudo chmod ... to fix the permissions. Is there any way to allow source modification both in the Docker container and on the host machine without permission errors?
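One common workaround (a sketch, not from this thread; the cmake invocation and the /mnt/build path are assumptions about the project layout) is to keep running as root but hand ownership of anything created back to the host user before the container exits:
HOST_UID="$(id -u)"; HOST_GID="$(id -g)"
docker run -it --rm -u 0 \
  -e HOST_UID="$HOST_UID" -e HOST_GID="$HOST_GID" \
  -v "$(pwd)":/mnt buildmcu:focal \
  bash -c 'cmake --build /mnt/build && chown -R "$HOST_UID:$HOST_GID" /mnt'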

understanding docker : how come my docker container content is dynamic?

I want to make sure I understand Docker correctly: when I build an image from the current directory, I run:
docker build -t imgfile .
What happens when I change the content of a file in that directory AFTER the image is built? From what I've tried, it seems the content of the Docker image also changes dynamically.
I thought a Docker image was like a zip file that could only be changed with Docker commands or by logging into the image and running commands.
The dockerfile is :
FROM lambci/lambda:build-python3.8
WORKDIR /var/task
EXPOSE 8000
RUN echo 'export PS1="\[\e[36m\]zappashell>\[\e[m\] "' >> /root/.bashrc
CMD ["bash"]
And the docker run command is :
docker run -ti -p 8000:8000 -e AWS_PROFILE=zappa -v "$(pwd):/var/task" -v ~/.aws/:/root/.aws --rm zappa-docker-image
Thank you
Best,
Your docker run command isn't really running your image at all. The docker run -v $(pwd):/var/task syntax overwrites whatever was in /var/task in the image with a bind mount to the current directory on the host. So when you edit a file on the host, the container sees the same host directory (not the content from the image), which is why the changes show up inside the container as well.
You're right that the image is immutable. The image you show doesn't really contain anything, beyond a .bashrc file that won't usually be used. You can try running the image without the -v options to see:
docker run --rm zappa-docker-image ls -al
# just shows `.` and `..` directories
I'd recommend making sure you COPY your application into the image, setting its CMD to actually run the application, and removing the -v option that overwrites its main directory. If your goal is to run host code against host files with host supporting data like your AWS credentials, you're not really getting much benefit from introducing Docker in between your application and every single file it uses.
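A minimal sketch of that suggestion (app.py is a placeholder entrypoint; adjust CMD to however the application actually starts):
FROM lambci/lambda:build-python3.8
WORKDIR /var/task
COPY . .                    # bake the application into the image
EXPOSE 8000
CMD ["python3", "app.py"]   # run the app instead of an interactive shell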

Volume not mounting properly when running shell inside a container

I want to encrypt my Kubernetes file to integrate it with Travis CI, and for that I am installing the Travis CI CLI via a Docker container. When the container runs and I mount my current working directory to /app, it just creates an empty folder.
I have also added the folder to the shared folders in VirtualBox, but nothing seems to work. I am using Docker Toolbox on Windows 10 Home.
docker run -it -v ${pwd}:/app ruby:2.3 sh
It creates the empty app folder along with the other folders in the container, but does not mount the volume.
I also tried using
docker run -it -v //c/complex:/app ruby:2.3 sh
after someone suggested using the name you specify in VirtualBox.
docker run -it -v <full path of current directory>:/app ruby:2
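Note that with Docker Toolbox, only C:\Users is shared into the underlying VirtualBox VM by default, so a directory like C:\complex mounts as empty. Moving the project under C:\Users and using the /c/Users/... path form should work (a sketch; <you> is a placeholder for the Windows user name):
docker run -it -v /c/Users/<you>/complex:/app ruby:2.3 sh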

How to sync dir from a container to a dir from the host?

I'm using Vagrant, so the container is inside a VM. Below is my shell provision script:
#!/bin/bash
CONFDIR='/apps/hf/hf-container-scripts'
REGISTRY="tutum.co/xxx"
VER_APP="0.1"
NAME=app
cd $CONFDIR
sudo docker login -u xxx -p xxx -e xxx@gmail.com tutum.co
sudo docker build -t $REGISTRY/$NAME:$VER_APP .
sudo docker run -it --rm -v /apps/hf:/hf $REGISTRY/$NAME:$VER_APP
Everything runs fine and the image is built. However, the syncing command (the last one above) doesn't seem to work. I checked in the container: the /hf directory exists and has files in it.
Another problem: if I execute the syncing command manually, it succeeds, but then I can only see the files from the host when I ls /hf. It seems that Docker empties /hf and places the files from the host into it. I want it the other way around, or better yet, to merge them.
Yeah, that's just how volumes work, I'm afraid. Basically, a volume says: "don't use the container file system for this directory; use this directory from the host instead".
If you want to copy files out of the container and onto the host, you can use the docker cp command.
If you tell us what you're trying to do, perhaps we can suggest a better alternative.
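A minimal sketch of the docker cp route (app-tmp is a placeholder container name), copying /hf out of the image to the host without starting it:
sudo docker create --name app-tmp $REGISTRY/$NAME:$VER_APP
sudo docker cp app-tmp:/hf/. /apps/hf   # copies the contents of /hf into the host dir, merging them
sudo docker rm app-tmp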
