I'm brand new to Docker, and I'm trying to run a Dockerfile on my Windows 10 machine, but it's hanging initially and not doing anything.
My Dockerfile:
FROM busybox:latest
CMD ["date"]
My docker command:
$ docker build -f /projects/docker_test .
Other things of note:
Docker Toolbox installed on Windows 10 Home edition
Environment variable:
HOME = G:\projects\
Dockerfile location:
G:\projects\docker_test\Dockerfile
File created initially with Notepad.
EDIT: I am able to load other docker containers just fine. Docker simply hangs when I try to access a local Dockerfile.
What worked for me was adding a .dockerignore file and listing there the folders that are not part of the built image (in my case /node_modules).
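For illustration, a minimal .dockerignore placed next to the Dockerfile could look like this (the entries are only examples, adjust to your project):
node_modules
.git
*.log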
The -f option is used to specify the path to the Dockerfile.
Try with:
docker build -t docker_test -f /projects/docker_test/Dockerfile /projects/docker_test
or:
cd G:\projects\docker_test\
docker build -t docker_test .
The reason for this is that if there are other folders, nested folders, or large files in the same directory, Docker sends all of them to the daemon as the build context, which can take a long time. The resolution is either to add a .dockerignore file, or to move the Dockerfile into its own folder, change to that folder from the command prompt, and then run the docker build command.
I am new to Docker. I created a Web API using ASP.NET Core, in Visual Studio 2019 as well as in VS Code. It works fine. Then I added Docker support and added a Dockerfile with default values.
When I try to build the Docker image, it fails in Visual Studio 2019 as well as in VS Code.
However, if I try to run the Docker image using the Visual Studio 2019 provided option (where I can select Docker as run), then the image gets created.
But when I run the build command in Visual Studio 2019 or VS Code, i.e.
docker build -f ./Dockerfile --force-rm -t mytestapp:dev ..
it throws the following error:
=> ERROR [build 3/7] COPY [myTestApp.csproj, ./]
The content of my Dockerfile is given below:
FROM mcr.microsoft.com/dotnet/aspnet:5.0 AS base
WORKDIR /app
EXPOSE 80
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /src
COPY ["myTestApp.csproj", "./"]
RUN dotnet restore "myTestApp.csproj"
COPY . .
WORKDIR "/src/."
RUN dotnet build "myTestApp.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "myTestApp.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "myTestApp.dll"]
The project structure picture is also attached:
A simple docker build command cannot work with the default Dockerfiles created by Visual Studio because the paths are specified relative to the root of the solution, and not the root of the project.
You can inspect the build output from VS to determine how it builds the image (simplified version):
docker build
-f "PROJECT_PATH\Dockerfile"
-t IMAGE_NAME:dev
"SOLUTION_PATH"
As you can see, it builds using the Dockerfile in the project folder (-f), but from the solution folder.
I guess they did it because it has the advantage of keeping each Dockerfile in its own project folder, while letting you reference resources outside that folder using more consistent solution-based paths. Apart from that, it's pretty annoying.
You can move the Dockerfile to the solution folder and leave it unchanged, but then the Docker features in VS will stop working as expected. Or you can adopt the VS convention and adapt your scripts accordingly.
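For illustration, with the VS convention the COPY lines in the generated Dockerfile are written relative to the solution root, roughly like this (the project folder name here is only an example):
COPY ["myTestApp/myTestApp.csproj", "myTestApp/"]
RUN dotnet restore "myTestApp/myTestApp.csproj"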
Try running the command from the parent folder; you can specify the path to the Dockerfile using the -f flag.
cd ..
docker build -t imagename:tag -f ProjectDir/Dockerfile .
Docker copies the .csproj and other files from the build context on the host machine, so if you say:
COPY ["myTestApp.csproj", "./"]
make sure you are in the right directory on the host machine. The Dockerfile created by Docker support is not always ideal for building images, for example if you use other project references, but it can be a good base.
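For example, assuming the Dockerfile sits next to myTestApp.csproj (as the COPY line in the question's Dockerfile implies), building from that folder makes the path resolve; the folder name here is just a guess:
cd myTestApp
docker build -f Dockerfile --force-rm -t mytestapp:dev .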
Run this from your Solution root:
docker build . -f [ProjectDir]\Dockerfile
The answer from @axtck worked for me. The only change required to make it work was to remove the slash:
cd ..
docker build -t imagename:tag -f ProjectDir/Dockerfile .
Use docker-compose to easily create and tear down your setup.
Step 1: Save code below as docker-compose.yml one directory higher than your Dockerfile (same path as your project's .sln file):
version: '3'
services:
  web:
    build:
      context: .
      dockerfile: [PROJECTNAME]\Dockerfile
    ports:
      - "5000:80"
    networks:
      - aspcore-network
  sql-server:
    image: mcr.microsoft.com/mssql/server
    networks:
      - aspcore-network
networks:
  aspcore-network:
    driver: bridge
Step 2: Add additional services (MYSQL/REDIS/ETC)
Step 3: Open terminal to docker-compose.yml location
Step 4: Run docker-compose build then docker-compose up -d
Step 5: When done run docker-compose down
Remove the . (dot) you included in WORKDIR "/src/.".
I solved this issue by providing the path to the Dockerfile explicitly. Go to the parent directory, the one with the .sln file, and use the docker -f option to specify the Dockerfile to use in the subfolder:
cd \CoreDockerAPI
docker build -f CoreDockerAPI\Dockerfile --force-rm -t myfirstimage .
docker run -it myfirstimage
Here are the steps I used to solve this problem:
I checked Enable Docker while creating my .NET 5 Web API Project.
For Docker OS, I chose Linux.
Then I opened a terminal, navigated to the directory where my project is and typed the following command: docker build -f Movie.WebAPI\Dockerfile --force-rm -t movie-api:v1 .
This gave the expected results; the screenshots of the path and the build output are omitted here.
As the last step, I ran this command: docker run -it --rm -p 8080:80 movie-api:v1
which created and started a container from the image.
Now movie-api appears when I type docker images.
How can I add a file from my project into a Docker image in a gitlab-ci job? Suppose I have the job below in my .gitlab-ci.yml.
build:master:
  image: ubuntu:latest
  script:
    - cp sample.txt /sample.txt
  stage: build
  only:
    - master
How can I copy sample.txt inside the Ubuntu image? I was thinking that, since it is already a running container, we can't run a copy command directly but have to run
docker cp sample.txt mycontainerID:/sample.txt
but then how will I get mycontainerID? It will be running inside a GitLab runner and a random ID will be assigned on every run. Is my assumption wrong?
The file is already inside the container. If you read carefully through the CI/CD build log, you will see at the very top after pulling the image and starting it, your repository is cloned into the running container.
You can find it under /builds/<organization>/<repository>
(note that these are examples, and you have to adjust to your actual organization and repository name)
Or with the variable $CI_PROJECT_DIR
In fact, that is the directory you are in when starting the job.
For example, this .gitlab-ci.yml:
image: alpine
test:
  script:
    - echo "the project directory is - $CI_PROJECT_DIR"
    - cat $CI_PROJECT_DIR/README.md ; echo
    - echo "and where am I? - $PWD"
returns pipeline output (screenshot omitted) in which the content of README.md is printed from inside the container.
We do not need to copy. The repository files will be available in the image. GitLab does that for us.
Use the ls (Linux) or dir (Windows) command, depending on your platform, to list files and folders.
Your runner is already executing script in your docker container.
What your job does here is:
- start a container using the Ubuntu image and mount your Git project in there,
- cp sample.txt from the Git project's root to the container's /,
- stop the container saying "job done".
That's basically what image means: use this docker image to start a container that will execute the commands listed in the script part.
I don't exactly understand what you're trying to achieve. If it's a build job, then why don't you actually COPY your file from your Dockerfile and configure your job to build it with docker build? A Runner shell executor doing docker build -t your/image:latest -f build/Dockerfile . will do just fine. Then you push this image to some Docker registry (Gitlab's, for example, or Docker Hub).
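For illustration, a minimal build/Dockerfile for that approach could be (the image and paths are only examples):
FROM ubuntu:latest
# bake the file into the image at build time instead of copying it into a running container
COPY sample.txt /sample.txt
The runner's shell executor would then run the docker build and docker push commands mentioned above against it.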
If really your goal is more complex and you want to just add a file to a running container, you can use the same Runner (with a shell executor, not a docker one, so no image) and run something like
- docker run --name YOUR_CONTAINER_NAME -v $PWD:/mnt ubuntu:latest cp /mnt/sample.txt /sample.txt
- docker commit -m "Commit Message" -a "You" YOUR_CONTAINER_NAME your/image:latest
- docker push your/image:latest
- docker rm YOUR_CONTAINER_NAME
Note: I'm not 100% sure the first one would work, but that would be the general idea of creating an image from a container without relying on the actual Dockerfile if really you can't achieve your goal with a Dockerfile.
I'm trying to copy an entire directory from my docker image to my local machine.
The image is a keycloak image, and I'd like to copy the themes folder so I can work on a custom theme.
I am running the following command -
docker cp 143v73628670f:keycloak/themes ~/Development/Code/Git/keycloak-recognition-login-branding
However I am getting the following response -
Error response from daemon: Could not find the file keycloak/themes in container 143v73628670f
When I connect to my container using -
docker exec -t -i 143v73628670f /bin/bash
I can navigate to the themes by using -
cd keycloak/themes/
I can see it is located there and the files are as expected in the terminal.
I'm running the instance locally on a Mac.
How do I copy that entire themes folder to my local machine? What am I doing wrong please?
EDIT
As a result of running 'pwd', you should run the docker cp command as follows:
docker cp 143v73628670f:/opt/jboss/keycloak/themes ~/Development/Code/Git/keycloak-recognition-login-branding
You are forgetting the trailing ' / '. Therefore your command should look like this:
docker cp 143v73628670f:/keycloak/themes/ ~/Development/Code/Git/keycloak-recognition-login-branding
Also, you could make use of Docker volumes, which let you pass a local directory into the container when you run it.
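For example, a rough sketch (the image name and the local path are assumptions; the container path matches the one found with pwd above):
docker run -d -p 8080:8080 \
  -v ~/Development/Code/Git/keycloak-recognition-login-branding/themes:/opt/jboss/keycloak/themes \
  jboss/keycloak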
I want to copy my compiled war file to the Tomcat deployment folder in a Docker container. As COPY and ADD deal with moving files from the host to the container, I tried
RUN mv /tmp/projects/myproject/target/myproject.war /usr/local/tomcat/webapps/
as a modification to the answer for this question. But I am getting the error
mv: cannot stat '/tmp/projects/myproject/target/myproject.war': No such file or directory
How can I copy from one folder to another in the same container?
You can create a multi-stage build:
https://docs.docker.com/develop/develop-images/multistage-build/
Build the .war file in the first stage and name the stage, e.g. build, like this:
FROM my-fancy-sdk as build
RUN my-fancy-build #result is your myproject.war
Then in the second stage:
FROM my-fancy-sdk as build2
COPY --from=build /tmp/projects/myproject/target/myproject.war /usr/local/tomcat/webapps/
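As a more concrete sketch for the Tomcat case (the image tags and build command are assumptions, not taken from the question):
FROM maven:3 AS build
WORKDIR /tmp/projects/myproject
COPY . .
# produces target/myproject.war
RUN mvn package

FROM tomcat:9
COPY --from=build /tmp/projects/myproject/target/myproject.war /usr/local/tomcat/webapps/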
A better solution would be to use volumes to bind individual war files inside the Docker container, as done here.
Why your command fails
The command you are running tries to access files that are outside the build context of the Dockerfile. When you build the image using docker build ., the daemon is sent the context and only those files are accessible during the build. In docker build . the context is ., the current directory. Therefore, it will not be able to access /tmp/projects/myproject/target/myproject.war.
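To illustrate the context: a hedged sketch in which the war file is inside the context and therefore visible to COPY (paths taken from the question):
cd /tmp/projects/myproject
docker build -t myproject .
with a Dockerfile that uses COPY target/myproject.war /usr/local/tomcat/webapps/ instead of RUN mv.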
Copying from inside the container
Another option would be to copy while you are inside the container. First use volumes to mount the local folder inside the container and then go inside the container using docker exec -it <container_name> bash and then copy the required files.
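For example (the container name and image are placeholders):
docker run -d --name mycontainer -v /tmp/projects/myproject/target:/mnt/target <image_name>
docker exec -it mycontainer bash
# then, inside the container:
cp /mnt/target/myproject.war /usr/local/tomcat/webapps/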
Recommendation
But still, I highly recommend using
docker run -v "/tmp/projects/myproject/target/myproject.war:/usr/local/tomcat/webapps/myproject.war" <image_name>
I'm creating a Dockerfile that uses a base image: dockerfile/rabbitmq.
In the Dockerfile for rabbitmq there's a line to install a script into the image:
ADD bin/rabbitmq-start /usr/local/bin/
In my Dockerfile I don't have this line. I have my own ADD lines.
When I run the image all the rabbitmq binaries and config are there, along with my stuff, but there's no rabbitmq-start script anywhere.
Why isn't it present in my image? (If I run the base image dockerfile/rabbitmq the file is there, of course.) Are ADDs not "inherited" by derived images?
Seems to work for me:
I cloned dockerfile/ubuntu and built that locally,
I cloned dockerfile/rabbitmq and built that locally,
I cloned your repository and built that locally.
Booting a shell in your image:
docker run -it --rm gzoller/world bash
I see both the rabbitmq-start script added by the rabbitmq image as well as the start script installed in your image:
[ root@d0044b91278e:/data ]$ ls /usr/local/bin/
rabbitmq-start start