I want to know how to better automate my npm project with Docker.
I'm using webpack with a Vue.js project. When I run npm run build I get an output folder ./dist, which is fine. If I then build a Docker image via docker build -t projectname . and run the container, everything works perfectly.
This is my Dockerfile (found here)
FROM httpd:2.4
COPY ./dist /usr/local/apache2/htdocs/
But it would be nice if I could just build the Docker image and not have to build the project manually via npm run build first. Do you understand my problem?
What could be possible solutions?
If you're doing all of your work (npm build and others) outside of the container and have infrequent changes, you could use a simple shell script to wrap the two commands.
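For example, a minimal sketch of such a wrapper (the function name is an assumption; the image tag is taken from the question):

```shell
# hypothetical wrapper - the same two commands could live in a build.sh
build_and_package() {
  # build the bundle into ./dist, then bake it into the image
  npm run build && docker build -t "${1:-projectname}" .
}
```

Calling it gives you a single command for the whole build, and the && ensures the image is only built if the npm build succeeded.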
If you're doing more frequent iterative development you might consider using a task runner (grunt maybe?) as a container service (or running it locally).
If you want to do the task running/building inside of Docker, you might look at docker-compose. The exact details of how to set this up would require more detail about your workflow, but docker-compose makes it relatively easy to define and link multiple services in a single file, and to start and stop them with a simple set of commands.
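A rough sketch of that idea (service names, the node image tag, and the host port are assumptions, not taken from your project):

```yaml
# hypothetical docker-compose.yml: one service runs the npm build,
# the other serves the result with the httpd Dockerfile from the question
services:
  builder:
    image: node:18
    working_dir: /app
    volumes:
      - .:/app
    command: npm run build
  web:
    build: .        # the FROM httpd:2.4 Dockerfile above
    ports:
      - "8080:80"
```

Running the builder service first (e.g. docker-compose run builder) writes ./dist to the host via the volume, after which building the web service picks it up, roughly reproducing your two manual steps.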
Related
I'm developing an app with a few people and decided to fully dockerize the dev environment (for easier deployment and collaboration). I did it by making a super-short Dockerfile.dev for each service, such as this one:
FROM node:18
COPY ./frontend/dev_autostart.sh .
Each service has its own dev_autostart.sh script that waits for the volume to be mounted, then, depending on the service, installs dependencies and runs it in dev mode (e.g. npm i && npm run dev).
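For context, such a dev_autostart.sh might look roughly like this (a sketch for a Node service; using the presence of package.json as the "volume is mounted" check is an assumption):

```shell
# hypothetical dev_autostart.sh: wait for the bind mount, then start dev mode
wait_and_start() {
  until [ -f package.json ]; do  # volume not mounted yet
    sleep 1
  done
  npm i && npm run dev
}
```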
My issue with this setup is that installing dependencies is harder, because to do that I'd need to stop the container and install them from my machine, or change the Docker entrypoint to something like tail -F anything.
My ideal workflow would be
'docker attach $containerName'
Somehow stop current process without killing the container
Do stuff
Rerun process manually (cargo run . / npm run dev)
Is there any way to achieve that?
I'm trying to learn how to write a Dockerfile. Currently my strategy is as follows:
Guess what commands are correct to write based on the documentation.
Run sudo docker-compose up --build -d to build a Docker container.
Wait ~5 minutes for my anaconda packages to install
Find that I made a mistake on step 15, and go back to step 1.
Is there a way to interactively enter the commands for a Dockerfile, or to cache the first 14 successful steps so I don't need to rebuild the entire file? I saw something about docker exec, but it seems that's only for running containers. I also want to try to use the same syntax as in the Dockerfile (i.e. ENTRYPOINT and ENV), since I'm not sure what the bash equivalent is or whether it exists.
You can run docker-compose without the --build flag; that way you don't have to rebuild the image every time, although since you are testing the Dockerfile itself, I don't know if you have many options here. Docker should cache the builds automatically, but only if nothing has changed since the last build. There's no way to build an image interactively; Docker doesn't work like that. Lastly, docker exec is just for running commands inside a container that was created from the build.
Some references for you: Docker build cache, Dockerfile best practices.
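On the caching point, the main trick from those best practices is layer ordering: copy only the dependency spec before the slow install step, so that layer is reused until the spec changes. A sketch (the base image and file name are assumptions, given the Anaconda setup mentioned above):

```dockerfile
FROM continuumio/miniconda3
WORKDIR /app
# copy only the environment spec first, so the slow conda install layer
# is cached as long as environment.yml is unchanged
COPY environment.yml .
RUN conda env update -n base -f environment.yml
# source changes only invalidate the layers from here down
COPY . .
```

With this ordering, fixing a mistake in a later step no longer re-triggers the ~5-minute package install.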
I'm using the new experimental Docker BuildKit syntax to do a multi-stage build, like so:
Dockerfile:
RUN --mount=type=cache,target=/home/build/.build-cache,gid=1000,uid=1001 ./build
bash:
DOCKER_BUILDKIT=1 docker build .
Works great locally. On CI I get a new docker environment every time, so no caching.
I can export and import files into the env, but I don't know where the cache is located. Any ideas?
Or should I be exporting/importing the cache via some docker command? I've read https://docs.docker.com/engine/reference/commandline/build/#specifying-external-cache-sources and https://github.com/moby/buildkit#export-cache, but it's not clear to me which is BuildKit-specific, which is Docker-specific, or whether either really applies to this cache mounted into the Dockerfile RUN command.
I have added a public gist here of a failing test demonstrating what I was hoping for:
https://gist.github.com/Mahoney/85e8549892e0ae5bb86ce85339db1a71/6308f1bdb062a8982017193b96d61ec00a7698c5
And this later revision works, but I'm not happy with it - too much bootstrapping:
https://gist.github.com/Mahoney/85e8549892e0ae5bb86ce85339db1a71
There doesn't seem to be any way to extract this specific cache from the general docker working files.
However, you can of course back up the whole of /var/lib/docker. This doesn't work for CircleCI's remote Docker engine, because you don't have sudo access, but it does work for GitHub Actions, where you do.
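As a rough illustration of the GitHub Actions side (a sketch only; the step name and cache key are assumptions, and the linked action below handles details such as stopping the Docker daemon before archiving):

```yaml
# hypothetical workflow step caching the whole Docker data dir,
# which includes the BuildKit RUN cache mounts
- name: Cache /var/lib/docker
  uses: actions/cache@v3
  with:
    path: /var/lib/docker
    key: docker-${{ hashFiles('Dockerfile') }}
```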
See here for an example:
https://github.com/Mahoney-playground/docker-cache-action
I am working on trying to improve my development environment using Docker and Go but I am struggling to get auto reloading in my containers working when there is a file change. I am on Windows running Docker Desktop version 18.09.1 if that matters.
I am using CompileDaemon for reloading, and my Dockerfile is defined as follows:
FROM golang:1.11-alpine
RUN apk add --no-cache ca-certificates git
RUN go get github.com/githubnemo/CompileDaemon
WORKDIR /go/src/github.com/testrepo/app
COPY . .
EXPOSE 8080
ENTRYPOINT CompileDaemon -log-prefix=false -directory="." -build="go build /go/src/github.com/testrepo/app/cmd/api/main.go" -command="/go/src/github.com/testrepo/app/main"
My project structure follows
app
api
main.go
In my docker-compose file I have the correct volumes set, and the files are being updated in my container when I make changes locally.
The application is also started correctly using CompileDaemon on its first load; it's just never restarted on file changes.
On first load I see...
Running build command!
Build ok.
Restarting the given command.
Then any changes I make are not resulting in a restart even though I can connect to the container and see that the changes are reflected in the expected files.
Ensure that you have volumes mounted for the services you are using; that is what makes hot reloading work inside of a Docker container.
See the full explanation
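For reference, the volume mount should map your local source onto the WORKDIR used in the Dockerfile, roughly like this (the service name is an assumption; the path matches the Dockerfile above):

```yaml
services:
  api:
    build: .
    ports:
      - "8080:8080"
    volumes:
      # local edits land in the directory CompileDaemon is watching
      - .:/go/src/github.com/testrepo/app
```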
The proper way of installing CompileDaemon is:
RUN go install -mod=mod github.com/githubnemo/CompileDaemon
Reference: https://github.com/githubnemo/CompileDaemon/issues/45#issuecomment-1218054581
I have an engineering background, mostly in coding/development rather than deployment. We have recently introduced microservices to our team, and I am doing a POC on deploying these microservices to Docker. I made a simple application with Maven and Java 8 (not OpenJDK), and the JAR file is ready to be deployed, but I am stuck on the exact steps for deploying and running/testing the application in a Docker container.
I've already downloaded Docker on my Mac and went over this documentation, but I feel like some steps are missing in the middle, and I got confused.
I appreciate your help.
Thank you!
If you already have a built JAR file, the quickest way to try it out in Docker is to create a Dockerfile which uses the official OpenJDK base image, copies in your JAR, and configures Docker to run it when the container starts:
FROM openjdk:8
COPY my.jar /my.jar
CMD ["java", "-jar", "/my.jar"]
With that Dockerfile in the same location as your JAR file run:
docker build -t my-app .
Which will create the image, and then to run the app in a container:
docker run my-app
If you want to integrate Docker into your build pipeline, so the output of each build is a new image, then you can either compile the app inside the image (as in Mark O'Connor's comment above), or build the JAR outside of the image and just use Docker to package it, as in the simple example above.
The advantage of the second approach is a smaller image which just has the app without the source code. The advantage of the first is you can build your image on any machine with Docker - you don't need Java installed to build it.
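One way to get both benefits is a multi-stage build, compiling inside a Maven image and copying only the JAR into the runtime image. A sketch (image tags and the JAR path are assumptions about your Maven project):

```dockerfile
# stage 1: compile the app inside the image, so no local Java is needed
FROM maven:3-jdk-8 AS build
WORKDIR /app
COPY pom.xml .
COPY src ./src
RUN mvn -q package

# stage 2: a small runtime image containing just the JAR
FROM openjdk:8
COPY --from=build /app/target/my.jar /my.jar
CMD ["java", "-jar", "/my.jar"]
```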