Docker volume with TypeScript

I have a Dockerfile for my TS app:
FROM node:alpine
WORKDIR /usr
COPY package.json ./
COPY tsconfig.json ./
COPY src ./src
RUN ls -a
RUN npm install
EXPOSE 4005
CMD ["npm","run","dev"]
and I'm able to build it with this command
docker build -t ts-prisma .
and run it like this
docker run -it -p 3000:4005 -v src-prisma:/usr/src ts-prisma
What I want to achieve is to attach a volume to it so that every time I change something in my code, the change shows up in the container.
I mean, the first time I build my app I have an endpoint like this:
app.get(
  "/",
  async (req: Request, res: Response): Promise<Response> => {
    return res.status(200).send({
      message: "Hello world!!!",
    });
  }
);
and if I curl `http://localhost:3000` it sends me the correct response of
{
  message: "Hello world!!!",
}
But if I change it to this:
app.get(
  "/",
  async (req: Request, res: Response): Promise<Response> => {
    return res.status(200).send({
      message: "This is a new message",
    });
  }
);
I'm still getting the older message.
I access the container with
docker exec -it <id> /bin/sh
and cat the index.ts file, and it's still the first version; nothing changed.
What am I missing?
I know there is a way to do it with volumes, but I couldn't figure it out.

Assuming that you want to run node in a container strictly for development, then you want to keep the source on your host drive and not have it in the container at all. So your COPY statements in the Dockerfile aren't necessary. You might actually be able to get away with running a completely standard node image.
Please note that I assume your host is a Linux or MacOS machine. If you're on Windows with WSL2, then doing this is hard(er) since file changes on the Windows file system aren't sent to Linux containers, so the container won't get notified when you make changes to your files.
To make sure that you have the necessary packages installed, we can run
docker run --rm -v $(pwd):/src -w /src node:alpine npm install
That will install the packages you need on your host file system.
Now you can start your development environment with
docker run --rm -v $(pwd):/src -p 3000:4005 -w /src node:alpine npm run dev
Now you should be able to change your files on your host file system and your container should pick up the changes.
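Note that this relies on your dev script actually watching the source files. A minimal sketch of such a script in package.json, assuming a watcher like ts-node-dev (nodemon works similarly; the src/index.ts entry point is an assumption):
{
  "scripts": {
    "dev": "ts-node-dev --respawn src/index.ts"
  }
}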
When your app is done and you want to create a final image, you can do it with a Dockerfile like this
FROM node:alpine
WORKDIR /src
COPY . ./
RUN npm install
EXPOSE 4005
CMD ["npm", "run", "production"]
That image will have all your source code inside it, so once it's built, no changes to the code will have an effect on it unless you build it again.
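The CMD also assumes your package.json defines a production script; a minimal sketch of what that could look like (the tsc output path is an assumption):
{
  "scripts": {
    "production": "tsc && node dist/index.js"
  }
}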
To build it and run it, you'd do
docker build -t myimage .
docker run -p 3000:4005 myimage

Related

Docker container only runs dashboard app on localhost:4200 and when localhost:8080 it display the nginx webpage

So I wrote this Dockerfile:
FROM node:13-alpine as build
WORKDIR /app
COPY package*.json /app/
RUN npm install -g ionic
RUN npm install
COPY ./ /app/
RUN npm run build
FROM nginx:alpine
RUN rm -rf /usr/share/nginx/html/*
COPY --from=build /app/dist/ /usr/share/nginx/html/
When it runs the command npm run build it creates the dist folder.
The second-to-last line removes everything from the nginx/html folder, and the last line replaces that folder's contents with the files from the dist folder, where the index.html is.
When I run:
docker build -t dashboard-app:v1 .
it creates the image. Then I run:
docker run --name dashboard-app-container -d -p 8080:80 dashboard-app:v1
When I go to localhost:8080 it shows the default nginx page: "If you see this page, the nginx web server is successfully installed and working. Further configuration is required."
I don't know if the problem is that Docker is not able to replace the dist folder and find the index.html, or if it is some port problem.
When I run it on localhost:4200 I can see the dashboard app.
Any suggestions?
Thank you in advance
It is hard to know what your dist folder contains and what was copied over to the nginx/html/ location.
As long as you get a response on port 8080, nginx is running but is not able to find an index.html page in the nginx/html/ folder.
What I suggest is running your Docker image with the following command from a terminal. Notice that the -d is removed, so you will be able to see the logs from the container:
docker run --name dashboard-app-container -p 8080:80 dashboard-app:v1
In another terminal, connect to the running container using the following command:
docker exec -it dashboard-app-container sh
This will open a shell in the container. Navigate to /usr/share/nginx/html and investigate its content. You will be able to see what was copied over from the dist folder and adjust the Dockerfile afterwards.
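Once the shell is open, something along these lines (paths taken from your Dockerfile) will show whether index.html actually made it into the image:
cd /usr/share/nginx/html
ls -la
# if index.html is missing here, check whether your build tool nests
# its output, e.g. dist/<project-name>/ instead of dist/
A common cause with Angular/Ionic builds is exactly that nested output folder, in which case the COPY --from=build line needs the deeper path.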

how to override the files in docker container

I have the below Dockerfile:
FROM node:16.7.0
ARG JS_FILE
ENV JS_FILE=${JS_FILE:-"./sum.js"}
ARG JS_TEST_FILE
ENV JS_TEST_FILE=${JS_TEST_FILE:-"./sum.test.js"}
WORKDIR /app
# Copy the package.json to /app
COPY ["package.json", "./"]
# Copy source code into the image
COPY ${JS_FILE} .
COPY ${JS_TEST_FILE} .
# Install dependencies (if any) in package.json
RUN npm install
CMD ["sh", "-c", "tail -f /dev/null"]
After building the Docker image, if I run the image with the below command, I still cannot see the updated files.
docker run --env JS_FILE="./Scripts/updated_sum.js" --env JS_TEST_FILE="./Test/updated_sum.test.js" -it <image-name>
I would like to see updated_sum.js and updated_sum.test.js in my container; however, I still see sum.js and sum.test.js.
Is it possible to achieve this?
This is my current folder/file structure:
.
├── Dockerfile
├── package.json
├── sum.js
├── sum.test.js
├── Test
│   └── updated_sum.test.js
└── Scripts
    └── updated_sum.js
Using Docker generally involves two phases. First, you compile your application into an image, and then you run a container based on that image. With the plain Docker CLI, these correspond to the docker build and docker run steps. docker build does everything in the Dockerfile, then stops; docker run starts from the fixed result of that and runs the image's CMD.
So if you run
docker build -t sum .
The sum:latest image will have the sum.js and sum.test.js files, because that's what the Dockerfile COPYs in. You can then
docker run --rm sum \
  ls
docker run --rm sum \
  node ./sum.js
to see and run the contents of the image. (Specifying the latter command as CMD would be a better practice.) You can run the command with different environment variables, but it won't change the files in the image:
docker run --rm -e JS_FILE=missing.js sum ls
# still only has sum.js
docker run --rm -e JS_FILE=missing.js sum node missing.js
# not found
Instead you need to rebuild the image, using docker build --build-arg options to provide the values
docker build \
  --build-arg JS_FILE=./product.js \
  --build-arg JS_TEST_FILE=./product.test.js \
  -t product \
  .
docker run --rm product node ./product.js
The extremely parametrizable Dockerfile you show here can be a little harder to work with than a single-purpose Dockerfile. I might create a separate Dockerfile per application:
# Dockerfile.sum
FROM node:16.7.0
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY sum.js sum.test.js ./
CMD node ./sum.js
Another option is to COPY the entire source tree into the image (Javascript files are pretty small compared to a complete Node installation) and use a docker run command to pick which script to run.
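A minimal sketch of that variant, assuming the folder structure shown in the question (the image name is made up):
# Dockerfile.all (hypothetical)
FROM node:16.7.0
WORKDIR /app
COPY package*.json ./
# npm ci assumes a package-lock.json; use npm install otherwise
RUN npm ci
# copy the whole tree, including Test/ and Scripts/
COPY . ./
CMD ["node", "./sum.js"]
With that image, docker run --rm <image> node ./Scripts/updated_sum.js picks a different script at run time without rebuilding.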

Get build files to persist on host after docker-compose build is run

I'm trying to run a docker-compose build command with a Dockerfile and a docker-compose.yml file.
Inside the docker-compose.yml file, I'm trying to bind a local folder on the host machine ./dist with a folder on the container app/dist.
version: '3.8'
services:
  dev:
    build:
      context: .
    volumes:
      # I'm expecting files changed or added in the container's /app/dist
      # to be reflected in the host's ./dist folder
      - ./dist:/app/dist
Inside the Dockerfile, I build some files with an NPM script that I want to be available on the host machine once the build is finished. I'm also touching a new file, /app/dist/test.md, as a simple test to see whether the file ends up on the host machine, but it does not.
FROM node:8.17.0-alpine as example
RUN mkdir /app
WORKDIR /app
COPY . /app
RUN npm install
RUN npm run dist
RUN touch /app/dist/test.md
Is there a way to do this? I also tried using the "long syntax" as mentioned in the Docker Compose v3 documentation: https://docs.docker.com/compose/compose-file/compose-file-v3/
The easiest way to do this is to install Node and run the npm commands directly on the host.
$BREW_OR_APT_GET_OR_YUM_OR_SOMETHING install node
npm install
npm run dist
# done
There's not an easy way to use a Dockerfile to build host content. The Dockerfile can't write out directly to the host filesystem; if you use a volume mount, the host volume hides the container content before anything else happens.
That means, if you want to use this approach, you need to launch a temporary container to get the content out. You can do it with a one-off container, mounting the host directory somewhere other than /app, making the main container command be cp:
sudo docker build -t myimage .
sudo docker run --rm \
  -v "$PWD/dist:/out" \
  myimage \
  cp -a /app/dist/. /out/
Or, if you specifically wanted to use docker cp:
sudo docker build -t myimage .
sudo docker create --name to-copy myimage
sudo docker cp to-copy:/app/dist ./dist
sudo docker rm to-copy
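Since the question already uses Compose, a hedged equivalent of the one-off copy container could look like this (the dist-export service name is made up; paths match the Dockerfile above):
services:
  dist-export:
    build:
      context: .
    volumes:
      - ./dist:/out
    command: cp -a /app/dist/. /out/
Running docker-compose run --rm dist-export would then copy the built files out to the host's ./dist.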
Note that all of these sequences are more complex than just installing a local Node via a package manager, and they require administrator permissions (you can use the same technique to overwrite any host file, including the /etc/shadow file with encrypted passwords).

How to run a docker container created with a go binary?

I am trying to create a docker container with a Dockerfile and a go file binary.
I have two files in my folder: Dockerfile and main, where the latter is a binary of my simple go file.
Contents of Dockerfile:
FROM golang:1.11-alpine
WORKDIR /app
COPY main /app/
RUN ["chmod", "+x", "/app/main"]
ENTRYPOINT ["./main"]
I tried the following steps:
sudo docker build -t naive5cr .
sudo docker run -d -p 8080:8080 naive5cr
The error which I see through docker logs <container-id>:
standard_init_linux.go:207: exec user process caused "no such file or directory"
My go file content [I think it is irrelevant to the problem]:
package main

import (
	"net/http"
	"os"
)

func main() {
	// the index handler is defined elsewhere in the file and omitted here
	http.HandleFunc("/", index)
	http.ListenAndServe(port(), nil)
}

func port() string {
	port := os.Getenv("PORT")
	if len(port) == 0 {
		port = "8080"
	}
	return ":" + port
}
The binary "main" runs as expected when run standalone, so there is no problem with the content of the go file.
You need to compile with CGO_ENABLED=0 to prevent links to libc on Linux when networking is used in Go. Alpine ships with musl rather than libc, and attempts to find libc result in the no such file or directory error. You can verify this by running ldd main to see the dynamic links.
You can also build on an Alpine based host to link to musl instead of libc. The advantage of a completely statically compiled binary is the ability to run on scratch, without any libraries at all.
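A quick sketch of what that looks like in practice (the build command assumes the code is in the current directory):
# statically link by disabling cgo; the result needs no libc/musl at runtime
CGO_ENABLED=0 GOOS=linux go build -o main .
# confirm there are no dynamic links; for a static binary this prints
# something like "not a dynamic executable"
ldd main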
Go compiles down to native code, so make sure to build your go code on the Docker image instead of copying the binary to the docker image.
e.g.
FROM golang:1.11-alpine
WORKDIR /app
ADD . /app
RUN cd /app && go build -o goapp
ENTRYPOINT ./goapp
Also as a bonus, here is how to create really tiny Docker images with multistage Docker builds:
FROM golang:1.11-alpine AS build-env
ADD . /src
RUN cd /src && go build -o goapp
FROM alpine
WORKDIR /app
COPY --from=build-env /src/goapp /app/
ENTRYPOINT ./goapp
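Combining the two answers, a hedged variant of the multi-stage build that targets scratch (it requires the static CGO_ENABLED=0 build described in the first answer):
FROM golang:1.11-alpine AS build-env
ADD . /src
RUN cd /src && CGO_ENABLED=0 go build -o goapp

# scratch has no shell, so ENTRYPOINT must use the exec form
FROM scratch
COPY --from=build-env /src/goapp /goapp
ENTRYPOINT ["/goapp"]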

docker -v no longer needed? and Dockerfile

I've read tutorials about using docker:
docker run -it -p 9001:3000 -v $(pwd):/app simple-node-docker
but if I use:
docker run -it -p 9001:3000 simple-node-docker
it works too? Is -v no longer needed? Or is it taking the WORKDIR line from the Dockerfile?
FROM node:9-slim
# WORKDIR specifies the directory our
# application's code will live within
WORKDIR /app
Other tutorials run mkdir ./app in the Dockerfile, others don't; is WORKDIR enough for Docker to create the folder automatically if it does not exist?
There are two common ways to get application content into a Docker container. Many Node tutorials I've seen confusingly do both of them. You don't need docker run -v, provided you docker build your container when you make changes.
The first way is to copy a static copy of the application into the image. You'd do this via a Dockerfile, typically looking something like this:
FROM node
WORKDIR /app
# Install only dependencies now, to make rebuilds faster
COPY package.json yarn.lock ./
RUN yarn install
# NB: node_modules is in .dockerignore so this doesn't overwrite
# the previous step
COPY . ./
RUN yarn build
CMD ["yarn", "start"]
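The .dockerignore that the comment refers to isn't shown in the original; a minimal sketch could be:
node_modules
dist
Keeping node_modules in .dockerignore prevents the host's copy from overwriting the node_modules installed in the previous step.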
The resulting Docker image is self-contained: if you have just the image (maybe you docker pulled it from a repository) you can run it, as you note, without any special -v option. This path has the downside that you need to re-run docker build to recreate the image if you've made any changes.
The second way is to use docker run -v to inject the current source directory into the container. For example:
# --rm: clean up after we're done
# -p 3000:3000: publish a port
# -v $PWD:/app: mount the current directory over /app
# -w /app: set the default working directory
# node: image to run; yarn start: command to run
docker run \
  --rm \
  -p 3000:3000 \
  -v "$PWD:/app" \
  -w /app \
  node \
  yarn start
This path hides everything in the /app directory in the image and replaces it in the container with whatever you have in your current directory. This requires you to have a built functional copy of the application's source tree available, and so it supports things like live reloading; helpful for development, not what you want for Docker in production.
Like I say, I've seen a lot of tutorials do both things:
# First build an image, populating /app in that image
docker build -t myimage .
# Now run it, hiding whatever was in /app
docker run --rm -p3000:3000 -v$PWD:/app myimage
You don't need the -v option, but you do need to manually rebuild things if your application changes.
$EDITOR src/file.js
yarn test
sudo docker build -t myimage .
sudo docker run --rm -p3000:3000 myimage
As I note here the docker commands require root-equivalent permission; but on the flip side the final docker run command is very close to what you'd run "for real" (maybe via Docker Compose or Kubernetes, but without requiring a copy of the application source).
