I am creating an Astro.js container with Docker on Windows.
Dockerfile
FROM node:18-alpine3.15
RUN mkdir app
WORKDIR /app
COPY . .
RUN npm install
EXPOSE 24678
CMD ["npm","run","dev","--","--host"]
I build my image with the following command
docker build . -t astro
I run my container with this command
docker run --name astro1 -p 24678:24678 -v D:\Workspace\Docker\Practicas\docker-astro-example:/app -v /app/node_modules/ astro
So far, no problems, but when I make a change to the index.astro file, the page does not refresh to show the changes.
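A hedged sketch of what usually fixes this on Windows: publish the dev server's HTTP port in addition to the HMR port, and force the file watcher to poll, since file-change events often fail to cross a Windows bind mount. Port 3000 below is an assumption (recent Astro releases default to 4321; check which port the dev server prints on startup). CHOKIDAR_USEPOLLING is honored by chokidar, the watcher Vite uses; if it has no effect in your version, the equivalent setting is server.watch.usePolling under vite in astro.config.mjs.
# 3000:3000 is the assumed dev-server port; 24678 is the HMR websocket port
# CHOKIDAR_USEPOLLING=true forces polling, which Windows bind mounts usually need
docker run --name astro1 -p 3000:3000 -p 24678:24678 -e CHOKIDAR_USEPOLLING=true -v D:\Workspace\Docker\Practicas\docker-astro-example:/app -v /app/node_modules/ astro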
Related
Docker version 20.10.21
The docker run command with the -v option works as expected when the destination path is anything other than /app, but when the destination path is /app it doesn't work as expected.
This command works as expected:
docker run -d -v ${pwd}:/app2 react-app
This command does not work as expected:
docker run -d -v ${pwd}:/app react-app
As seen in the screenshot, there is no port listed for the second container.
Here is the Dockerfile content:
FROM node:14.16.0-alpine3.13
RUN addgroup app && adduser -S -G app app
USER app
WORKDIR /app
RUN mkdir data
COPY package*.json .
RUN npm install
COPY . .
ENV API_URL=http://api.myapp.com/
EXPOSE 3000
CMD [ "npm", "start" ]
You are running npm install in /app in the Dockerfile, but then at runtime you are mounting your current directory over the files you installed into /app during the build. Don't install your dependencies into /app during the build if you want to mount over /app at runtime.
Please try using $(pwd) instead of ${pwd}. Also, if you are running it under Windows, you probably need to use a shell that implements the pwd command correctly, e.g. Git Bash.
docker run -d -v $(pwd):/app react-app
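For reference, which form expands correctly depends on the shell; all three lines below use standard shell syntax and only the image name from the question:
docker run -d -v "$(pwd):/app" react-app    # Git Bash / WSL / Linux shells
docker run -d -v "${PWD}:/app" react-app    # PowerShell
docker run -d -v "%cd%:/app" react-app      # cmd.exe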
Also, once you start the container, please check docker container inspect <container ID>, specifically the Mounts section.
Or you can filter the output:
docker container inspect <container ID> -f '{{ .Mounts }}'
Also, if you see that the container exits immediately, please check its logs with
docker logs <container ID>
I solved it by excluding node_modules from the mount:
docker run -d -v ${pwd}:/app -v /app/node_modules react-app
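This works because the anonymous volume -v /app/node_modules shadows that one path, so the bind mount of the project directory no longer hides the dependencies installed during the build. To confirm, list them inside the running container:
docker exec <container ID> ls /app/node_modules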
I have a Dockerfile which, when built and run, stops. I am trying to run both the client and the server in one Docker container. A docker-compose solution is already in place and working fine; I'm asking how to keep the container up and running using docker run. Thanks!
Here is my Dockerfile, package.json, and a screenshot of the folder structure.
Dockerfile contents:
FROM node:14.14.0-alpine
RUN apk update && apk add bash
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
WORKDIR /app
EXPOSE 3000
EXPOSE 4565
CMD ["npm","run","prebuild"]
docker build command:
docker build -t sample .
docker run command:
docker run -d -it --name sm -v `pwd`:/app sample
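A container stays up only as long as the process started by its CMD, and npm run prebuild looks like a one-shot script, so the container exits as soon as it finishes. A minimal sketch of keeping it alive by overriding the command at run time, assuming package.json defines a long-running script such as serve (the script name is an assumption, since the package.json contents are only shown as a screenshot):
docker run -d -it --name sm -v `pwd`:/app sample npm run serve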
Vue CLI version is ~5.0.0. Vue version is 3.
This is my Dockerfile:
FROM node:lts-alpine
WORKDIR /app
COPY package*.json .
RUN ["npm", "install"]
COPY . .
EXPOSE 8080
CMD ["npm", "run", "serve"]
I build the image (image name is web) and run it with
docker run -p 8080:8080 --rm -it -v ${pwd}:/app -v /app/node_modules web
If I make changes to the source code I see no changes in the browser. Inside the container, if I inspect the files, I can see that they have changed, but I see no changes in the browser.
I've seen examples of people running a Vue container the following way:
docker run -p 8080:8080 -e CHOKIDAR_USEPOLLING=true -e HOST=0.0.0.0 --rm -it -v ${pwd}:/app -v /app/node_modules web
However, the above doesn't fix the problem.
I think it has something to do with WebSockets, but I'm kind of lost on what to try.
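Two hedged things to check, given that the files demonstrably change inside the container. First, chokidar (the watcher behind webpack-dev-server, which Vue CLI 5 uses) honors a polling interval in addition to the on/off switch, so it is worth setting both explicitly; the interval value below is illustrative:
docker run -p 8080:8080 -e CHOKIDAR_USEPOLLING=true -e CHOKIDAR_INTERVAL=500 -e HOST=0.0.0.0 --rm -it -v ${pwd}:/app -v /app/node_modules web
Second, if a manual browser refresh does show the new code but hot reload still fails, the HMR WebSocket is the likely culprit; webpack-dev-server 4 lets you pin the URL the browser connects to via devServer.client.webSocketURL in vue.config.js.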
I want to copy a file from a container to my local machine. The file is generated after executing a Python script, but due to the ENTRYPOINT the container exits right after it runs, so I can't use the docker cp command. Any idea how to prevent the container from exiting before I manage to copy the file? Below is my Dockerfile:
FROM python:3.9-alpine3.12
WORKDIR /app
COPY . /app/
RUN pip install --no-cache-dir -r requirements.txt && \
rm -f /var/cache/apk/*
ENTRYPOINT ["python3", "main.py"]
I use this command to run the image:
docker run -d -it --name test [image]
If the output file is stored in its own directory (say /app/output) you can run: docker run -d -it -v $PWD/output:/app/output/ --name test [image] and the file will appear in the output directory under your current directory.
If it's not, then run the container with: docker run -d -it --name test [image]
Then copy the file to your own filesystem using docker cp test:/app/example.json . to copy it into the current directory. Note that docker cp also works on a container that has already exited, as long as it has not been removed.
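Put together, the whole flow can be as short as this (the container name test mirrors the question's command; example.json stands in for whatever main.py writes):
docker run --name test [image]        # runs main.py to completion in the foreground
docker cp test:/app/example.json .    # docker cp works on the exited container
docker rm test                        # clean up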
If running the container in the background is unnecessary, you can stream the file over stdout instead. Because the image defines an ENTRYPOINT, the command has to be overridden so that the script runs first and the file is then written to stdout (also drop -t, which would corrupt the redirected output):
docker run --rm --entrypoint sh [image] -c "python3 main.py && cat /app/example.json" > out_example.json
I need to access a directory of one Docker container from another Docker container.
In the first container I am running a Node.js application, and in the tests/e2e folder there are my e2e tests and the configuration for WebdriverIO.
Also, I don't need a persistent volume, like I've used so far. I just need the test files for as long as both containers are running.
$ docker run \
    --name app_stage \
    --volume tests:/app/tests \
    --detach \
    app:stage
This is the Dockerfile for that application:
RUN mkdir -p /app
WORKDIR /app
COPY . /app
RUN npm install
RUN npm run build
EXPOSE 3000
ENV NODE_ENV production
CMD next start
In the second container I'm running WebdriverIO, which needs the tests and the configuration that the first container stores in /app/tests.
$ docker run \
    --rm \
    --volumes-from app_stage \
    webdriverio wdio
But this is not working, as I do not see the needed directory in the second container.
First, declare a VOLUME in your Dockerfile:
RUN mkdir -p /app
WORKDIR /app
COPY . /app
RUN npm install
RUN npm run build
EXPOSE 3000
ENV NODE_ENV production
VOLUME /app/tests
CMD next start
Use your first command to start the app_stage container, then start the webdriverio container with the second command.
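A sketch of the resulting flow, assuming the image is rebuilt with the VOLUME line. The explicit --volume tests:/app/tests is then no longer required on the first command, because the anonymous volume declared in the Dockerfile is what --volumes-from shares; you can confirm what the first container exposes with docker container inspect app_stage -f '{{ .Mounts }}':
docker run --name app_stage --detach app:stage
docker run --rm --volumes-from app_stage webdriverio wdio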