I have a very simple Docker image that should contain both NodeJS and OpenJDK (to build Cordova apps).
So far so good: everything works, the versions are correct, and the shell finds all the binaries.
But when I use a multi-stage build, the final image based on Alpine is unable to find the apk command (the package manager), and I need it.
My current Dockerfile is
FROM node:current-slim AS node
COPY --from=openjdk:latest . .
FROM alpine:latest
COPY --from=node . .
ENV JAVA_HOME=/usr/java/openjdk-14
WORKDIR /usr/src/app
RUN npm i -g @angular/cli cordova
CMD ["bash"]
When I try to run
apk add unzip
The error bash: apk: command not found pops up.
Running
docker run -it --rm alpine:latest
allows me to use apk.
It seems the binaries are overridden and I can no longer use them.
Is there a way of doing this? (I'm quite new to Docker.)
My requirements are for the following commands to work without any error when running docker run -it --rm --name myContainer myImage:
npm -v
node -v
java -version
apk --help
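For reference, a minimal single-stage sketch that installs everything through apk instead of copying filesystems between images (the package names and the JAVA_HOME path are assumptions; check the Alpine repositories for the exact names):
# Sketch: stay on Alpine and install each tool with its own package,
# so apk, node, npm, and java all remain available.
FROM alpine:latest
# "openjdk11" and the JAVA_HOME path below are assumed package/path names
RUN apk add --no-cache bash nodejs npm openjdk11
ENV JAVA_HOME=/usr/lib/jvm/java-11-openjdk
WORKDIR /usr/src/app
RUN npm i -g @angular/cli cordova
CMD ["bash"]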
Related
I am creating an Astro js container with Docker on Windows.
Dockerfile
FROM node:18-alpine3.15
RUN mkdir app
WORKDIR /app
COPY . .
RUN npm install
EXPOSE 24678
CMD ["npm","run","dev","--","--host"]
I build my image with the following command
docker build . -t astro
I run my container with this command
docker run --name astro1 -p 24678:24678 -v D:\Workspace\Docker\Practicas\docker-astro-example:/app -v /app/node_modules/ astro
So far there are no problems, but when I make a change in the index.astro file, the page does not refresh to show the changes.
So I want to install packages locally using npm from Docker, but how do I specify the image?
I tried the following:
docker run --rm -v $(pwd):/app npm install
If I understand you correctly, you can use the node image. The node image usually starts in the root directory, so you need to change into the /app directory first. Something like this:
docker run --rm -v $(pwd):/app node /bin/bash -c "cd /app && npm install"
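Alternatively, docker run has a -w flag that sets the working directory directly, which avoids the shell wrapper (a sketch of the same idea):
docker run --rm -v $(pwd):/app -w /app node npm install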
I'm building a Dockerfile and files in the container are not getting synced with local storage.
Dockerfile:
FROM maven:3.6.1-jdk-8
ENV HOME=\wc_console
RUN mkdir $HOME
ADD . $HOME
WORKDIR $HOME
RUN mvn clean install -T 2C -DskipTests=true
RUN mvn dependency:go-offline -B --fail-never
CMD mvn clean install -T 2C -DskipTests=true
My docker build command:
docker build -f build_maven_docker . -t wc_console_build:1.0
I want to use bind-mount because after the container runs, I need the output on my local directory.
My docker run command:
docker run -v e:\svn\daffodil-dev-3.4.1\whitecoats-admin:\wc_console wc_console_build:1.0
My current working directory on the local machine while running Docker is: e:\svn\daffodil-dev-3.4.1\whitecoats-admin
My work directory in the Docker container is: wc_console
But whenever I run the Docker container, it does not sync the final output back to my local directory.
What am I doing wrong?
Image for folder visualization.
Instead of using \wc_console in your Dockerfile's ENV HOME=\wc_console, use /wc_console. Linux uses forward slashes as path separators. The same goes for your docker run command. Change
docker run -v e:\svn\daffodil-dev-3.4.1\whitecoats-admin:\wc_console wc_console_build:1.0
to
docker run -v e:\svn\daffodil-dev-3.4.1\whitecoats-admin:/wc_console wc_console_build:1.0
When you mount the volume, you actually replace the contents of /wc_console with whatever you have on your host.
If you want to get the artefacts generated by Maven, you need to run the Maven commands in the running container, not as part of the build process.
When you do this, you also don't need to add your sources to the image at build time.
FROM maven:3.6.1-jdk-8
ENV HOME=/wc_console
WORKDIR $HOME
# Make this part of the ENTRYPOINT if you really need it
#RUN mvn dependency:go-offline -B --fail-never
ENTRYPOINT mvn clean install -T 2C -DskipTests=true
That being said, for what you need you don't even really need a Dockerfile:
docker run --rm -v e:\svn\daffodil-dev-3.4.1\whitecoats-admin:/wc_console --workdir /wc_console maven:3.6.1-jdk-8 mvn clean install -T 2C -DskipTests=true
Does the Docker Desktop Community version for Windows cache the containers?
I was removing some of my containers and then trying to compose them again for a Python 3/Flask/Angular 7 application, and it was bringing them up pretty fast without installing dependencies. I had to remove the containers and then restart my machine for it to build the containers again.
I was running this command:
docker-compose up --build
Yes, I have a docker-compose.yml. I also have a Dockerfile with commands to install the dependencies.
FROM python:3.7
RUN mkdir -p /var/www/flask
# Update working directory
WORKDIR /var/www/flask
# Copy everything from this directory to the server/flask Docker container
COPY . /var/www/flask/
# Give execute permission to the file below, so that the script can be executed
# by Docker.
RUN chmod +x /var/www/flask/entrypoint.sh
# Install the Python libraries
RUN pip3 install --no-cache-dir -r requirements.txt
# Copy uwsgi.ini
COPY ./uwsgi.ini /etc/uwsgi.ini
EXPOSE 5000
# Run server
CMD ["./entrypoint.sh"]
I also tried the following commands:
docker system prune
docker-compose up --build --force-recreate
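For what it's worth, docker-compose can also be told to skip the layer cache entirely, which is a more direct test of whether caching is the culprit (a sketch; the --no-cache flag applies to the build step):
docker-compose build --no-cache
docker-compose up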
I have a simple Dockerfile:
FROM ubuntu:16.04
MAINTAINER T-vK
RUN useradd -m -s /bin/bash -g dialout esp
USER esp
WORKDIR /home/esp
COPY ./entrypoint_script.sh ./entrypoint_script.sh
ENTRYPOINT ["/home/esp/entrypoint_script.sh"]
When I run docker build . followed by docker run -t -i ubuntu and look for the directory /home/esp, it is not there! The whole directory, including its files, seems to be gone.
Though, when I add RUN mkdir /home/esp to my Dockerfile, it won't build, telling me mkdir: cannot create directory '/home/esp': File exists.
So what am I misunderstanding here?
I tested this on Debian 8 x64 and Ubuntu 16.04 x64.
With Docker version 1.12.2
Simply change your Docker build command to:
docker build -t my-docker:dev .
And then to execute:
docker run -it my-docker:dev
Then you'll get what you want. You didn't tag your docker build, so docker run -t -i ubuntu was actually running the plain Ubuntu image, not the one you built.
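For completeness, an untagged build can also be run by the image ID that docker build prints at the end (the ID below is a placeholder for whatever your build output shows):
docker run -it <image-id-from-build-output>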