docker container stops after docker run - docker

I have a Dockerfile which, when built and run, produces a container that stops immediately. I am trying to run both the client and the server in one docker container. A docker-compose setup is already in place and working fine; what I need is a way to keep the container up and running using plain docker run. Thanks!
Here is my docker file, package.json and screenshot of folder structure.
Dockerfile contents:
FROM node:14.14.0-alpine
RUN apk update && apk add bash
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
WORKDIR /app
EXPOSE 3000
EXPOSE 4565
CMD ["npm","run","prebuild"]
docker build command:
docker build -t sample .
docker run command:
docker run -d -it --name sm -v `pwd`:/app sample
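A container exits as soon as its main process (the CMD) exits, and npm run prebuild presumably finishes quickly, taking the container down with it. One common fix is to make CMD run a small foreground script that starts both processes and waits on them. A minimal sketch (the start:server and start:client script names are hypothetical placeholders for whatever the package.json defines):

```shell
#!/bin/bash
# start.sh - hypothetical entrypoint that keeps the container alive:
# launch both processes in the background, then block in the foreground.
npm run start:server &   # assumed script name
npm run start:client &   # assumed script name
# 'wait -n' blocks until one of the background jobs exits, so PID 1
# stays alive while both processes run, and the container stops if
# either one dies (making failures visible).
wait -n
```

With this in place the Dockerfile's last line would become CMD ["./start.sh"], and the container stays up as long as the processes do.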
Package.json:

Related

Astro in Docker does not refresh

I am creating an Astro js container with Docker on Windows.
Dockerfile
FROM node:18-alpine3.15
RUN mkdir app
WORKDIR /app
COPY . .
RUN npm install
EXPOSE 24678
CMD ["npm","run","dev","--","--host"]
I build my image with the following command
docker build . -t astro
I run my container with this command
docker run --name astro1 -p 24678:24678 -v D:\Workspace\Docker\Practicas\docker-astro-example:/app -v /app/node_modules/ astro
So far so good, but when I make a change to the index.astro document, the page does not refresh to show the changes.
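File-change events often don't propagate across a Windows bind mount into a Linux container, so the dev server's watcher never fires. A common workaround (a sketch; whether it applies depends on the Astro/Vite version in use) is to force the watcher into polling mode, for example via the CHOKIDAR_USEPOLLING environment variable that chokidar-based watchers honor:

```shell
# same run command as above, plus an env var asking the file watcher
# to poll for changes instead of relying on filesystem events
docker run --name astro1 -p 24678:24678 -e CHOKIDAR_USEPOLLING=1 -v D:\Workspace\Docker\Practicas\docker-astro-example:/app -v /app/node_modules/ astro
```

Polling trades some CPU for reliable change detection across the mount boundary.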

I am able to run the container in Docker but unable to view it in the browser

I am new to Docker. First, I created a Dockerfile within the source code location.
Here is my Dockerfile
FROM nginx:latest
RUN mkdir /app
COPY . /app
EXPOSE 8000
Next, I built an image using: docker build -t mywebapp:v1 .
and ran the container using the following command:
docker run -d -p 8000:8000 mywebapp:v1
The problem is: the container is running with port 8000 published, but I am unable to view the site in the browser at
http://192.168.13.135:8000
Please help me figure out how to view it in the browser.
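Worth noting for readers: the stock nginx:latest image listens on port 80, and EXPOSE 8000 does not change that (EXPOSE is documentation only; it opens nothing). Publishing host port 8000 to container port 80 is the usual fix, sketched here:

```shell
# map host port 8000 to nginx's default listen port 80 inside the container
docker run -d -p 8000:80 mywebapp:v1
```

The Dockerfile also copies the site into /app, while nginx serves /usr/share/nginx/html by default, so the COPY destination likely needs adjusting as well.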

Copy with Dockerfile

I'm trying to run an Angular app inside Docker with Nginx:
$ ls
dist Dockerfile
Dockerfile:
FROM nginx
COPY ./dist/statistic-ui /usr/share/nginx/html/
Inside dist/statistic-ui/ all app files.
But the COPY command doesn't seem to work: Nginx just starts with the default welcome page, and when I check the files inside /usr/share/nginx/html/ I only see the default Nginx files.
Why doesn't the COPY command work, and how do I fix it?
UPDATE
Run docker container
sudo docker run -d --name ui -p 8082:80 nginx
You need to build an image from your Dockerfile then run a container from that image:
docker build -t angularapp .
docker run -d --name ui -p 8082:80 angularapp
Make sure you include the trailing dot at the end of the docker build command.
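Note that the run command in the update starts the stock nginx image rather than a freshly built one, which is exactly why only the default welcome page appears. After running the built image instead, a quick sanity check (a sketch, assuming the container is named ui as above) is to list the served directory:

```shell
# confirm the Angular build output actually reached nginx's web root
docker exec ui ls /usr/share/nginx/html
```

If the statistic-ui files show up here, the COPY worked and the page should load on port 8082.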

Docker: Container should get access to directory of another container

I need to give one docker container access to a directory of another docker container.
In the first container I am running a nodeJS application and in the tests/e2e folder there are my e2e tests and the configuration for webdriverIO.
Also, I don't need a persistent volume like I've used so far; I just need the test files to be available as long as both containers are running.
$ docker run \
    --name app_stage \
    --volume tests:/app/tests \
    --detach \
    app:stage
This is the Dockerfile to that application
RUN mkdir -p /app
WORKDIR /app
COPY . /app
RUN npm install
RUN npm run build
EXPOSE 3000
ENV NODE_ENV production
CMD next start
In the second container I'm running webdriverIO, which needs the tests and the configuration that the first container stores in app/tests:
$ docker run \
    --rm \
    --volumes-from app_stage \
    webdriverio wdio
But this is not working as I do not see the needed directory in the second container.
First, add a VOLUME instruction to your Dockerfile:
RUN mkdir -p /app
WORKDIR /app
COPY . /app
RUN npm install
RUN npm run build
EXPOSE 3000
ENV NODE_ENV production
VOLUME /app/tests
CMD next start
Use your first command to start app_stage container then start webdriverio container with the second command.
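The volume declared by the VOLUME instruction (or mounted via --volume tests:/app/tests) is what --volumes-from actually shares. A quick way to verify that the directory is visible from a second container before launching the test suite (a sketch using a throwaway alpine container):

```shell
# list the shared directory from another container's point of view;
# the container is removed (--rm) as soon as ls finishes
docker run --rm --volumes-from app_stage alpine ls /app/tests
```

If the test files appear here, the webdriverio container will see them too.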

Running multiple commands after docker create

I want to make a script run a series of commands in a Docker container and then copy a file out. If I use docker run to do this, I don't get back the container ID, which I would need for the docker cp. (I could try and hack it out of docker ps, but that seems risky.)
It seems that I should be able to
Create the container with docker create (which returns the container ID).
Run the commands.
Copy the file out.
But I don't know how to get step 2. to work. docker exec only works on running containers...
If I understood your question correctly, all you need is docker run, exec & cp.
For example -
Create a container with a name (--name) using docker run -
$ docker run --name bang -dit alpine
Run a few commands using exec -
$ docker exec -it bang sh -c "ls -l"
Copy a file using docker cp -
$ docker cp bang:/etc/hosts ./
Stop the container using docker stop -
$ docker stop bang
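To answer the original question directly with docker create (a sketch; the alpine image and file paths are placeholders): docker create returns the container ID without starting anything, docker start brings it up, docker exec then works because the container is running, and docker cp copies the file out at the end:

```shell
# step 1: create returns the container ID; -it keeps sh alive once started
cid=$(docker create -it alpine sh)
docker start "$cid"
# step 2: run the series of commands inside the running container
docker exec "$cid" sh -c "echo hello > /tmp/out.txt"
# step 3: copy the file out, then clean up
docker cp "$cid":/tmp/out.txt ./
docker stop "$cid" && docker rm "$cid"
```

This avoids having to fish the ID out of docker ps, since docker create hands it to you up front.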
All you really need is a Dockerfile; then build the image from it and run the container using the newly built image.
A "standard" Dockerfile might look something like this:
# Download base image ubuntu 16.04
FROM ubuntu:16.04
# Update Ubuntu Software repository
RUN apt-get update
# Install nginx, php-fpm and supervisord from ubuntu repository
RUN apt-get install -y nginx php7.0-fpm supervisor && \
rm -rf /var/lib/apt/lists/*
# Define the ENV variables
ENV nginx_vhost /etc/nginx/sites-available/default
ENV php_conf /etc/php/7.0/fpm/php.ini
ENV nginx_conf /etc/nginx/nginx.conf
ENV supervisor_conf /etc/supervisor/supervisord.conf
# Copy supervisor configuration
COPY supervisord.conf ${supervisor_conf}
# Configure Services and Port
COPY start.sh /start.sh
CMD ["./start.sh"]
EXPOSE 80 443
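The Dockerfile above hands control to start.sh, whose job is to keep the container alive by running supervisord in the foreground, which in turn manages nginx and php-fpm. A minimal sketch of such a script (assuming the default config path used in the ENV variables above):

```shell
#!/bin/bash
# start.sh - exec replaces the shell so supervisord becomes PID 1;
# -n (nodaemon) keeps it in the foreground so the container doesn't exit
exec /usr/bin/supervisord -n -c /etc/supervisor/supervisord.conf
```

Without the -n flag supervisord would daemonize and the container would stop immediately, the same failure mode as in the original question.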
