I want to access my running Docker container from the browser on Windows 10 Pro.
I can't access the container IPs directly; I tried both http://localhost:8000 and http://172.17.0.3:8000.
This is my Dockerfile:
# syntax=docker/dockerfile:1
FROM node:16.15.1
ENV NODE_ENV=production
WORKDIR /app
COPY ["package.json", "package-lock.json*", "./"]
RUN npm install --production
COPY . .
CMD [ "node", "server.js" ]
I start my container from the Docker Desktop interface.
I forwarded traffic from host port 8080 to container port 3000 with this command, and it works now:
docker run -d -p 8080:3000 expressdocker
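Once the port is published, you can verify from the host that something is actually answering on it. A minimal sketch in Python (the URL matches the `-p 8080:3000` mapping above; adjust it if your mapping differs):

```python
# Quick check that a published host port answers over HTTP.
import urllib.error
import urllib.request

def port_answers(url, timeout=3):
    """Return True if an HTTP GET to `url` gets any response at all."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 500
    except urllib.error.HTTPError:
        # Any HTTP status (even 404) means a server is reachable there.
        return True
    except OSError:
        # Connection refused/reset: nothing is listening on that port.
        return False

print(port_answers("http://localhost:8080"))
```

If this prints False while the container is running, the problem is the port mapping rather than the app itself.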
I have two things: a Neo4j database deployed on a GCP Compute Engine instance at bolt://35.241.254.136:7687, and a FastAPI app that needs to access Neo4j.
When I run the API server with uvicorn main:app --reload, everything works correctly; I can reach my remote database.
However, when I run the API with Docker (docker run -d --name mycontainer -p 80:80 myimage), it cannot reach the Neo4j database and I get this error:
ServiceUnavailable( neo4j.exceptions.ServiceUnavailable: Couldn't connect to 35.241.254.136:7687 (resolved to ('35.241.254.136:7687',)):
Maybe something is wrong; I don't know much about Docker.
Here is my Dockerfile:
FROM python:3.9
WORKDIR /code
COPY ./requirements.txt /code/requirements.txt
RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt
COPY ./ /code/app
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "80"]
Make sure your container can reach the Neo4j host (35.241.254.136).
Open a shell inside the container:
docker exec -it mycontainer bash
then ping the host:
ping 35.241.254.136
If you cannot reach the host, you need to put your container on a network that can reach the Neo4j host.
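Note that ping only tests ICMP, which many slim images don't even ship and some networks block, while Bolt runs over plain TCP on port 7687. A TCP-level connect from inside the container is the more conclusive test; a minimal sketch (host and port taken from the question):

```python
# Check raw TCP reachability of the Neo4j Bolt endpoint.
# ping tests ICMP only; Bolt uses TCP port 7687, so a TCP connect
# tells you whether the driver could reach the server at all.
import socket

def tcp_reachable(host, port, timeout=3):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(tcp_reachable("35.241.254.136", 7687))
```

If this returns False inside the container but True on the host, the container's network (or the GCP firewall rules for that instance) is what to look at.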
I'm running a Vite app inside my Docker container that should be reachable on localhost:8080.
My Dockerfile is as below:
FROM node:12.13.1-stretch-slim
WORKDIR /app
COPY . ./
RUN npm install
RUN npm update
RUN npm run build
EXPOSE 8080
CMD ["/bin/bash", "-c", "npm run serve"]
The npm scripts that are relevant here are:
"preview": "vite preview --port 8080",
"serve": "npm-run-all --sequential build preview",
I can build the Docker image, and when I run the command:
docker run -p 8080:8080 image_tag
I get the following in the terminal:
> bcrypt-sandbox@1.0.0 preview /app
> vite preview --port 8080
> Local: http://localhost:8080/
> Network: use `--host` to expose
But when I navigate to localhost:8080, the page fails to load in Chrome.
I feel like this has something to do with my docker run -p 8080:8080 image_tag command, specifically the port forwarding, but I've switched the ports around for a while and have hit a dead end.
One last note: when I run npm run serve locally, I can go to localhost:8080 and things run as expected.
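The `Network: use --host to expose` line in that log is worth taking seriously: by default, vite preview binds only to 127.0.0.1 inside the container, so Docker's forwarded port has nothing to connect to; locally it works because the browser and the server share the same loopback interface. A sketch of an adjusted preview script (based on the scripts shown above; a suggestion, not a confirmed fix from the thread):

```json
{
  "scripts": {
    "preview": "vite preview --host 0.0.0.0 --port 8080",
    "serve": "npm-run-all --sequential build preview"
  }
}
```

With the server listening on 0.0.0.0, the existing docker run -p 8080:8080 image_tag mapping should reach it.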
I have this Dockerfile written:
FROM python:3.6
WORKDIR /usr/src/app
EXPOSE 8080
CMD [ "python3", "-m http.server" ]  # even tried CMD [ "python3", "-m", "http.server" ]
I built the image with this:
docker build -t server .
and I ran a container from the image like this:
docker run -d -p 8080:8080 --name web server
But when I hit <host-url>:8080, it doesn't work.
Can somebody please help me?
You are trying to run Python's built-in HTTP server (http.server), which serves on port 8000 by default.
Either your Dockerfile should expose 8000 instead of 8080 (and you should publish it with -p 8080:8000):
EXPOSE 8000
Or change the command to run it on port 8080:
CMD ["python3", "-m", "http.server", "8080"]
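You can confirm the port behavior without Docker at all. A minimal sketch that starts http.server's request handler on an explicit port and fetches from it (port 0 stands in for 8080 and lets the OS pick a free port):

```python
# http.server serves on 8000 when no port argument is given; passing
# a port (as in CMD [..., "8080"]) overrides it. This sketch binds an
# explicit port and fetches a directory listing from it.
import http.server
import socketserver
import threading
import urllib.request

with socketserver.TCPServer(("127.0.0.1", 0),
                            http.server.SimpleHTTPRequestHandler) as httpd:
    port = httpd.server_address[1]
    threading.Thread(target=httpd.serve_forever, daemon=True).start()
    status = urllib.request.urlopen(f"http://127.0.0.1:{port}/").status
    print(status)  # 200: the server answers on the port we chose
    httpd.shutdown()
```

Whichever port the server actually listens on is the one that must appear on the container side of -p.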
I cannot expose ttyS5 to an Ubuntu container.
I tried:
docker run -t -i --privileged -v /dev/ttyS5:/dev/ttyS5 ubuntu /bin/bash
Inside the Ubuntu container, ttyS5 is a directory, not a device node.
I confirmed ttyS5 is working; I was able to send and receive data through ttyS5 and ttyS6 (COM6).
Does anyone know how to fix this issue?
P.S. My system is Windows 10 + Docker Desktop + the Ubuntu 18.04 app.
You need to add an EXPOSE statement for port 8080 to your Dockerfile.
Here's the reference from Docker: https://docs.docker.com/engine/reference/builder/#expose
Your final Dockerfile should look like this:
FROM adzerk/boot-clj
EXPOSE 8080
WORKDIR /app
COPY . /app
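One caveat: EXPOSE only documents the port in the image metadata; it does not make the port reachable from the host by itself. You still need to publish it at run time, roughly like this (the image tag here is an assumption):

```shell
# Build the image, then publish container port 8080 on host port 8080.
docker build -t boot-app .
docker run -p 8080:8080 boot-app
```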
I need to access a directory from one Docker container in another Docker container.
In the first container I am running a Node.js application, and the tests/e2e folder holds my e2e tests and the configuration for WebdriverIO.
Also, I don't need a persistent volume like I've used so far; I just need the test files available for as long as both containers are running.
$ docker run \
    --name app_stage \
    --volume tests:/app/tests \
    --detach \
    app:stage
This is the Dockerfile for that application:
RUN mkdir -p /app
WORKDIR /app
COPY . /app
RUN npm install
RUN npm run build
EXPOSE 3000
ENV NODE_ENV production
CMD next start
In the second container I'm running WebdriverIO, which needs the tests and the configuration stored in the first container under /app/tests:
$ docker run \
    --rm \
    --volumes-from app_stage \
    webdriverio wdio
But this is not working: I do not see the needed directory in the second container.
First, declare a VOLUME in your Dockerfile:
RUN mkdir -p /app
WORKDIR /app
COPY . /app
RUN npm install
RUN npm run build
EXPOSE 3000
ENV NODE_ENV production
VOLUME /app/tests
CMD next start
Use your first command to start the app_stage container, then start the webdriverio container with the second command.
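For comparison, the same wiring can be sketched with a named volume in docker-compose (service, image, and volume names here are illustrative, not taken from the question):

```yaml
# Illustrative docker-compose sketch: a named volume mounted into
# both containers replaces --volumes-from.
services:
  app_stage:
    image: app:stage
    volumes:
      - tests:/app/tests
  webdriverio:
    image: webdriverio
    command: wdio
    volumes:
      - tests:/app/tests
volumes:
  tests:
```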