I want to run Django in a simple Docker container.
First I built my image from a Dockerfile. There wasn't anything special in it (only FROM, RUN, and COPY commands).
Then I ran a container from it with the command:
docker run -tid -p 8000:8000 --name <container_name> <image>
Entered my container:
docker exec -it <container_name> bash
Ran Django server:
python manage.py runserver
Got:
Starting development server at http://127.0.0.1:8000/
But when I go to 127.0.0.1:8000 I see nothing:
The 127.0.0.1 page isn’t working
There is no Nginx or any other server running.
What am I doing wrong?
Update 1 (Dockerfile)
FROM ubuntu:16.04
MAINTAINER Max Malyshev <user>
COPY . /root
WORKDIR /root
RUN apt-get update && apt-get install -y \
    python-pip \
    postgresql \
    rabbitmq-server \
    libpq-dev python-dev \
    npm \
    mongodb
RUN pip install -r requirements.txt
The problem is that the development server is bound to 127.0.0.1 inside your Docker container, so it is not reachable from the host OS.
If you open another console into your container and make an HTTP request to 127.0.0.1:8000, it will work.
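For example, you can verify this from a second shell on the host (assuming curl is available inside the image):
docker exec -it <container_name> curl http://127.0.0.1:8000/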
The key is to make the development server listen on all IPv4 addresses inside the container; you can do this by binding it to 0.0.0.0 instead of 127.0.0.1.
Try running the following command to start your Django development server instead:
python manage.py runserver 0.0.0.0:8000
Also, for further inspiration, you can check out this working Dockerfile for hosting a Django application with the built-in development server https://github.com/Niklas9/django-unixdatetimefield/blob/master/Dockerfile.
You need to expose port 8000 in your Dockerfile and run a WSGI server such as gunicorn. If you follow the steps in this tutorial you should be good: https://semaphoreci.com/community/tutorials/dockerizing-a-python-django-web-application
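For instance, a minimal sketch of such a Dockerfile might look like this (the python:2.7 base, the gunicorn dependency, and the mysite.wsgi module are assumptions for illustration, not taken from the question):

FROM python:2.7
COPY . /app
WORKDIR /app
# install the app's requirements plus gunicorn as the WSGI server
RUN pip install -r requirements.txt gunicorn
EXPOSE 8000
# bind to 0.0.0.0 so the server is reachable from outside the container
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "mysite.wsgi"]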
I agree with Niklas9's comments. If I could suggest an enhancement, try:
python manage.py runserver [::]:8000
The difference is that [::] supports IPv6 addresses as well.
I also noticed some packages for MongoDB. If you want to test and develop locally, you can create Docker containers for those services and use Docker Compose to run your app on your machine before deploying to a dev/stage/prod environment.
You can find out more about how to set up a Django app linked to a database backend in docker on this tutorial http://programmathics.com/programming/docker/docker-compose-for-django/ (Disclaimer: I am the creator of that website)
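As a rough sketch (service names and images are illustrative, not taken from the question), a docker-compose.yml for a Django app with a MongoDB backend could look like:

version: '2'
services:
  web:
    build: .
    # bind to 0.0.0.0 so the mapped port works from the host
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - "8000:8000"
    depends_on:
      - mongo
  mongo:
    image: mongo

On Compose's default network, the web service can then reach MongoDB at the hostname mongo.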
Related
I have a 'mcr.microsoft.com/dotnet/aspnet:5.0-buster-slim' Docker container, with 2 things on it:
- a custom SDK running on it
- a dotnet 5 application that connects to the SDK over TCP
If I connect to the container's bash, I can use telnet localhost 54321 to connect to my SDK successfully.
If I run the Windows SDK version on my development computer (Windows), and run my application under IIS Express instead of Docker, I can successfully connect with a telnet library (host 'localhost', port '54321'); this works.
However, I want to run both the SDK and my dotnet application in a Docker container, and when I try to connect from inside the container (the same thing that works on the IIS hosted version), this does not work. By running 'telnet localhost 54321' from the docker container commandline I can confirm that the SDK is running. What am I doing wrong?
Dockerfile:
FROM mcr.microsoft.com/dotnet/aspnet:5.0-buster-slim AS base
RUN apt-get update && apt-get install -y \
    telnet \
    libssl1.1 \
    libpulse0 \
    libasound2 \
    libicu63 \
    libpcre2-16-0 \
    libdouble-conversion1 \
    libglib2.0-0
RUN mkdir /sdk
COPY ["Server/Sdk/SomeSDK*", "sdk/"]
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /src
COPY ["Server/MyProject.Server.csproj", "Server/"]
COPY ["Shared/MyProject.Shared.csproj", "Shared/"]
COPY ["Client/MyProject.Client.csproj", "Client/"]
RUN dotnet restore "Server/MyProject.Server.csproj"
COPY . .
WORKDIR "/src/Server"
RUN dotnet build "MyProject.Server.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "MyProject.Server.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "MyProject.Server.dll"]
Code (working under IIS when the SDK is running on Windows, but not when both the SDK and the code are running inside the same Docker container):
// connect to the SDK's TCP port ('TelnetClient' comes from the telnet library used in the project)
var telnetClient = new TelnetClient("localhost", 54321, TimeSpan.FromSeconds(1), new CancellationToken());
await telnetClient.Connect();
// give the connection time to establish before sending
Thread.Sleep(2000);
await telnetClient.Send("init");
Command line (working BOTH from the Windows CLI and from the Docker bash, so even when the code above fails, this works):
$telnet localhost 54321
$init
The issue might be related to this (but I'm not sure): when I run 'telnet localhost 54321' from within the dotnet application, I get: 'telnet: connection refused by remote host'.
Make sure your docker run command also uses --network=host to share the host's network if you want to reach out of the container, or create a bridge with --network=bridge (for example, to communicate with another container).
By default a Docker container is spawned on a separate, dedicated, private subnet on your machine (usually within 172.17.0.0/16), which is different from your machine's loopback subnet (127.0.0.0/8).
To connect into the host's subnet (in this case 127.0.0.0/8) you need --network=host. For communication within the same container, though, none of this is necessary; it works out of the box.
For accessing the service in the container from the outside, you need to make sure your application's port is published, either with --publish HOST_PORT:DOCKER_PORT or --publish-all (random host port(s) are then assigned; check with docker ps).
Host to container (normal)
# host
telnet <container's IP> 8000 # connects, typing + return shows in netcat stdout
# container
docker run --rm -it --publish 8000:8000 alpine nc -v -l -p 8000
Container to host (normal)
# host
nc -v -l -p 8000
# container, docker run -it alpine
apk add busybox-extras
telnet localhost 8000 # hangs, is dead
Container to host (on host network)
# host
nc -v -l -p 8000
# container, docker run -it --network=host alpine
apk add busybox-extras
telnet localhost 8000 # connects, typing + return shows in netcat stdout
Within container
# start container
docker run -it --name alpine alpine
apk add busybox-extras
# exec into container to create service on 4000
docker exec -it alpine nc -v -l -p 4000
# exec into container to create service on 5000
docker exec -it alpine nc -v -l -p 5000
# telneting from the original terminal (the one with apk add)
telnet localhost 4000 # connects, works
telnet localhost 5000 # connects, works
I am deploying an Angular app inside an Ubuntu container which is hosted on Windows 10. Here is my Dockerfile:
FROM ubuntu:latest
COPY app /app
RUN apt-get update
RUN apt-get install -y curl software-properties-common python-pip
RUN curl -sL https://deb.nodesource.com/setup_8.x | bash -
RUN apt-get install -y nodejs
RUN npm install -g @angular/cli
RUN pip install -r /app/requirements.txt
WORKDIR /app/ng
RUN npm install
RUN npm rebuild node-sass
EXPOSE 4200
WORKDIR /
RUN touch start.sh
RUN echo "python /app/server.py &" >> start.sh
RUN echo "cd /app/ng" >> start.sh
RUN echo "ng serve" >> start.sh
RUN chmod +x start.sh
ENTRYPOINT ["/bin/bash"]
CMD ["start.sh"]
I run the image using
docker run -p 4200:4200 --name=test app
Now, the problem is that I am running this container on Windows 10, and I am not very familiar with Docker's networking on Windows. If I were running Linux, I could easily access the app through any browser by visiting http://localhost:4200, but that does not seem to be the case on Windows 10. When I try to access my app through Chrome, I get:
This site can’t be reached
localhost refused to connect.
ERR_CONNECTION_REFUSED
I searched and found a similar issue on the Docker forums. Taking the suggestions there, I tried to access the container through my IPv4 address, but failed. I also tried the Docker NAT IP 10.0.75.1, with no results. I got hold of the container IP through docker inspect test and used the container IP 172.17.0.2, but that didn't work either.
Output of curl from host:
E:\app>curl localhost:4200
curl: (7) Failed to connect to localhost port 4200: Connection refused
E:\app>curl 0.0.0.0:4200
curl: (7) Failed to connect to 0.0.0.0 port 4200: Address not available
If I curl inside the container, it works as expected
root@97cd2c1e6784:/# curl localhost:4200
<!doctype html>
<html lang="en">
<body>
<p>Test app</p>
</body>
</html>
How to access my angular app from windows host browser? If you want more information please ask for it in comments.
After searching more, I got an answer here. I just needed to start the Angular dev server with ng serve --host 0.0.0.0 instead of just ng serve, so that the application listens on all network interfaces instead of just the loopback interface.
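In terms of the Dockerfile above, only the line that writes the serve command into start.sh needs to change:

RUN echo "ng serve --host 0.0.0.0" >> start.sh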
SOLVED: I just needed to restart the machine :).
I had a similar issue (but I am not using Docker). When I run ng serve --host 0.0.0.0 I can curl it inside the Ubuntu WSL2 instance and it works fine, but I cannot reach it from the browser (on Windows 10 build 19043); it always returns "ERR_CONNECTION_REFUSED". Any ideas?
I am trying to make a simple Docker container that runs a Rails app from the directory I launch it in.
Everything appears to be fine, except that when I run the container and try to access it from my Windows host at the IP address Docker Machine gives me, it responds with a connection refused error.
I even used the Nginx Dockerfile as a reference, because that one actually builds a container that is accessible for me.
Here is my Dockerfile so far:
FROM ruby:2.3.1
RUN gem install rails && \
apt-get update -y && \
apt-get install -y nodejs
VOLUME ["/web_app"]
ADD . /web_app
WORKDIR /web_app
RUN bundle install
CMD rails s -p 80
EXPOSE 80
I build the image using this command
docker build -t rails_server .
I then run it using this command
docker run -d -p 80:80 rails_server
And here is how I try to access the webpage:
curl $(docker-machine ip)
And this is what I get back:
curl: (7) Failed to connect to 192.168.99.100 port 80: Connection refused
The problem here seems to be that the app is listening on 127.0.0.1:80, so the service will not accept connections from outside the container. Could you check whether making the Rails server listen on 0.0.0.0 solves the issue?
You can do that using the -b flag of rails s:
FROM ruby:2.3.1
RUN gem install rails && \
apt-get update -y && \
apt-get install -y nodejs
VOLUME ["/web_app"]
ADD . /web_app
WORKDIR /web_app
RUN bundle install
CMD rails s -b 0.0.0.0 -p 80
EXPOSE 80
The port is only exposed to the VM running Docker inside it. You still have to expose port 80 of your VM to your local machine so it can connect. I think the best approach is to make your container listen on an alternative port like 7070 and then use a simple nginx proxy_pass to serve the content to the outside on port 80, as sketched below.
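A minimal nginx site config for that proxying could look like this (a sketch; it assumes nginx runs on the VM and the container's port is published there as 7070):

server {
    listen 80;
    location / {
        # forward all traffic to the Rails container published on port 7070
        proxy_pass http://127.0.0.1:7070;
        proxy_set_header Host $host;
    }
}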
I have currently installed Docker 1.9 and I want to create and work on an nginx instance locally on OS X, then deploy that nginx instance to Ubuntu.
All I can find online are conflicting posts written for earlier versions of Docker.
Can anyone give me a brief overview of how my workflow should be with docker 1.9 to accomplish this?
You can do this by having a simple nginx Dockerfile:
FROM ubuntu:14.04
RUN echo "Europe/London" > /etc/timezone
RUN dpkg-reconfigure -f noninteractive tzdata
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update
RUN apt-get install -y nginx
RUN apt-get install -y supervisor
ADD supervisor.nginx.conf /etc/supervisor.d/nginx.conf
ADD path/to/your/nginx/config /etc/nginx/sites-enabled/default
EXPOSE 80
CMD /usr/bin/supervisord -n
And a simple supervisor.nginx.conf:
[program:nginx]
command=/usr/sbin/nginx
stdout_events_enabled=true
stderr_events_enabled=true
Then building your image:
docker build -t nginx .
Then running your nginx container:
docker run -d -v /path/to/nginx/config:/etc/nginx/sites-enabled/default -p 80:80 nginx
This is assuming that you don't have anything running on port 80 on your host; if you do, you can change 80:80 to something like 8000:80 (in the format hostPort:containerPort).
Using -v to mount your nginx config from the host is useful locally, as it lets you change the config without going into the container or rebuilding it every time. When you deploy to your server, though, you should run a container that uses the config baked into the image, so it's completely repeatable on another machine.
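Since the ADD line in the Dockerfile above already copies the config into the image at build time, the deployed container can simply be run without the volume mount:

docker run -d -p 80:80 nginx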
I'm building an image for GitHub's Linkurious project, based on an image already on Docker Hub for the Neo4j database. The Neo4j image automatically runs the server on port 7474, and my image runs on port 8000.
When I run my image I publish both ports (could I do this with EXPOSE?):
docker run -d --publish=7474:7474 --publish=8000:8000 linkurious
But only my server seems to run. If I hit http://[ip]:7474/ I get nothing. Is there something special I have to do to make sure they both run?
* Edit I *
here's my Dockerfile:
FROM neo4j/neo4j:latest
RUN apt-get update -y && apt-get install -y git npm nodejs-legacy
RUN git clone https://github.com/Linkurious/linkurious.js.git
RUN cd linkurious.js && npm install && npm run build
CMD cd linkurious.js && npm start
* Edit II *
To perhaps help explain my quandary, I've asked a different question.
EXPOSE is there to allow inter-container communication (within the same Docker daemon), combined with the docker run --link option.
Port mapping is there to map EXPOSEd ports to the host, to allow client-to-container communication. So you need --publish.
See also "Difference between “expose” and “publish” in docker".
See also an example with "Advanced Usecase with Docker: Connecting Containers"
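To illustrate the difference with this question's ports (the linkurious image name comes from the question; the container name is illustrative):

# EXPOSEd only: 7474 and 8000 are reachable from linked containers, not from the host
docker run -d --name linkurious_internal linkurious
# published: both ports are reachable from the host as well
docker run -d --publish=7474:7474 --publish=8000:8000 linkurious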
Make sure, though, that the IP is the right one ($(docker-machine ip default)).
If you are using a VM (meaning you are not using Docker directly on a Linux host, but through a Linux VM in VirtualBox), make sure the mapped ports 7474 and 8000 are port-forwarded from the host to the VM:
VBoxManage controlvm boot2docker-vm natpf1 "name,tcp,,7474,,7474"
VBoxManage controlvm boot2docker-vm natpf1 "name,tcp,,8000,,8000"
In the OP's case, this is using Neo4j: see "Neo4j with Docker", based on the neo4j/neo4j image and its Dockerfile:
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["neo4j"]
It is not meant to be used for installing another service (like nodejs), where the CMD cd linkurious.js && npm start would completely override the neo4j base image CMD (meaning neo4j would never start).
It is meant to be run on its own:
# interactive with terminal
docker run -i -t --rm --name neo4j -v $HOME/neo4j-data:/data -p 8474:7474 neo4j/neo4j
# as daemon running in the background
docker run -d --name neo4j -v $HOME/neo4j-data:/data -p 8474:7474 neo4j/neo4j
And then used by another image, with a --link neo4j:neo4j directive.
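For example (a sketch; the linkurious image name comes from the question, and it assumes the Linkurious app is built on its own base image rather than on top of neo4j/neo4j, so the database container is reachable at the hostname neo4j inside the app container):

# start the database on its own
docker run -d --name neo4j -v $HOME/neo4j-data:/data -p 7474:7474 neo4j/neo4j
# run the Linkurious app in a separate container, linked to it
docker run -d --name linkurious --link neo4j:neo4j -p 8000:8000 linkurious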