I currently have Docker 1.9 installed, and I want to create and work on an nginx instance locally on OS X, then deploy that nginx instance to Ubuntu.
All I can find online are conflicting posts written for earlier versions of Docker.
Can anyone give me a brief overview of how my workflow should be with docker 1.9 to accomplish this?
You can do this by having a simple nginx Dockerfile:
FROM ubuntu:14.04
RUN echo "Europe/London" > /etc/timezone
RUN dpkg-reconfigure -f noninteractive tzdata
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update
RUN apt-get install -y nginx
RUN apt-get install -y supervisor
ADD supervisor.nginx.conf /etc/supervisor/conf.d/nginx.conf
ADD path/to/your/nginx/config /etc/nginx/sites-enabled/default
EXPOSE 80
CMD /usr/bin/supervisord -n
And a simple supervisor.nginx.conf:
[program:nginx]
; keep nginx in the foreground so supervisor can manage it
command=/usr/sbin/nginx -g "daemon off;"
stdout_events_enabled=true
stderr_events_enabled=true
Then building your image:
docker build -t nginx .
Then running your nginx container:
docker run -d -v /path/to/nginx/config:/etc/nginx/sites-enabled/default -p 80:80 nginx
This is assuming that you don't have anything running on port 80 on your host - if you do, you can change 80:80 to something like 8000:80 (the format is hostPort:containerPort).
Using -v to mount your nginx config from your host is useful locally, as it lets you change the config without going into the container or rebuilding the image every time. When you deploy to your server, though, you should run a container that uses the config baked into your image, so it is completely repeatable on another machine.
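For the deploy step, a minimal sketch of that workflow, assuming you push the image through a registry the Ubuntu host can pull from (the youruser Docker Hub account here is hypothetical):
docker tag nginx youruser/nginx   # tag the locally built image
docker push youruser/nginx        # push it to the registry
# then, on the Ubuntu host:
docker pull youruser/nginx
docker run -d -p 80:80 youruser/nginx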
Related
I have created a Dockerfile:
FROM ubuntu:latest
RUN apt-get update && apt-get -y upgrade
RUN apt-get -y install curl
RUN apt-get -y install default-jre
RUN curl -O http://archive.apache.org/dist/activemq/5.16.0/apache-activemq-5.16.0-bin.tar.gz
RUN mkdir -p /opt/apache/activemq
RUN tar xvzf apache-activemq-5.16.0-bin.tar.gz -C /opt/apache/activemq
WORKDIR /opt/apache/activemq/apache-activemq-5.16.0/bin
VOLUME /opt/apache/activemq/apache-activemq-5.16.0/conf
RUN echo './activemq start && tail -f /opt/apache/activemq/apache-activemq-5.16.0/data/activemq.log' > start.sh
# Admin interface
EXPOSE 8161
# Active MQ's default port (Listen port)
EXPOSE 61616
CMD ["/bin/bash", "./start.sh"]
I created a Docker container like this:
docker run --name activemq -p 8161:8161 -p 61616:61616 temp-activemq:5.16.0
I tried to reach the admin console at the following URLs:
http://localhost:8161/admin/
http://<IP of the Container>:8161/admin/
Neither of them works.
Outside of a container I installed ActiveMQ and the admin console worked. Can anyone give me pointers on how I can get this sorted?
I fixed the above issue with
docker run --rm -d --network host --name activemq temp-activemq:5.16.0
But I am still researching why the port forwarding is not working.
I had the same issue. In AMQ 5.16.0 they've updated the jetty.xml for the web UI to use 127.0.0.1 instead of 0.0.0.0!
I fixed it by updating the jetty.xml
Update the host property under "org.apache.activemq.web.WebConsolePort" in jetty.xml from
<property name="host" value="127.0.0.1"/>
to
<property name="host" value="0.0.0.0"/>
You'll need to copy the patched file over the original in your Docker image, and it should work.
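For example, in the Dockerfile above that could be a single extra instruction (a sketch, assuming you keep a patched jetty.xml next to the Dockerfile; place it before the VOLUME line for the conf directory, since build steps that change a volume path after VOLUME is declared may be discarded):
# copy the patched jetty.xml over the one shipped in the tarball
COPY jetty.xml /opt/apache/activemq/apache-activemq-5.16.0/conf/jetty.xml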
I'm new to docker.
I'm setting up an nginx server to serve static files inside a Docker container, and I'd like nginx to start automatically on every startup of the container.
I've tried changing ENTRYPOINT, CMD, and crontab when building the Docker image. But these settings to run nginx on startup work only the first time I "run" a container. When I "stop" the container and "start" it again, nginx does not start automatically inside the container.
I'm looking for a way to start nginx on every startup of a container and my first question is "is it possible to do this?"
My second question is about the container lifecycle. Given that there are not many discussions on this subject (all the discussions I found are about automatically running a script or something else at the moment of "run"), I wonder whether it is more efficient to "run" and "kill" a container each time than to just "stop" and "start" it.
Here are the lines I tried in my Dockerfile (with crontab), which was my first attempt:
RUN apt-get install -y cron
COPY run_server /etc/cron.d/run_server
RUN chmod 0644 /etc/cron.d/run_server
RUN crontab /etc/cron.d/run_server
RUN touch /var/log/cron.log
CMD cron && tail -f /var/log/cron.log
run_server is a simple crontab config file which includes:
@reboot service nginx start
Since this was not the solution I was looking for (it worked only when I "ran" a container, not when I "stopped" and "started" one), I tried with supervisor, too:
RUN apt-get -y install supervisor && \
mkdir -p /var/log/supervisor && \
mkdir -p /etc/supervisor/conf.d
ADD supervisor.conf /etc/supervisor.conf
CMD ["supervisord", "-c", "/etc/supervisor.conf"]
supervisor.conf contains:
[supervisord]
nodaemon=true
[program:run_server]
command=/usr/bin/python3.6 /home/server.py
autostart=true
directory=/home
redirect_stderr=true
But neither of them worked the way I wanted.
My Dockerfile and container (CentOS 8 and nginx), on Linux Mint 19.3 with Docker version 19.03.4:
# howto: Dockerfile
# CentOS 8 and nginx
# docker build -t centose .
# docker run -it -p 80:80 centose
# curl localhost
FROM centos:latest
# MAINTAINER їван
RUN yum -y install nginx
EXPOSE 80
CMD ["/usr/sbin/nginx", "-g", "daemon off;"]
#WORKDIR /usr/sbin/
I would try to update the Dockerfile and enable the nginx service so it is started on the next boot.
Here are a couple of different ways:
RUN systemctl enable nginx
RUN service nginx start
Another way would be to add a bootstrap script that starts the service:
#!/bin/bash
sudo service nginx start
tail -f /var/log/nginx/error.log
Make sure bootstrap.sh is executable, i.e. sudo chmod +x bootstrap.sh.
Then update your docker file:
COPY bootstrap.sh /bin/.
CMD ["bootstrap.sh"]
You can do that using a Docker restart policy.
You can set it when creating the container, or update containers that already exist.
Example: docker run -t -d --restart unless-stopped nginx
To update an existing container: docker update --restart unless-stopped {container ID}
I want to run Django in a simple Docker container.
First I built my image from a Dockerfile. There wasn't anything special in it (only FROM, RUN, and COPY commands).
Then I ran my container with the command:
docker run -tid -p 8000:8000 --name <container_name> <image>
Entered my container:
docker exec -it <container_name> bash
Ran Django server:
python manage.py runserver
Got:
Starting development server at http://127.0.0.1:8000/
But when I go to 127.0.0.1:8000 I see nothing:
The 127.0.0.1 page isn’t working
There is no Nginx or other server involved.
What am I doing wrong?
Update 1 (Dockerfile)
FROM ubuntu:16.04
MAINTAINER Max Malyshev <user>
COPY . /root
WORKDIR /root
RUN apt-get update
RUN apt-get install python-pip -y
RUN apt-get install postgresql -y
RUN apt-get install rabbitmq-server -y
RUN apt-get install libpq-dev python-dev -y
RUN apt-get install npm -y
RUN apt-get install mongodb -y
RUN pip install -r requirements.txt
The problem is that you're binding the development server to 127.0.0.1 inside your Docker container, so it is not reachable from the host OS.
If you open another console into your container and make an HTTP request to 127.0.0.1:8000, it will work.
The key is to make the development server listen on all IPv4 addresses inside the container; you can do this by binding to 0.0.0.0 instead of 127.0.0.1.
Try running the following command to start your Django development server instead:
python manage.py runserver 0.0.0.0:8000
Also, for further inspiration, you can check out this working Dockerfile for hosting a Django application with the built-in development server: https://github.com/Niklas9/django-unixdatetimefield/blob/master/Dockerfile.
You need to expose port 8000 in your Dockerfile and run a WSGI server like gunicorn. If you follow the steps here you should be good... https://semaphoreci.com/community/tutorials/dockerizing-a-python-django-web-application
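A minimal sketch of that approach (the myproject.wsgi module name is hypothetical, and Django plus gunicorn are assumed to be listed in requirements.txt):
FROM python:3
WORKDIR /app
COPY . /app
RUN pip install -r requirements.txt
# make the port reachable and bind gunicorn to all interfaces
EXPOSE 8000
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "myproject.wsgi"]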
I agree with Niklas9's comments. If I could suggest an enhancement, try:
python manage.py runserver [::]:8000
The difference is that [::] supports IPv6 addresses as well.
I also noticed some packages for MongoDB. If you want to test and develop locally, you can create Docker containers and use Docker Compose to test your app on your machine before deploying to a dev/stage/prod environment.
You can find out more about how to set up a Django app linked to a database backend in Docker in this tutorial: http://programmathics.com/programming/docker/docker-compose-for-django/ (Disclaimer: I am the creator of that website.)
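A minimal docker-compose.yml sketch for that kind of setup (service names are hypothetical; web builds from your existing Dockerfile):
version: "3"
services:
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - "8000:8000"
    depends_on:
      - mongo
  mongo:
    image: mongo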
I want to make a script run a series of commands in a Docker container and then copy a file out. If I use docker run to do this, I don't get back the container ID, which I would need for docker cp. (I could try to hack it out of docker ps, but that seems risky.)
It seems that I should be able to
Create the container with docker create (which returns the container ID).
Run the commands.
Copy the file out.
But I don't know how to get step 2 to work, since docker exec only works on running containers...
If I understood your question correctly, all you need is docker run, exec, and cp.
For example:
Create a container with a name (--name) using docker run:
$ docker run --name bang -dit alpine
Run a few commands using exec:
$ docker exec -it bang sh -c "ls -l"
Copy a file out using docker cp:
$ docker cp bang:/etc/hosts ./
Stop the container using docker stop:
$ docker stop bang
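If you want the same flow scripted without a fixed name, docker create prints the container ID, so you can capture it in a variable (a sketch; the alpine image, sleep command, and file path are placeholders):
CID=$(docker create alpine sleep 300)   # create the container, keep its ID
docker start "$CID"                     # docker exec needs it running
docker exec "$CID" sh -c "echo hello > /tmp/out.txt"
docker cp "$CID":/tmp/out.txt ./
docker stop "$CID"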
All you really need is a Dockerfile; build the image from it and run the container using the newly built image. For more information you can refer to this.
A "standard" content of a dockerfile might be something like below:
#Download base image ubuntu 16.04
FROM ubuntu:16.04
# Update Ubuntu Software repository
RUN apt-get update
# Install nginx, php-fpm and supervisord from ubuntu repository
RUN apt-get install -y nginx php7.0-fpm supervisor && \
rm -rf /var/lib/apt/lists/*
#Define the ENV variable
ENV nginx_vhost /etc/nginx/sites-available/default
ENV php_conf /etc/php/7.0/fpm/php.ini
ENV nginx_conf /etc/nginx/nginx.conf
ENV supervisor_conf /etc/supervisor/supervisord.conf
#Copy supervisor configuration
COPY supervisord.conf ${supervisor_conf}
# Configure Services and Port
COPY start.sh /start.sh
RUN chmod +x /start.sh
CMD ["/start.sh"]
EXPOSE 80 443
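The start.sh itself is not shown; a minimal sketch, assuming the copied supervisord.conf is set up to manage nginx and php-fpm:
#!/bin/bash
# run supervisord in the foreground so the container keeps running
exec /usr/bin/supervisord -n -c /etc/supervisor/supervisord.conf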
I'm building an image for GitHub's Linkurious project, based on an image already on the Hub for the neo4j database. The neo4j image automatically runs the server on port 7474 and my image runs on port 8000.
When I run my image I publish both ports (could I do this with EXPOSE?):
docker run -d --publish=7474:7474 --publish=8000:8000 linkurious
But only my server seems to run. If I hit http://[ip]:7474/ I get nothing. Is there something special I have to do to make sure they both run?
* Edit I *
here's my Dockerfile:
FROM neo4j/neo4j:latest
RUN apt-get -y update
RUN apt-get install -y git
RUN apt-get install -y npm
RUN apt-get install -y nodejs-legacy
RUN git clone git://github.com/Linkurious/linkurious.js.git
RUN cd linkurious.js && npm install && npm run build
CMD cd linkurious.js && npm start
* Edit II *
To perhaps help explain my quandary, I've asked a different question.
EXPOSE is there to allow inter-container communication (within the same Docker daemon), with the docker run --link option.
Port mapping is there to map EXPOSEd ports to the host, to allow client-to-container communication. So you need --publish.
See also "Difference between “expose” and “publish” in docker".
See also an example with "Advanced Usecase with Docker: Connecting Containers"
Make sure, though, that the IP is the right one ($(docker-machine ip default)).
If you are using a VM (meaning, you are not using docker directly on a Linux host, but on a Linux VM with VirtualBox), make sure the mapped ports 7474 and 8000 are port forwarded from the host to the VM.
VBoxManage controlvm boot2docker-vm natpf1 "name,tcp,,7474,,7474"
VBoxManage controlvm boot2docker-vm natpf1 "name,tcp,,8000,,8000"
In the OP's case, this is using neo4j: see "Neo4j with Docker", based on the neo4j/neo4j image and its Dockerfile:
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["neo4j"]
It is not meant to be used for installing another service (like nodejs): the CMD cd linkurious.js && npm start would completely override the neo4j base image's CMD, meaning neo4j would never start.
It is meant to be run on its own:
# interactive with terminal
docker run -i -t --rm --name neo4j -v $HOME/neo4j-data:/data -p 8474:7474 neo4j/neo4j
# as daemon running in the background
docker run -d --name neo4j -v $HOME/neo4j-data:/data -p 8474:7474 neo4j/neo4j
It can then be used by another container, with a --link neo4j:neo4j directive.
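A sketch of that last step, assuming the neo4j container above is already running and a separate linkurious image contains only the node app:
docker run -d --name linkurious --link neo4j:neo4j -p 8000:8000 linkurious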