Unable to access site in docker-compose - docker

I've been using: docker build -t devstack .
docker run --rm -p 443:443 -it -v ~/code:/code devstack
That has been working fine for me so far. I've been able to access the site as expected through my browser. I set my hosts file to point devstack.com to 127.0.0.1 and the site loads nicely. Now I'm trying to use docker-compose so I can use some of the functionality there to more easily connect with AWS.
services:
  web:
    build:
      context: .
    network_mode: "bridge"
    ports:
      - "443"
      - "80"
    volumes:
      - ~/code:/code
    image: devstack:latest
So I run docker-compose build, which gives me the familiar build output from the Dockerfile.
Then I run docker-compose run web, which puts me into the container, where I start apache (doing it manually at the moment), hit top to verify it's running, then tail the log files. But when I attempt to hit the site in my browser, I get devstack.com refused to connect, and there is nothing in the apache log files, so the request isn't even reaching apache. So something about the ports isn't opening up to me. Any idea what I need to change to make this work?
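A quick way to check whether any host ports actually got published for the container (standard Docker CLI, shown only as a sketch):
# Shows the compose project's containers, including published ports
docker-compose ps
# Shows all running containers and their port mappings
docker ps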
Edit: Updated file. Still same problem:
version: "3"
services:
web:
build:
context: .
# Same issue with both of these:
# network_mode: "bridge"
# network_mode: "host"
ports:
- "443:443"
- "80:80"
volumes:
- ~/code:/code
tty: true

This is what I did to get it working. I used the example project from the docker-compose documentation, which runs a test app on port 5000. That worked, so I knew it could be done.
I updated my docker-compose.yml to be very similar to the one in the test project. So it looks like this now:
version: "3"
services:
web:
build: .
ports:
- "443:443"
- "80:80"
volumes:
- ~/code:/code
Then I created an entry.sh file which will start apache, and added this to my Dockerfile:
# copy the entry file which will start apache
COPY entry.sh entry.sh
RUN chmod +x entry.sh
# start apache
CMD ./entry.sh; tail -f /var/log/apache2/*.log
So now when I do docker-compose up, it starts apache and tails the apache log files, so I immediately see the apache logs in the terminal, and then I'm able to access the site. Basically the problem was just the container exiting. This was the only way I could find to keep it from exiting without setting tty: true in the docker-compose file, which kept it running but didn't publish the ports.
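For reference, a minimal entry.sh for this setup might look like the following (a sketch assuming Apache 2 on a Debian-based image; the exact service name depends on what the Dockerfile installs):
#!/bin/bash
# Start Apache in the background; the Dockerfile's CMD then tails the log
# files to keep the container in the foreground.
service apache2 start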

Related

How do I run a website in bitnami+docker+nginx

I'm taking over a website https://www.funfun.io. Unfortunately, I cannot reach the previous developer anymore.
This is an AngularJS+Node+Express+MongoDB application. He decided to use bitnami+docker+nginx on the server. Here is docker-compose.yml:
version: "3"
services:
funfun-node:
image: funfun
restart: always
build: .
environment:
- MONGODB_URI=mongodb://mongodb:27017/news
env_file:
- ./.env
depends_on:
- mongodb
funfun-nginx:
image: funfun-nginx
restart: always
build:
context: .
dockerfile: Dockerfile.nginx
ports:
- "3000:8443"
depends_on:
- funfun-node
mongodb:
image: mongo:3.4
restart: always
volumes:
- "10studio-mongo:/data/db"
ports:
- "27018:27017"
networks:
default:
external:
name: 10studio
volumes:
10studio-mongo:
driver: local
Dockerfile.nginx:
FROM bitnami/nginx:1.16
COPY ./funfun.io /opt/bitnami/nginx/conf/server_blocks/default.conf
COPY ./ssl/MyCompanyLocalhost.cer /opt/MyCompanyLocalhost.cer
COPY ./ssl/MyCompanyLocalhost.pvk /opt/MyCompanyLocalhost.pvk
Dockerfile:
FROM node:12
RUN npm install -g yarn nrm --registry=https://registry.npm.taobao.org && nrm use cnpm
COPY ./package.json /opt/funfun/package.json
WORKDIR /opt/funfun
RUN yarn
COPY ./ /opt/funfun/
CMD yarn start
On my local machine, I can use npm start to test the website in a web browser.
I have access to the Ubuntu server, but I'm new to bitnami+docker+nginx, so I have the following questions:
On the command line of the Ubuntu server, how can I check whether the service is running (besides opening the website in a browser)?
How can I shut down and restart the service?
Previously, without docker, we could start mongodb with sudo systemctl enable mongod. Now, with docker, how can we start mongodb?
First of all, to deploy the services mentioned in the compose file locally, you should run one of the commands below:
docker-compose up
docker-compose up -d # in the background
After running the above command, the containers will be created and available on your machine.
To list the running containers
docker ps
docker-compose ps
To stop containers
docker stop ${container name}
docker-compose stop
mongodb is part of the docker-compose file and will be running once you start the other services. Because of restart: always, it will also be restarted automatically if it crashes or you restart your machine.
One final note: since you are using an external network, you may need to create that network before starting the services.
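For example, using the network name from the compose file above:
# Create the external network once, before docker-compose up
docker network create 10studio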
1.
docker-compose ps will give you the state of your containers
2.
docker-compose stop will stop your containers and keep their state, so you can start them again with docker-compose up
docker-compose kill will force-stop your containers; docker-compose down will remove them
docker-compose restart will restart your containers
3.
Because your mongodb service is declared with the official mongo image, its container starts when you do docker-compose up without any other intervention.
Or you can add command: mongod --auth directly into your docker-compose.yml
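A sketch based on the mongodb service from the compose file above, with only the command line added:
mongodb:
  image: mongo:3.4
  restart: always
  command: mongod --auth
  volumes:
    - "10studio-mongo:/data/db"
  ports:
    - "27018:27017"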
The official Docker documentation is very detailed and helps a lot with all of this; keep referring to it: https://docs.docker.com/compose/

Running shell script against Localstack in docker container

I've been developing a service locally against localstack. I've just been running their docker image via docker run --rm -p 4567-4583:4567-4583 -p 8080:8080 localstack/localstack
And then I manually run a small script to set up my S3 buckets, SQS queues, etc.
Now, I'd like to make this easier for others, so I thought I'd just add a Dockerfile and a docker-compose.yml file. Unfortunately, when I try to get this up and running with docker-compose up, I get an error saying the command from my setup script can't connect to the localstack services.
make_bucket failed: s3://localbucket Could not connect to the endpoint URL: "http://localhost:4572/localbucket"
Dockerfile:
FROM localstack/localstack
# Since this is just local dev setup, localstack doesn't require anything specific here.
ENV AWS_DEFAULT_REGION='[useast1]'
ENV AWS_ACCESS_KEY_ID='[lloyd]'
ENV AWS_SECRET_ACCESS_KEY='[christmas]'
COPY bin/localSetup.sh /localSetup.sh
COPY fixtures/notifications.json /notifications.json
RUN ["chmod", "+x", "/localSetup.sh"]
RUN pip install awscli
# expose service & web dashboard ports
EXPOSE 4567-4582 8080
ENTRYPOINT ["/localSetup.sh"]
docker-compose.yml
version: '3'
services:
  localstack:
    build: .
    ports:
      - "8080:8080"
      - "4567-4582:4567-4582"
localSetup.sh
#!/bin/bash
aws --endpoint-url=http://localhost:4572 s3 mb s3://localbucket
#additional similar calls but left off for brevity
I've tried switching localhost to 127.0.0.1 in my script commands, but I wind up with the same error. I'm probably missing something silly here.
There is another way to create your custom AWS resources when localstack freshly starts up. Since you already have a bash script for your resources, you can simply volume-mount your script into localstack's init directory (/docker-entrypoint-initaws.d/ on older localstack versions, /etc/localstack/init/ready.d/ on current ones, as in the example below).
So my docker-compose file would be:
localstack:
  image: localstack/localstack:latest
  container_name: localstack_aws
  ports:
    - '4566:4566'
  volumes:
    - './localSetup.sh:/etc/localstack/init/ready.d/init-aws.sh'
Also, I would prefer awslocal over aws --endpoint-url in the bash script, as it takes care of the endpoint and credentials for you.
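For example, the setup script rewritten with awslocal (a sketch; awslocal is installed with pip install awscli-local):
#!/bin/bash
# awslocal wraps the aws CLI and points it at the local LocalStack endpoint
awslocal s3 mb s3://localbucket
# additional similar calls left off for brevity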
Try adding a hostname to the docker-compose file and editing your entrypoint script to use that hostname.
docker-compose.yml
version: '3'
services:
  localstack:
    build: .
    hostname: localstack
    ports:
      - "8080:8080"
      - "4567-4582:4567-4582"
localSetup.sh
#!/bin/bash
aws --endpoint-url=http://localstack:4572 s3 mb s3://localbucket
This is the docker-compose-dev.yaml I used for testing an app against localstack. I ran it with docker-compose -f docker-compose-dev.yaml up, and I used the same localSetup.sh you have.
version: '3'
services:
  localstack:
    image: localstack/localstack
    hostname: localstack
    ports:
      - "4567-4584:4567-4584"
      - "${PORT_WEB_UI-8082}:${PORT_WEB_UI-8082}"
    environment:
      - SERVICES=s3
      - DEBUG=1
      - DATA_DIR=${DATA_DIR- }
      - PORT_WEB_UI=${PORT_WEB_UI- }
      - DOCKER_HOST=unix:///var/run/docker.sock
    volumes:
      - "${TMPDIR:-/tmp/localstack}:/tmp/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
    networks:
      - backend
  sample-app:
    image: "sample-app/sample-app:latest"
    networks:
      - backend
    links:
      - localstack
    depends_on:
      - "localstack"
networks:
  backend:
    driver: 'bridge'

Go application fails and exits when run with docker-compose, but works fine with the docker run command

I am running all of these operations on a remote server that is a VM running Ubuntu 16.04.5 x64.
My Go project's Dockerfile looks like:
FROM golang:latest
ADD . $GOPATH/src/example.com/myapp
WORKDIR $GOPATH/src/example.com/myapp
RUN go build
#EXPOSE 80
#ENTRYPOINT $GOPATH/src/example.com/myapp/myapp
ENTRYPOINT ./myapp
#CMD ["./myapp"]
When I run the docker container using docker-compose up -d, the Go application exits and I see this in the docker logs:
myapp_1 | /bin/sh: 1: ./myapp: Exec format error
docker_myapp_1 exited with code 2
If I locate the image using docker images and run the image like:
docker run -it 75d4a95ef5ec
I can see that my golang application runs just fine:
viper environment is: development
HTTP server listening on address: ":3005"
When I googled this error, some people suggested compiling with some special flags, but I am running this container on the same Ubuntu host, so I am really confused about why this isn't working with docker.
My docker-compose.yml looks like:
version: "3"
services:
openresty:
build: ./openresty
ports:
- "80:80"
- "443:443"
depends_on:
- myapp
env_file:
- '.env'
restart: always
myapp:
build: ../myapp
volumes:
- /home/deploy/apps/myapp:/go/src/example.com/myapp
ports:
- "3005:3005"
depends_on:
- db
- redis
- memcached
env_file:
- '.env'
redis:
image: redis:alpine
ports:
- "6379:6379"
volumes:
- "/home/deploy/v/redis:/data"
restart: always
memcached:
image: memcached
ports:
- "11211:11211"
restart: always
db:
image: postgres:9.4
volumes:
- "/home/deploy/v/pgdata:/var/lib/postgresql/data"
restart: always
Your docker-compose.yml file says:
volumes:
  - /home/deploy/apps/myapp:/go/src/example.com/myapp
which means your host system's source directory is mounted over, and hides, everything that the Dockerfile builds. ./myapp is then the host's copy of the myapp executable, and if that binary was built for a different platform (say, on a macOS or Windows host), it will produce exactly this Exec format error.
This is a popular setup for interpreted languages where developers want to run their application without going through a normal test-build-deploy sequence, but it doesn't really make sense for a compiled language like Go, where the build step isn't optional. I'd delete this block entirely.
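A sketch of the myapp service from the compose file above with that bind mount removed (everything else unchanged):
myapp:
  build: ../myapp
  ports:
    - "3005:3005"
  depends_on:
    - db
    - redis
    - memcached
  env_file:
    - '.env'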
The Go container stops running because of this:
WORKDIR $GOPATH/src/example.com/myapp
RUN go build
#EXPOSE 80
#ENTRYPOINT $GOPATH/src/example.com/myapp/myapp
ENTRYPOINT ./myapp
You are switching directories to $GOPATH/src/example.com/myapp, where you build your app; however, your entrypoint is pointing to the wrong location.
To solve this, either copy the app into the root directory and keep the same ENTRYPOINT command, or copy the application to a different location and pass the full path, such as:
ENTRYPOINT /my/go/app/location
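For example, a sketch of the question's Dockerfile with the binary written to a fixed path (the output path here is only an illustration, not the answer's exact layout):
FROM golang:latest
ADD . $GOPATH/src/example.com/myapp
WORKDIR $GOPATH/src/example.com/myapp
# Build to a fixed location that doesn't depend on GOPATH or any bind mount
RUN go build -o /usr/local/bin/myapp
ENTRYPOINT ["/usr/local/bin/myapp"]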

Turning down a web server from a running container via bash

I have the following docker-compose.yml to start a webserver with PHP.
version: "2.0"
services:
nginx:
image: nginx
ports:
- "8000:80"
volumes:
- ./web:/web
- ./nginx/default.conf:/etc/nginx/conf.d/default.conf
php:
image: php:${PHP_VERSION}-fpm
volumes:
- ./web:/web
After running docker-compose up, I can access my website perfectly at http://localhost:8000. But if I then access the nginx container, with:
$ docker-compose run nginx bash
and within the container I run:
$ service nginx stop
I can still see the website at http://localhost:8000 in the browser.
How can it be that after stopping the server in the container, the website is still being delivered?
The docker-compose run command starts a new container; you're stopping nginx in that new container, not in the one serving your site. What you want is docker attach nginx.
The documentation is located here.
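A sketch of the compose-level alternative, getting a shell inside the container that docker-compose up already started rather than a new one:
# List the containers this compose project is running
docker-compose ps
# Open a shell in the already-running nginx service container (not a new one)
docker-compose exec nginx bash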

Dockerfile and docker-compose not updating with new instructions

When I try to build a container using docker-compose like so
nginx:
  build: ./nginx
  ports:
    - "5000:80"
the COPY instruction isn't working when my Dockerfile simply looks like this:
FROM nginx
#Expose port 80
EXPOSE 80
COPY html /usr/share/nginx/test
#Start nginx server
RUN service nginx restart
What could be the problem?
It seems that when using the docker-compose command, it keeps a cached intermediate image that it doesn't show you and constantly reuses, never updating it correctly.
Sadly the documentation regarding something like this is poor. The way to fix this is to build it first with no cache and then bring it up, like so:
docker-compose build --no-cache
docker-compose up -d
I had the same issue, and a one-liner that does it for me is:
docker-compose up --build --remove-orphans --force-recreate
--build does the biggest part of the job and triggers the build.
--remove-orphans is useful if you have changed the name of one of your services. Otherwise, you might be left with a warning about the old, now wrongly named service dangling around.
--force-recreate is a little drastic but will force the recreation of the containers.
Reference: https://docs.docker.com/compose/reference/up/
Warning: I could do this on my project because I was toying around with really small container images. Recreating everything, every time, could take significant time depending on your situation.
If you need docker-compose to pick up your files every time you run the up command, I suggest declaring a volumes option for your service in the compose.yml file. It will persist your data and also make the files from that folder available inside the container.
More info here volume-configuration-reference
server:
  image: server
  container_name: server
  build:
    context: .
    dockerfile: server.Dockerfile
  env_file:
    - .envs/.server
  working_dir: /app
  volumes:
    - ./server_data:/app # <= here it is
  ports:
    - "9999:9999"
  command: ["command", "to", "run", "the", "server", "--some-options"]
Optionally, you can add the following section at the end of the compose.yml file. It keeps that folder persisted: the data in it will not be removed by the docker-compose stop or docker-compose down commands. To remove the folder, you will need to run the down command with the additional -v flag:
docker-compose down -v
For example, including volumes:
services:
  server:
    image: server
    container_name: server
    build:
      context: .
      dockerfile: server.Dockerfile
    env_file:
      - .envs/.server
    working_dir: /app
    volumes:
      - ./server_data:/app # <= here it is
    ports:
      - "9999:9999"
    command: ["command", "to", "run", "the", "server", "--some-options"]
volumes: # at the root level, the same as services
  server_data:
