docker-machine instance not running on port 8080 - docker

I am trying to get this tutorial to work. At the end of the tutorial it says to open http://192.168.99.100:8080 and see your website, but nothing shows up for me, and when I run docker-machine ls there are no entries at all. So my question is: how can I get a docker-machine instance running with an nginx container? I assume that is the container that serves the wwwroot folder, since it is the one that mounts it and has all of the port and root settings for the server.
My code in the docker-compose.yml is the same as the tutorial but here it is:
version: '2'
services:
  nginx:
    build:
      context: .
      dockerfile: docker.nginx
    image: my-nginx
    container_name: my-nginx-container
    ports:
      - "8080:8080"
    volumes:
      - wwwroot:/wwwroot
  webpack:
    build:
      context: .
      dockerfile: docker.webpack
    image: my-webpack
    container_name: my-webpack-container
    ports:
      - "35729:35729"
    volumes:
      - ./app:/app
      - /app/node_modules
      - wwwroot:/wwwroot
volumes:
  wwwroot:
    driver: local

From the comments, it has been established that your docker machine is running and that the server you are trying to reach is on localhost:8080.
Before you run docker-compose up, make sure that the directory contains the other required files and folders, for example nginx.conf and index.html.
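A quick sanity check, assuming the machine from the tutorial is named default (adjust to whatever docker-machine ls reports):
docker-machine ls                      # the machine should show as Running
docker-machine ip default              # should print something like 192.168.99.100
eval $(docker-machine env default)     # point the docker CLI at that machine
ls                                     # docker.nginx, docker.webpack, nginx.conf, index.html should sit next to docker-compose.yml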

Related

setup networking of multiple docker containers in different projects using docker-compose

Hello, I have multiple projects that have their own Dockerfiles and docker-compose.yml files. I am not too familiar with how I would set up the networking between these projects so that they could share the same databases and talk to one another. Does anyone have suggestions?
Right now, in one of the projects, I am just pulling all the Dockerfiles into one docker-compose.yml and setting up all the services I need from all the other projects in this one yml file. I do not think this is ideal, and there is a high level of coupling between the services.
version: "3"
services:
db:
image: mysql/mysql-server
ports:
- 3306:3306
mongo:
image: mongo
restart: always
rails_app:
build:
context: ${RAILS_APP_PATH}
dockerfile: Dockerfile
volumes:
- ${RAILS_APP_PATH}:/application
ports:
- 4000:4000
depends_on:
- db
- mongo
links:
- db
- mongo
frontend:
build:
context: ${FRONTEND_PATH}
ports:
- ${EXPOSED_PORT}:${EXPOSED_PORT}
depends_on:
- go_services
links:
- go_services
go_services:
build:
context: .
dockerfile: Dockerfile
ports:
- "8080:8080"
depends_on:
- db
- mongo
- rails_app
links:
- db
- mongo
- rails_app
The trick is to use an external Docker network.
Set up the network, and the containers can talk to each other by their service names.
Set up the network on the host:
docker network create my-net
First compose file
version: '3.9'
services:
  mymongo:
    image: mongo:latest
    restart: unless-stopped
    container_name: mongo
    environment:
      MONGO_INITDB_DATABASE: mymongo
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: password
    volumes:
      - ./database:/data/db
    ports:
      - "27017:27017"
networks:
  default:
    external: true
    name: my-net
Second compose file
version: '3.9'
services:
  ui:
    build:
      context: ./build
      dockerfile: Dockerfile_ui
    image: ui
    restart: "no"
    container_name: ui
    ports:
      - "8005:3000"
    command: ["npm", "start"]
networks:
  default:
    external: true
    name: my-net
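Once the network exists, bring both stacks up and verify that they really share it. A minimal check, with file paths assumed for illustration:
docker network create my-net
docker-compose -f mongo/docker-compose.yml up -d
docker-compose -f ui/docker-compose.yml up -d
docker network inspect my-net              # both containers should be listed
docker exec -it ui getent hosts mymongo    # service name resolves across projects (getent assumed present in the image)
The ui container can then reach the database at mongodb://mymongo:27017 by service name, or at mongodb://mongo:27017 by container name.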
You can do this without any special Compose setup, if:
each project is self-contained (they do not share databases)
the service locations are configurable via environment variables
you don't mind communicating via the host
If you're thinking about scaling up this project at all, this approach can look attractive. It will work even if you're running each Compose file on a different host, and it translates well into clustered environments like Kubernetes.
Go ahead and break up your Compose file into several independent ones:
# rails/docker-compose.yml
version: '3.8'
services:
  db:
    image: mysql/mysql-server
  app:
    build: .
    ports: ['4000:4000']
    depends_on: [db]
# go/docker-compose.yml
services:
  mongo:
    image: mongo
  service:
    build: .
    ports: ['8080:8080']
    depends_on: [mongo]
    environment:
      - RAILS_APP_URL
The very last line here passes the RAILS_APP_URL environment variable from the host environment into the container.
You can start the Rails application independently:
docker-compose -f ./rails/docker-compose.yml up -d
You need to find some hostname where the container can call back to the host. On macOS and Windows hosts, Docker provides the special hostname host.docker.internal for this. You can then point the client container at the published port of its server:
export RAILS_APP_URL=http://host.docker.internal:4000
docker-compose -f ./go/docker-compose.yml up
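To confirm the Go service can actually reach the Rails app, a hypothetical spot check (assuming curl exists in the image; service is the name from the go compose file):
docker-compose -f ./go/docker-compose.yml exec service \
  sh -c 'curl -s -o /dev/null -w "%{http_code}\n" "$RAILS_APP_URL"'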
If you're doing development, you can run the service you're working on locally, with its dependencies in containers, and point the environment variable at the locally published container port:
go build -o ./server ./cmd/server
export RAILS_APP_URL=http://localhost:4000
./server
If you want to run this setup on multiple hosts but without using a dedicated cluster manager like Docker Swarm or Kubernetes, set the environment variable to point at the DNS name of the host running the service. If you did want to translate this to Kubernetes, a Helm "chart" would be analogous, containing the Deployment, Service, etc. and dependencies for a single component, and you could configure the other service's URL through Helm values.

Docker image working on pull but not on pull image directive in yml file?

I have a Docker image on a GitLab registry.
When I (after logging in on a target machine) run
docker run -d -p 8081:8080/tcp gitlab.somedomain.com:5050/root/app
the Laravel app is available, running, and reachable. Things like php artisan config:clear work, and when I enter the container everything looks fine.
But I don't have any services running alongside it. So I had the idea to create a yml file for docker-compose to set things up, in docker-compose-gitlab.yml:
version: '3'
services:
  mysql:
    image: mysql:5.7
    container_name: my-mysql
    environment:
      - MYSQL_ROOT_PASSWORD=***
      - MYSQL_DATABASE=dbname
      - MYSQL_USER=username
      - MYSQL_PASSWORD=***
    volumes:
      - ./data/mysql:/var/lib/mysql
    ports:
      - "3307:3306"
  application:
    image: gitlab.somedomain.com:5050/root/app:latest
    build:
      context: .
      dockerfile: ./Dockerfile
    container_name: my-app
    ports:
      - "8081:8080"
    volumes:
      - .:/application
    env_file: .env.docker
    working_dir: /application
    depends_on:
      - mysql
    links:
      - mysql
Calling docker-compose --verbose -f docker-compose-gitlab.yml up shows me that the mysql service is created and working; the app also seems to be created, but then fails, exiting with code 0 and no further message.
If I add commands in my yml like php artisan config:clear, the error gets even less clear for me: it says it cannot find artisan, as if the command were executed outside the container, and it exits with code 1. (artisan is a helper executed via php.)
When I call docker-compose with -d and then run docker ps, I can only see mysql running, but not the app.
When I use both strategies, the problem is that the two containers do not share a common network and so cannot work together.
What did I miss? Is this the wrong strategy?
The problem was a leftover volume directive, which overwrites my entire application with an empty directory.
You can just leave that out:
version: '3'
services:
  mysql:
    image: mysql:5.7
    container_name: my-mysql
    environment:
      - MYSQL_ROOT_PASSWORD=***
      - MYSQL_DATABASE=dbname
      - MYSQL_USER=username
      - MYSQL_PASSWORD=***
    volumes:
      - ./data/mysql:/var/lib/mysql
    ports:
      - "3307:3306"
  application:
    image: gitlab.somedomain.com:5050/root/app:latest
    build:
      context: .
      dockerfile: ./Dockerfile
    container_name: my-app
    ports:
      - "8081:8080"
    ## volumes:
    ##   - .:/application   ## this would overwrite the app
    env_file: .env.docker
    working_dir: /application
    depends_on:
      - mysql
    links:
      - mysql
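After dropping the volume override, a rebuild picks up the fix (a sketch, reusing the file name from the question):
docker-compose -f docker-compose-gitlab.yml up --build -d
docker ps        # my-app should now stay running next to my-mysql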
You can debug the network of the containers by listing the networks with docker network ls,
then, when the list is shown, inspecting the compose network with docker network inspect <ComposeNetworkID>.
Once you are sure that your services are not in the same network, remove your containers and recreate them with docker-compose -f docker-compose-gitlab.yml up.
If you notice they are in the same network, try to use the container name instead of localhost to reach each other.
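A minimal version of that debugging session might look like this, assuming the default compose project name (derived from the directory, here taken to be app):
docker network ls
docker network inspect app_default             # check which containers are attached
docker-compose -f docker-compose-gitlab.yml down
docker-compose -f docker-compose-gitlab.yml up -d
docker exec -it my-app sh -c 'getent hosts mysql'   # the service name should resolve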

Connection between docker containers: need to put gateway instead of name of container, why?

I was testing things on my own and hit a problem: trying to connect a node-express container (app) to a mongo container (database). I can connect to mongo from MongoDB Compass at localhost:27017, but I can't connect from the node-express container with a mongoose connection URL like 'mongodb://localhost:27017/dbtest'.
So I looked up some solutions on SO (like this), and the answers said that instead of 'mongodb://localhost:27017/dbtest' I should write the name of my container, 'mongodb://mymongo:27017/dbtest', but for me this didn't work; I only receive an ECONNREFUSED error.
The containers are in the same network. Here are my Dockerfile and docker-compose file.
Dockerfile
#node 8.16.2
FROM node:8.16.2
COPY . /app
WORKDIR /app
RUN npm install
EXPOSE 3000
CMD ["npm","start"]
docker-compose.yaml
version: "3.7"
services:
db:
image: mongo
ports:
- 27017:27017
networks:
- testing
app:
build:
context: .
dockerfile: Dockerfile
networks:
- testing
networks:
testing:
I solved this problem by using mongodb://172.17.0.1:27017/dbtest, where 172.17.0.1 is the gateway of the network the containers are on.
Can someone explain this behavior and whether it is correct?
Platform Linux
Where did you get the name mymongo from? You have defined the name of the mongodb service as db in your compose file, so use the connection string 'mongodb://db:27017/dbtest'.
version: "3.7"
services:
db: ---------------> This is the name of your mongo service
image: mongo
ports:
- 27017:27017
networks:
- testing
app:
build:
context: .
dockerfile: Dockerfile
networks:
- testing
networks:
testing:
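A quick way to confirm the resolution from inside the app container (a sketch; getent is available in Debian-based images like node:8.16.2):
docker-compose exec app getent hosts db    # prints the IP compose assigned to the db service
The gateway address 172.17.0.1 likely only worked because db publishes 27017 to the host; the service name is the portable form.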

Cannot configure nginx reverse proxy with php support in docker compose

I have been attempting to configure an nginx reverse proxy with PHP support in docker compose, for an app service that runs on port 3838. I want the app to be served through nginx-proxy on port 80. I have combed through several tutorials online but none of them has helped me resolve the problem. I also tried to follow https://github.com/dmitrym0/simple-lets-encrypt-docker-compose-sample/blob/master/docker-compose.yml but it didn't work. Here is my current docker compose file.
docker-compose.yml
version: '3'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - "82:80"
      - "444:443"
    volumes:
      - "/etc/nginx/vhost.d"
      - "/usr/share/nginx/html"
      - "/var/run/docker.sock:/tmp/docker.sock:ro"
      - "/etc/nginx/certs"
  app:
    build:
      context: .
      dockerfile: ./app/Dockerfile
    image: rocker/shiny
    container_name: docker-app
    restart: always
    ports:
      - 3838:3838
Am I missing something? Sometimes I see VIRTUAL_HOST environment variables included in the docker-compose file. Is that needed? Also, do I have to manually configure nginx config files and attach them to the jwilder/nginx-proxy Dockerfile? I am a newbie at Docker and I really need some help.
Please refer to the Multiple Ports section of the nginx-proxy official docs. In your case, besides setting the mandatory VIRTUAL_HOST env variable (without this a container won't be reverse proxied by the nginx-proxy service), you have to set the VIRTUAL_PORT variable, as nginx-proxy defaults to the service running on port 80, but your app service is bound to port 3838.
Try this docker-compose.yml file to see if it works:
version: "3"
services:
nginx-proxy:
image: jwilder/nginx-proxy
ports:
- "80:80"
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
app:
build:
context: .
dockerfile: ./app/Dockerfile
image: rocker/shiny
container_name: docker-app
restart: always
expose:
- 3838
environment:
- VIRTUAL_HOST=app.localhost
- VIRTUAL_PORT=3838
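With this up, nginx-proxy routes requests by their Host header, so a quick check from the Docker host could be (app.localhost is the VIRTUAL_HOST value chosen above; the explicit header lets curl send it even if the name doesn't resolve locally):
curl -H 'Host: app.localhost' http://localhost/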

OSRM Server setup using docker-compose - container exited

I'm trying to set up the OSRM server on top of Docker. I need to configure it using docker-compose.yml inside my microservices project using .NET Core.
version: '3.4'
services:
  test_web:
    image: ${DOCKER_REGISTRY-}osrmweb
    build:
      context: .
      dockerfile: OSRM_WEB/Dockerfile
    ports:
      - 1111
  osrm-data:
    image: irony/osrm5
    volumes:
      - /data
  osrm:
    image: irony/osrm5
    volumes:
      - osrm-data
    ports:
      - 5000:5000
    command: ./start.sh Sweden http://download.geofabrik.de/europe/sweden-latest.osm.pbf
I have this docker-compose.yml file. When I run docker-compose up there is no error, but the container is Exited.
I can't see any response at this URL:
https://localhost:5000/route/v1/driving/13.388860,52.517037;13.397634,52.529407;13.428555,52.523219?overview=false
Anything else I need to do in the configuration? Any suggestions?
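When a compose service exits silently, its logs are the first place to look. A minimal debugging pass, using the service names from the file above:
docker-compose ps            # shows the exit code of each service
docker-compose logs osrm     # why did the osrm container stop?
Note also that the osrm service's volumes entry names the osrm-data service rather than a named volume or path; sharing data between two services normally needs a named volume declared under a top-level volumes: key, so that line may be worth a second look.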
