How to create Docker images and run them on EC2? - docker

I'm very new to the Docker world. I have a docker-compose file that works fine for me locally.
But how do I create these Docker images and run them on an EC2 instance?
Any help would be appreciated.
PS: I don't want to use ECS or ECR for this. I hope Docker Hub will work fine for storing and retrieving these images (correct me if I'm wrong).
Thanks.
version: "3"
services:
app:
image: node:12.13.1
volumes:
- ./:/app
working_dir: /app
depends_on:
- mongo
- nats
environment:
NODE_ENV: development
ports:
- 3000:3000
command: npm run dev
app_2:
image: node:12.13.1
volumes:
- ../app_2/:/app
working_dir: /app_2
depends_on:
- mongo
- nats
links:
- mongo
environment:
NODE_ENV: development
ports:
- 4000:4000
command: npm run dev
mongo:
image: mongo
expose:
- 27017
ports:
- "27017:27017"
volumes:
- ./data/db:/data/db
nats:
image: 'nats:2.1.2'
expose:
- "4222"
ports:
- "8222:8222"
hostname: nats-server

Install Docker and docker-compose on the instance, then just run docker-compose up. Just remember to open port 4000 of the EC2 instance (in its security group) so it is accessible from your IP or any other IPs that need it.
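Docker Hub works fine for storing the images. As a rough sketch of the workflow (the account name your-dockerhub-user, the image name my-app, and the Amazon Linux 2 commands below are placeholders/assumptions; adjust them for your distribution and project):

# On your development machine: build the app image and push it to Docker Hub.
docker build -t your-dockerhub-user/my-app:1.0 .
docker login
docker push your-dockerhub-user/my-app:1.0

# On the EC2 instance (Amazon Linux 2 shown as an example; use apt-get on Ubuntu):
sudo yum install -y docker
sudo service docker start
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

# Copy the docker-compose.yml (and any source it mounts) to the instance, then:
docker-compose pull      # fetches node, mongo, nats and anything you pushed to Docker Hub
docker-compose up -d

Note that the compose file above mounts ./ into a stock node:12.13.1 image, so you either copy the project source to the instance as well, or build the app into its own image (as in the sketch above) and reference that image in the compose file instead.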

Related

Get seen on a local network with docker-compose

I am currently working on a mobile app that connects to a server instance running in Docker via docker-compose. An emulator on my development machine can see the server fine, but if I try to use my actual mobile device I can't see the server, as it is not on the same network. Is there an easy way I can set this up so it can be seen by both my emulator and my mobile at the same time?
My docker-compose setup is:
version: '3.1'
services:
  node:
    container_name: nodejs
    build: .
    #restart: always
    ports:
      - 8080:8080
      - 3000:3000
    volumes:
      - .:/usr/src/app
    environment:
      PORT: 3000
    extra_hosts:
      - "nodeserver:10.1.1.222"
    depends_on:
      - mongo
  mongo:
    container_name: mongodb
    image: mongo
    restart: always
    ports:
      - 27017:27017
    volumes:
      - ./db:/data/db
    command: mongod
  mongo-express:
    container_name: mongoExpress
    image: mongo-express
    restart: always
    ports:
      - 9081:8081
    environment:
      ME_CONFIG_MONGODB_USERNAME: admin
      ME_CONFIG_MONGODB_PASSWORD: password
    depends_on:
      - mongo
I am not much of a net-ops guy, so any real help here would be appreciated.
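For what it's worth, since the compose file publishes ports 3000 and 8080 on all host interfaces, a device on the same Wi-Fi can usually reach the server through the development machine's LAN address (10.1.1.222, going by the extra_hosts entry above), assuming no firewall is in the way. A quick check, as a sketch:

# From the phone (or any machine on the same network), hit the published ports
# via the dev machine's LAN IP rather than localhost:
curl http://10.1.1.222:3000/
curl http://10.1.1.222:8080/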

Why doesn't my Dockerfile run multiple commands?

I want to use Docker to run my project (React + Node.js + MongoDB).
Dockerfile:
FROM node:8.9-alpine
ENV NODE_ENV production
WORKDIR /usr/src/app
COPY ["package.json", "package-lock.json*", "npm-shrinkwrap.json*", "./"]
RUN npm install --production --silent && mv node_modules ../
COPY . .
CMD nohup sh -c 'npm start && node ./server/server.js'
docker-compose.yml:
version: '2.1'
services:
  chat:
    image: chat
    container_name: chat
    build: .
    environment:
      NODE_ENV: production
    ports:
      - "3000:3000"
      - "8080:8080"
    volumes:
      - ./:/usr/src/app
    links:
      - mongo
  mongo:
    container_name: mongo
    image: mongo
    ports:
      - "27017:27017"
When I run docker-compose up --build, port 3000 works (localhost:3000), but port 8080 dies (localhost:8080).
I would suggest creating a container for the server and keeping it separate from the "chat" container. It's best to have each container do one thing and one thing only (much like the philosophy behind Unix commands).
In any case, here are some modifications that I would make to the compose file.
version: '2.1'
services:
  chat:
    image: chat
    container_name: chat
    build: .
    environment:
      NODE_ENV: production
    ports:
      - "3000:3000"
      - "8080:8080"
    volumes:
      - ./:/usr/src/app
    links:
      - mongo
  mongo:
    container_name: mongo
    image: mongo
    # You don't need to expose this port to the outside world. Because you linked
    # the two containers, the chat app will be able to connect to MongoDB using
    # the hostname "mongo" inside the container network.
    # ports:
    #   - "27017:27017"
By the way, what happens if you run:
$ docker-compose down
and then
$ docker-compose up
$ docker ps
Can you see the ports exposed in the docker ps output?
Your chat service depends on mongo, so you also need to have this in your chat service:
depends_on:
  - mongo
This docker-compose file works for me. Note that I am saving the data from the database to a local directory. You should add this directory to .gitignore.
version: "3.2"
services:
mongo:
container_name: mongo
image: mongo:latest
environment:
- MONGO_INITDB_ROOT_USERNAME=root
- MONGO_INITDB_ROOT_PASSWORD=password
- NODE_ENV=production
ports:
- "28017:27017"
expose:
- 28017 # you can connect to this mongodb with studio3t
volumes:
- ./mongodb-data:/data/db
restart: always
networks:
- docker-network
express:
container_name: express
environment:
- NODE_ENV=development
restart: always
build:
context: .
args:
buildno: 1
expose:
- 3000
ports:
- "3000:3000"
links:
- mongo # link this service to the database service
depends_on:
- mongo
command: "npm start" # override the default command to use nodemon in dev
networks:
- docker-network
networks:
docker-network:
driver: bridge
You may also find that, from Node, you have to wait for the MongoDB container to be ready before you can connect to the database.
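A common way to handle that is a small wait script that the Node service runs before its real command. A minimal sketch, assuming a hypothetical wait-for-mongo.sh added to the project and that nc is available in the image (it is in the alpine/busybox-based node images):

#!/bin/sh
# wait-for-mongo.sh -- block until the "mongo" service accepts TCP connections
# on 27017 (its in-network port), then exec the command passed as arguments.
until nc -z mongo 27017; do
  echo "waiting for mongodb..."
  sleep 1
done
exec "$@"

The express service would then use something like command: ./wait-for-mongo.sh npm start instead of command: "npm start".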

Docker machine copying all of the host machine files?

I'm fairly new to docker, but I recently discovered something that I just can't wrap my head around. I started a docker machine:
docker-machine create -d virtualbox machine_name
Created a docker-compose file for my application:
version: '3.3'
services:
  client:
    container_name: client
    build:
      context: ./services/client
      dockerfile: Dockerfile
    volumes:
      - './services/client:/usr/src/app'
    ports:
      - '3007:3000'
    environment:
      - NODE_ENV=development
    depends_on:
      - project
    links:
      - project
  db:
    container_name: db
    build:
      context: ./services/db
      dockerfile: Dockerfile
    ports:
      - 5435:5432
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  project:
    container_name: project
    build: ./services/project
    volumes:
      - './services/project:/usr/src/app'
      - './services/project/package.json:/usr/src/app/package.json'
    ports:
      - 3000:3000
    environment:
      - DATABASE_URL=postgres://postgres:postgres@db:5432/esports_manager_dev
      - DATABASE_TEST_URL=postgres://postgres:postgres@db:5432/esports_manager_test
      - NODE_ENV=${NODE_ENV}
      - TOKEN_SECRET=tempsectre
    depends_on:
      - db
    links:
      - db
I then ssh'd into the Docker machine and found my entire filesystem there. Is this intended behaviour? I can't seem to find anything in the docs that talks about it.

nginx-proxy : how to expose the proxy over the internet on AWS?

Firstly, thank you for your time.
I was trying my hand at Docker when I saw this article:
http://jasonwilder.com/blog/2014/03/25/automated-nginx-reverse-proxy-for-docker/
Please have a look at my docker-compose.yml file; I am using the images below:
jwilder/nginx-proxy:latest
grafana/grafana:4.6.2
version: "2"
services:
proxy:
build: ./proxy
container_name: proxy
restart: always
volumes:
- /etc/localtime:/etc/localtime:ro
- /var/run/docker.sock:/tmp/docker.sock:ro
ports:
- 80:80
- 443:443
grafana:
build: ./grafana
container_name: grafana
volumes:
- grafana-data:/var/lib/grafana
environment:
VIRTUAL_HOST: grafana.localhost
GF_SECURITY_ADMIN_PASSWORD: password
depends_on:
- proxy
volumes:
grafana-data:
So when I run docker-compose up -d on my local system, I am able to access the Grafana container.
Now I have deployed this Docker app on AWS. How do I access the Grafana container on EC2 with VIRTUAL_HOST?
Any help or idea on how to do this will be appreciated! Thanks!
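nginx-proxy routes requests by matching the HTTP Host header against each container's VIRTUAL_HOST, so on EC2 the question is really how to get requests for that hostname to the instance. As a sketch (203.0.113.10 stands in for your instance's public IP, and the security group must allow inbound port 80):

# Quick check from your own machine: send a request whose Host header matches VIRTUAL_HOST.
curl -H "Host: grafana.localhost" http://203.0.113.10/

# For a browsable setup, use a hostname you control, e.g. set
#   VIRTUAL_HOST: grafana.example.com
# in the grafana service, point a DNS record (or an /etc/hosts entry) for that
# name at the EC2 public IP, and open http://grafana.example.com/ as usual.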

How to run a Docker container in its own network

Today I switched from "Docker Toolbox" to "Docker for Mac", because Docker now finally has write access to my User directory (which didn't work with "Docker Toolbox") - yay!
But this change also means that all containers now run under my localhost and not under Docker's own IP as before (e.g. 192.168.99.100).
Since my localhost already listens on various ports by default (80, 443, ...) and I don't want to keep appending newly created, non-conflicting ports to my local dev domains (e.g. example.dev:8443), I wonder how to run my containers as before.
I read about network configs and tried a lot of things (creating a new host network, exposing ports with an IP in front of them, ...), but didn't get it working.
What kind of config do I need to run my app container with the IP 192.168.99.100? This is my docker-compose.yml so far.
version: '2'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    depends_on:
      - mysql
      - redis
      - memcached
    ports:
      - 80:80
      - 443:443
      - 22:22
      - 3000:3000
      - 3001:3001
    volumes:
      - ./app/:/app/
      - /tmp/debug/:/tmp/debug/
      - ./:/docker/
    volumes_from:
      - storage
    # cap and privileged needed for slowlog
    cap_add:
      - SYS_PTRACE
    privileged: true
    env_file:
      - etc/environment.yml
      - etc/environment.development.yml
  mysql:
    build:
      context: docker/mysql/
      dockerfile: MariaDB-10
    ports:
      - 3306:3306
    volumes_from:
      - storage
    volumes:
      - ./data/mysql:/var/lib/mysql
      - /tmp/debug/:/tmp/debug/
    env_file:
      - etc/environment.yml
      - etc/environment.development.yml
  redis:
    build: docker/redis/
    volumes_from:
      - storage
    env_file:
      - etc/environment.yml
      - etc/environment.development.yml
  memcached:
    build: docker/memcached/
    volumes_from:
      - storage
    env_file:
      - etc/environment.yml
      - etc/environment.development.yml
  storage:
    build: docker/storage/
    volumes:
      - /storage
You need to declare "networks:" for each of your services:
e.g.
version: '2'
services:
  app:
    image: xxxx:xxx
    ports:
      - "80:80"
    networks:
      - my-network
  mysql:
    image: xxxx:xxx
    networks:
      - my-network
networks:
  my-network:
    driver: bridge
Then, from your app's configuration, you can use "mysql" as the hostname of the database server.
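For example (a sketch, assuming the mysql client is installed in the app image and the server listens on its default port), the service name resolves inside my-network:

# From inside the "app" container, "mysql" is a resolvable hostname on my-network:
docker-compose exec app mysql -h mysql -P 3306 -u root -p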
You can define a network in your compose file, then add any services to the network.
https://docs.docker.com/compose/networking/
But I would suggest you just use different ports now that you are running natively, e.g. 8080:80.
