Access Go API hosted in a Docker container

I have created a Go backend server that contains an API. Everything works well when running it locally with "go run server". However, I ran into issues when running it in Docker: I created a Dockerfile and run the Linux container in network mode host, but I can't access the API in the browser. If it were working, I should be able to see a JSON response from http://localhost:8500/status. So I'm thinking I need permissions, extra flags, or more installation-related scripts. I have been testing different flags and ports in Docker but can't identify the issue. See the code, Dockerfile, and commands below.
Note: When I run the program locally on Windows a security warning pops up; perhaps this is related to the issue?
I'm running Docker Desktop Community v2.2.0.5 (stable).
API code:
import (
    "log"
    "net/http"

    conf "server/conf"

    "github.com/gorilla/mux"
    "github.com/jinzhu/gorm"
    _ "github.com/jinzhu/gorm/dialects/mysql"
)

var db *gorm.DB

// Middleware used to add correct permissions for all requests
func middleware(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        w.Header().Set("Content-Type", "application/json")
        w.Header().Set("Access-Control-Allow-Origin", "*")
        w.Header().Set("Access-Control-Allow-Headers", "Content-Type")
        next.ServeHTTP(w, r)
    })
}

func StartApi(config conf.Config, _db *gorm.DB) {
    log.Println("Start Api")
    db = _db
    router := mux.NewRouter()
    // Create
    router.HandleFunc("/login", login).Methods("POST", "OPTIONS")
    router.HandleFunc("/network", createNetwork).Methods("POST", "OPTIONS")
    // Read
    router.HandleFunc("/users/{networkName}", getUsers).Methods("GET", "OPTIONS")
    router.HandleFunc("/streams/{networkName}", getStreams).Methods("GET", "OPTIONS")
    router.HandleFunc("/status", showStatus).Methods("GET", "OPTIONS")
    log.Println("API started at port " + config.Backend.Api_port) // api port is stored in config
    log.Println(http.ListenAndServe(":"+config.Backend.Api_port, middleware(router)))
    log.Println("Api done!")
}
Dockerfile:
# Start from the latest golang base image
FROM golang:latest
WORKDIR /go/src/server
COPY ./src/server .
# Download packages
RUN go get .
# Compile code
#RUN go build -o server
RUN go install server
# Expose ports
EXPOSE 8500
EXPOSE 8600
#ENTRYPOINT ./server
CMD ["go", "run", "server"]
Docker-compose:
version: '3'
services:
  react:
    build: ./ReactFrontend/
    container_name: ReactFrontend
    tty: true
    ports:
      - 4000:3000
  backend:
    network_mode: host
    build: ./GoBackend/
    container_name: goserver
    ports:
      - 8500:8500
      - 8600:8600
Docker command:
docker run --net=host goserver
Any help is appreciated,
Cheers!

So I solved it, for now, as follows:
As mentioned here:
https://docs.docker.com/compose/networking/
Add to docker-compose:
networks:
  default:
    external:
      name: database_default
As mentioned here: Access to mysql container from other container
I can connect to the database as <db_container_name>:3306 in the backend.
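On the Go side this just means the MySQL host in the connection string is the database container's name; a minimal sketch using github.com/jinzhu/gorm (the container name, credentials and database name below are placeholders, not this project's real values):

// Sketch: connect to the MySQL container by its Compose service/container name.
// On the shared "database_default" network, Docker's embedded DNS resolves that name;
// the mysql dialect import shown in the API code above is still required.
dsn := "user:password@tcp(db_container_name:3306)/mydb?charset=utf8&parseTime=True"
db, err := gorm.Open("mysql", dsn)
if err != nil {
    log.Fatal(err)
}
defer db.Close()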
To automate the process, I created a .sh script that handles an extra config-setup step on container start. However, because of the structure of my config.yml it was hard to update with plain "sed" commands, so I wrote a small Python program that updates all the config data. The Dockerfile, docker-compose file, setup.sh and update_config.py are shown below.
setup.sh:
#!/bin/bash
# Don't remove this!
# This file is used by dockerfile to replace configs
# Replace config on run
python3 update_config.py
# Start program
go run server
Dockerfile:
# Start from the latest golang base image
FROM golang:latest
WORKDIR /go/src/server
COPY ./src/server .
# Install python3 and yml compatibility
RUN apt-get update && apt-get install -y python3-pip
RUN python3 --version
RUN pip3 install PyYAML
# Download packages
RUN go get .
# Compile code
#RUN go build -o server
RUN go install server
# Expose ports
EXPOSE 8500
EXPOSE 8600
# ENV
ENV DB_HOST "mysql:3306"
#CHMOD setup
RUN chmod +x setup.sh
CMD ["./setup.sh"]
Docker-compose:
version: '3'
services:
  react:
    build: ./ReactFrontend/
    container_name: ReactFrontend
    tty: true
    ports:
      - 4000:3000
  backend:
    build: ./GoBackend/
    container_name: GoBackend
    environment:
      DB_HOST: mysql:3306 # Name or IP of DB container!
    ports:
      - 8500:8500
      - 8600:8600
networks:
  default:
    external:
      name: database_default
update_config.py:
import yaml
import os
"""
DONT REMOVE
This file is used in the dockerfile!
"""
fname = "/go/src/server/config.yml"
stream = open(fname, 'r')
data = yaml.safe_load(stream)
stream.close()
# Add more updates here!
if os.environ.get('DB_HOST') is not None:
    data["database"]["ip"] = os.environ['DB_HOST']
# Updated data print
print("Updated Data", data)
# Write changes to config
stream = open(fname, 'w')
yaml.dump(data, stream)
stream.close()
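As a side note, a simpler alternative (a sketch only, not what this project does) would be for the Go backend to read the same DB_HOST variable directly and fall back to config.yml, skipping the rewrite step entirely:

// Sketch: prefer the DB_HOST environment variable over the value from config.yml.
// config.Database.Ip is a placeholder for however the config struct exposes the host;
// requires "os" in the import block.
dbHost := config.Database.Ip
if env := os.Getenv("DB_HOST"); env != "" {
    dbHost = env
}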
Example docker command that now works if we only want the container to run:
docker run -p 8500:8500 -p 8600:8600 --net database_default goserver
It is working fine, and we avoid using the unnecessary host network mode!

Related

flask app won't load in browser after executing docker-compose up

I am trying to learn to deploy apps using docker. I have a flask app I am trying to deploy. Each time I run docker-compose up, the output doesn't show any errors but the app won't load in the browser. I make use of flask-sqlalchemy to connect to a mysql database. The contents of my docker-compose file are as follows:
version: "3.7"
services:
db:
image: mysql:5.7
ports:
- "32000:3306"
environment:
MYSQL_HOST: db
MYSQL_USER: root
MYSQL_ROOT_PASSWORD: data_scientist
volumes:
- ./db/route_client.sql:/docker-entrypoint-initdb.d/route_client.sql:ro
app:
build: C:\abdul_files\flask_apps\leaflet\control_search
links:
- db
ports:
- "5001:5000"
depends_on:
- db
The contents of my Dockerfile are as follows:
# Use an official Python runtime as an image
FROM python:2.7.9
# The EXPOSE instruction indicates the ports on which a container
# will listen for connections
# Since Flask apps listen to port 5000 by default, we expose it
EXPOSE 5000
# Sets the working directory for following COPY and CMD instructions
# Notice we haven’t created a directory by this name - this instruction
# creates a directory with this name if it doesn’t exist
WORKDIR /control_search
# Install any needed packages specified in requirements.txt
COPY requirements.txt /control_search
RUN pip install -r requirements.txt
# Run app.py when the container launches
COPY . /control_search
CMD python control_search.py
The contents of my config file where I have my sqlalchemy database uri are as follows:
import os

class Config(object):
    SQLALCHEMY_DATABASE_URI = 'mysql://root:data_scientist@db/route_client'
    SECRET_KEY = os.urandom(32)
    SESSION_TYPE = 'filesystem'
As I said earlier, the docker-compose up command executes without errors but the app won't load in the browser. Any pointers to what I am doing wrong will be greatly appreciated.

unable to access docker container port from localhost when using docker-compose

I have a very basic node/express app with a dockerfile and a docker-compose file. When I run the docker container using
docker run -p 3000:3000 service:0.0.1 npm run dev
I can go to localhost:3000 and see my service. However, when I do:
docker-compose run server npm run dev
I can't see anything on localhost:3000, below are my files:
Dockerfile
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
EXPOSE 3000
docker-compose.yml
version: "3.7"
services:
server:
build: .
ports:
- "3000:3000"
image: service:0.0.1
environment:
- LOGLEVEL=debug
depends_on:
- db
db:
container_name: "website_service__db"
image: postgres
environment:
- POSTGRES_USER=user
- POSTGRES_PASSWORD=pass
- POSTGRES_DB=website_service
Also, everything is working fine from the terminal/Docker side: no errors, and the services are running fine. I just can't access the node endpoints.
tl;dr
docker-compose run --service-ports server npm run dev
// the part that changed is the new '--service-ports' argument
The issue was a missing docker-compose run argument, --service-ports.
From the docs:
The second difference is that the docker-compose run command does not create any of the ports specified in the service configuration. This prevents port collisions with already-open ports. If you do want the service’s ports to be created and mapped to the host, specify the --service-ports flag:

How to deploy tpm2-tools in a docker container

I'm trying to deploy a TPM service inside a docker container following the instructions detailed in https://github.com/vchatterji/tpm2-luks (this process perfectly worked for me on a virtual machine with Ubuntu 18.04 installed).
Everything works well until the TPM-ABRMD service needs to be enabled, which requires running udevadm and systemctl commands to reload the udev rules and to enable and activate the service.
Neither of these commands is available inside my Docker container by default, and although I figured out a workaround to use udevadm and to manually set the symlinks that systemctl enable would create, it wasn't enough to start the ABRMD service.
Is there something I'm missing? I've tried many other possibilities and none has worked.
I'm using docker-compose, and the container in which I'd like to install this service has this docker-compose.yml file associated:
version: '3'
services:
  api:
    build:
      context: dockerfiles
    image: api
    container_name: api
    restart: unless-stopped
    ports:
      - ${API_EXTERNAL_PORT}:8080
    depends_on:
      - sql
    environment:
      HOST_DB: sql
      PORT_DB: 3306
      NAME_DB: ${MYSQL_DATABASE}
      PASSWORD_DB: ${MYSQL_PASSWORD}
      USER_DB: ${MYSQL_USER}
And the following Dockerfile (inside /dockerfiles):
FROM ubuntu:18.04
WORKDIR /app
RUN apt-get update && apt-get install -y openjdk-8-jdk
ENV JAVA_OPTS ""
ENTRYPOINT ["sh", "-c"]
CMD ["java $JAVA_OPTS -Djava.secutiry.egd=file:/dev/./urandom -jar /app/app.war"]
Thanks in advance.

How to fill elasticsearch database before starting webapp with docker-compose?

I am trying to make a Dockerfile and docker-compose.yml for a webapp that uses elasticsearch. I have connected elasticsearch to the webapp and exposed it to host. However, before the webapp runs I need to create elasticsearch indices and fill them. I have 2 scripts to do this, data_scripts/createElasticIndex.js and data_scripts/parseGenesToElastic.js. I tried adding these to the Dockerfile with
CMD [ "node", "data_scripts/createElasticIndex.js"]
CMD [ "node", "data_scripts/parseGenesToElastic.js"]
CMD ["npm", "start"]
but after I run docker-compose up, no indices are created. How can I fill Elasticsearch before running the webapp?
Dockerfile:
FROM node:11.9.0
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
COPY package*.json ./
# Install any needed packages specified in requirements.txt
RUN npm install
# If you are building your code for production
# RUN npm ci --only=production
#
RUN npm build
RUN npm i natives
# Bundle app source
COPY . .
# Make port 80 available to the world outside this container
EXPOSE 80
# Run app.py when the container launches
CMD [ "node", "data_scripts/createElasticIndex.js"]
CMD [ "node", "data_scripts/parseGenesToElastic.js"]
CMD [ "node", "servers/PredictionServer.js"]
CMD [ "node", "--max-old-space-size=8192", "servers/PWAServerInMem.js"]
CMD ["npm", "start"]
docker-compose.yml:
version: "3"
services:
web:
# replace username/repo:tag with your name and image details
image: webapp
ports:
- "1337:1337"
- "4000:85"
depends_on:
- redis
- elasticsearch
networks:
- redis
- elasticsearch
volumes:
- "/data:/data"
environment:
- "discovery.zen.ping.unicast.hosts=elasticsearch"
- ELASTICSEARCH_URL=http://elasticsearch:9200"
- ELASTICSEARCH_HOST=elasticsearch
redis:
image: redis
networks:
- redis
ports:
- "6379:6379"
expose:
- "6379"
elasticsearch:
image: elasticsearch:2.4
ports:
- 9200:9200
- 9300:9300
expose:
- "9200"
- "9300"
networks:
- elasticsearch
networks:
redis:
driver: bridge
elasticsearch:
driver: bridge
A Docker container only ever runs one command. When your Dockerfile has multiple CMD lines, only the last one has any effect, and the rest are ignored. (ENTRYPOINT here is just a different way to provide the single command; if you specify both ENTRYPOINT and CMD then the entrypoint becomes the main process and the command is passed as arguments to it.)
Given the example you show, I'd run this in three steps:
1. Start only the database:
docker-compose up -d elasticsearch
2. Run the "seed" jobs. For simplicity I'd probably run them locally:
ELASTICSEARCH_URL=http://localhost:9200 node data_scripts/createElasticIndex.js
(using your physical host's name from the point of view of a script running directly on the physical host, and the published port from the container), but if you prefer you can also run them via the Docker setup:
docker-compose run web data_scripts/createElasticIndex.js
3. Once the database is set up, start your whole application:
docker-compose up -d
This will leave the running Elasticsearch unaffected, and start the other containers.
An alternate pattern, if you're confident you want to run these "seed" or migration jobs on every single container start, is to write an entrypoint script. The basic pattern is to keep starting your server via CMD as you have it now, but to write a script that does the first-time setup, ends with exec "$@" to run the command, and to make that script your container's ENTRYPOINT. This could look like:
#!/bin/sh
# I am entrypoint.sh
# Stop immediately if any of these scripts fail
set -e
# Run the migration/seed jobs
node data_scripts/createElasticIndex.js
node data_scripts/parseGenesToElastic.js
# Run the CMD / `docker run ...` command
exec "$#"
# I am Dockerfile
FROM node:11.9.0
...
COPY entrypoint.sh ./ # if not already copied
RUN chmod +x entrypoint.sh # if not already executable
ENTRYPOINT ["/app/entrypoint.sh"]
CMD ["npm", "start"]
Since the entrypoint script really is just a shell script, you can use arbitrary logic here, for instance running the seed jobs only when the command is the actual server (if [ "$1" == npm ]; then ... fi) but not for debugging shells (docker run --rm -it myimage bash).
Your Dockerfile also looks like you might be trying to start three different servers (PredictionServer.js, PWAServerInMem.js, and whatever npm start starts); you can run these in three separate containers from the same image and specify the command: in each docker-compose.yml block.
Your docker-compose.yml will be simpler if you remove the networks: (unless it's vital to you that your Elasticsearch and Redis can't talk to each other; it usually isn't) and the expose: declarations (which do nothing, especially in the presence of ports:).
I faced the same issue, and I started my journey using the same approach posted here.
I was redesigning some queries that required me frequently index settings and properties mapping changes, plus changes in the dataset that I was using as an example.
I searched for a docker image that I could easily add to my docker-compose file to allow me to change anything in either the index settings or in the dataset example. Then, I could simply run docker-compose up, and I'd see the changes in my local kibana.
I found nothing, and I ended up creating one on my own. So I'm sharing here because it could be an answer, plus I really hope to help someone else with the same issue.
You can use it as follows:
elasticsearch-seed:
  container_name: elasticsearch-seed
  image: richardsilveira/elasticsearch-seed
  environment:
    - ELASTICSEARCH_URL=http://elasticsearch:9200
    - INDEX_NAME=my-index
  volumes:
    - ./my-custom-index-settings.json:/seed/index-settings.json
    - ./my-custom-index-bulk-payload.json:/seed/index-bulk-payload.json
You simply point it at your index settings file, which should contain both the index settings and the type mappings as usual, and at your bulk payload file, which should contain your example data.
There are more instructions in the elasticsearch-seed GitHub repository.
We can even use it in E2E and integration test scenarios running in CI pipelines.

How do I configure docker compose to expose ports correctly?

I'm using docker and docker compose to run a clojure and a node app, alongside postgres.
The project is contained in the following folder structure.
project/
-- app/
-- -- Dockerfile
-- frontend/
-- -- /Dockerfile
-- docker-compose.yml
The app/Dockerfile looks like so...
FROM clojure:latest
COPY . /usr/src/app
WORKDIR /usr/src/app
EXPOSE 9000
CMD ["lein", "run", "migrate", "&&","lein", "run"]
The frontend/Dockerfile looks like so ...
FROM node:5
COPY . /usr/src/app
WORKDIR /usr/src/app
RUN npm install
EXPOSE 8080
CMD ["npm", "start"]
And lastly the docker-compose.yml looks like...
frontend:
  image: bradcypert/node
  volumes:
    - ./frontend:/usr/src/frontend
  ports:
    - "8080:8080"
backend:
  image: bradcypert/clojure
  volumes:
    - ./app:/usr/src/backend
  ports:
    - "9000:9000"
  links:
    - postgres
postgres:
  image: postgres
  ports:
    - "5432:5432"
The backend is failing for a separate reason, but the frontend seems to be running successfully. That being said, I'm unable to hit localhost:8080 and see the app. What do I need to do to make this happen?
Thanks in advance.
Just to clarify, the command being run is docker-compose up
With boot2docker (on Mac or Windows), to access any port from localhost, you have to configure your VirtualBox VM in order to port-forward that port from the VM into the host.
Your port mappings are correct, but you still need to make visible to your host (Mac) the one port you want to access from localhost (your Mac).
See, for instance, "Using boot2docker to run Docker on a Mac or Windows" from Andrew Odewahn.
That way, you don't have to find out what the IP of your machine is.
(Which you can see with docker-machine ls followed by docker-machine ip <name>)
