Go mod fails on Raspberry Pi + Docker Compose - docker

I have two microservices written in Go, each with its own Dockerfile that looks like this:
# Build
FROM golang:alpine AS build
# Destination of copy
WORKDIR /build
# Download dependencies
COPY go.mod ./
COPY go.sum ./
RUN go mod download
# Copy source code
COPY . ./
# Build
RUN go build -o bin ./cmd/main.go
# Deploy
FROM alpine
RUN adduser -S -D -H -h /app appuser
USER appuser
COPY --from=build /build/bin /app/
WORKDIR /app
EXPOSE 8080
CMD ["./bin"]
If I run docker build on them, everything works fine, so I made a compose.yaml file to run both microservices (and other services), which looks like this:
services:
  redis:
    image: redis:alpine
    ports:
      - "6379:6379"
  postgres:
    image: postgres:alpine
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
    volumes:
      - ../loquegasto-backend/migrations/core.sql:/docker-entrypoint-initdb.d/1-core.sql
      - ../loquegasto-backend/migrations/core.categories.sql:/docker-entrypoint-initdb.d/2-core.categories.sql
      - ../loquegasto-backend/migrations/core.transactions.sql:/docker-entrypoint-initdb.d/3-core.transactions.sql
      - ../loquegasto-backend/migrations/core.users.sql:/docker-entrypoint-initdb.d/4-core.users.sql
      - ../loquegasto-backend/migrations/core.wallets.sql:/docker-entrypoint-initdb.d/5-core.wallets.sql
    healthcheck:
      test: ["CMD-SHELL", "pg_isready"]
      interval: 10s
      timeout: 5s
      retries: 3
  lqg-backend:
    build: ../loquegasto-backend
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_HOST=postgres
      - POSTGRES_PORT=5432
      - JWT_SECRET=${JWT_SECRET}
      - PORT=8080
    depends_on:
      postgres:
        condition: service_healthy
  lqg-telegram:
    build: ../loquegasto-telegram
    links:
      - "redis"
      - "lqg-backend"
    environment:
      - JWT_SECRET=${JWT_SECRET}
      - TELEGRAM_TOKEN=${TELEGRAM_TOKEN}
      - BACKEND_URL=http://lqg-backend:8080
      - EXPORTER_FILE_PATH=lqg-export
      - REDIS_HOST=redis:6379
      - PORT=8080
    depends_on:
      - redis
      - lqg-backend
This runs perfectly on macOS using docker compose up --build -d, but on a Raspberry Pi 4 it always breaks while running go mod download, throwing the following message:
> [loquegasto-infra-lqg-backend build 5/7] RUN go mod download:
#0 1.623 go: github.com/Masterminds/squirrel@v1.5.1: Get "https://proxy.golang.org/github.com/%21masterminds/squirrel/@v/v1.5.1.mod": dial tcp: lookup proxy.golang.org on [2800:810:100:1:200:115:192:28]:53: dial udp [2800:810:100:1:200:115:192:28]:53: connect: cannot assign requested address
------
failed to solve: executor failed running [/bin/sh -c go mod download]: exit code: 1
Sometimes it breaks on only one dependency, sometimes on all of them; sometimes in one microservice and sometimes in the other.
Any tips?
Thanks!

Well, after some research I found that my Raspberry Pi had an empty DNS server setting, so I pointed it at Google's DNS servers (8.8.4.4 and 8.8.8.8) and it worked perfectly.
Basically, I edited the file /etc/dhcpcd.conf and changed the line static domain_name_servers= to static domain_name_servers=8.8.4.4 8.8.8.8.
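For reference, the relevant part of /etc/dhcpcd.conf is a one-line edit (followed by a reboot, or a restart of the dhcpcd service, so it takes effect):
# before
static domain_name_servers=
# after
static domain_name_servers=8.8.4.4 8.8.8.8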
Thank you all!

Related

docker-compose: how to automatically propagate changes (both frontend and backend)?

In Docker Compose, we have two services (a backend in Flask and a frontend in React) running at the same time in different directories. What are best practices for automatically updating the frontend service or backend service when a change is made to the respective code?
In our case, we have:
frontend/
  index.html
  docker-compose.yml
  Dockerfile
  src
    App.js
    index.js
    ..
And our backend is:
backend/
  app.py
  Dockerfile
  docker-compose.yml
This is our docker-compose.yml file:
version: '3.8'
services:
  frontend:
    image: node:alpine
    build:
      context: ../frontend
      dockerfile: ../frontend/Dockerfile
    command: npm start
    depends_on:
      - database # dont start until the database is up
      - app
    ports:
      - 3000:3000
    volumes:
      - .:/frontend
  app:
    image: python:3.9
    build:
      context: .
      dockerfile: ./Dockerfile
    command: app.py
    depends_on:
      - database # dont start until the database is up
    ports:
      - 8080:8080
    environment:
      - PGPASSWORD=magical_password
      - POSTGRESQL_PASSWORD=magical_password
      - POSTGRESQL_HOST=backend-database-1
      - POSTGRESQL_USER_NAME=unicorn_user
      - LOCAL_ENVIRONMENT=True
      - FLASK_ENV=development
      - REPLICATE_API_TOKEN
    volumes:
      - .:/app
  database:
    image: "postgres" # use latest official postgres version
    env_file:
      - database.env # configure postgres
    volumes:
      - database-data:/var/lib/postgresql/data/ # persist data even if container shuts down
      - ./schema.sql:/docker-entrypoint-initdb.d/schema.sql
    ports:
      - "5432:5432"
volumes:
  database-data: # named volumes can be managed easier using docker-compose
Typically, we reload the app (on change) almost instantly via a volume in the volumes section. This approach correctly updates the backend service when the backend code is changed, but not the frontend service. Also, we have two docker-compose files, one in frontend and one in backend, which we hope to somehow consolidate.
Edit: These are the logs that work for the backend (app_1 is the backend) but do not work for the frontend:
app_1 | * Detected change in '/app/app.py', reloading
app_1 | environ({'PATH': '/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin', 'HOSTNAME': '***', 'PGPASSWORD': '***', 'POSTGRESQL_PASSWORD': 'magical_password', 'POSTGRESQL_HOST': 'backend-database-1', 'POSTGRESQL_USER_NAME': '***', 'LOCAL_ENVIRONMENT': 'True', 'FLASK_ENV': 'development', 'LANG': 'C.UTF-8', 'GPG_KEY': '***', 'PYTHON_VERSION': '3.9.13', 'PYTHON_PIP_VERSION': '22.0.4', 'PYTHON_SETUPTOOLS_VERSION': '58.1.0', 'PYTHON_GET_PIP_URL': 'https://github.com/pypa/get-pip/raw/6ce3639da143c5d79b44f94b04080abf2531fd6e/public/get-pip.py', 'PYTHON_GET_PIP_SHA256': '***', 'HOST': '0.0.0.0', 'PORT': '8080', 'HOME': '/root', 'KMP_INIT_AT_FORK': 'FALSE', 'KMP_DUPLICATE_LIB_OK': 'True', 'WERKZEUG_SERVER_FD': '3', 'WERKZEUG_RUN_MAIN': 'true'})
app_1 | * Restarting with stat
app_1 | * Tip: There are .env or .flaskenv files present. Do "pip install python-dotenv" to use them.
app_1 | * Debugger is active!
app_1 | * Debugger PIN: 203-417-897
Edit 2: We followed the link suggested in the comments. We attempted setting both WATCHPACK_POLLING and CHOKIDAR_USEPOLLING to “true” but no luck. And we refactored our docker-compose file to be outside the directories like so:
docker-compose.yml
frontend/
  index.html
  Dockerfile
  src
    App.js
    index.js
    ..
backend/
  app.py
  Dockerfile
Here is the new docker-compose
version: '3.8'
services:
  frontend:
    image: node:alpine
    build:
      context: ./frontend
      cache_from:
        - node:alpine
      dockerfile: ./Dockerfile
    command: npm start
    depends_on:
      - database # dont start until the database is up
      - app
    ports:
      - 3000:3000
    environment:
      - CHOKIDAR_USEPOLLING="true"
    volumes:
      - /app/node_modules
      - ./frontend:/app
  app:
    image: python:3.9
    build:
      context: ./backend
      cache_from:
        - python:3.9
      dockerfile: ./Dockerfile
    command: backend/app.py
    depends_on:
      - database # dont start until the database is up
    ports:
      - 8080:8080
    environment:
      - PGPASSWORD=magical_password
      - POSTGRESQL_PASSWORD=magical_password
      - POSTGRESQL_HOST=backend-database-1
      - POSTGRESQL_USER_NAME=unicorn_user
      - LOCAL_ENVIRONMENT=True
      - FLASK_ENV=development
      - REPLICATE_API_TOKEN
    volumes:
      - .:/app
  database:
    image: "postgres" # use latest official postgres version
    env_file:
      - backend/database.env # configure postgres
    volumes:
      - database-data:/var/lib/postgresql/data/ # persist data even if container shuts down
      - ./schema.sql:/backend/docker-entrypoint-initdb.d/schema.sql
    ports:
      - "5432:5432"
volumes:
  database-data: # named volumes can be managed easier using docker-compose
  app:
And here is our Dockerfile for the frontend
FROM node:alpine
RUN mkdir -p /frontend
WORKDIR /frontend
# We copy just the package.json first to leverage Docker cache
COPY package.json /frontend
RUN npm install --legacy-peer-deps
COPY . /frontend
# Bind to all network interfaces so that it can be mapped to the host OS
ENV HOST=0.0.0.0 PORT=3000
EXPOSE ${PORT}
CMD ["npm", "start"]
and for the backend
FROM python:3.9
# We copy just the requirements.txt first to leverage Docker cache
COPY ./requirements.txt /app/requirements.txt
WORKDIR /app
RUN pip3 install -r requirements.txt
COPY . /app
ENTRYPOINT [ "python" ]
# Bind to all network interfaces so that it can be mapped to the host OS
ENV HOST=0.0.0.0 PORT=8080
EXPOSE ${PORT}
# This runs the app in the container
CMD [ "app.py" ]
The backend still hot-reloads: every time we make a change, it is detected, picked up, and reflected in docker-compose immediately. But the frontend requires a restart with the command docker-compose down --volumes && docker-compose build --no-cache && docker-compose up, and the output we get from docker-compose shows no logs. It's like docker-compose can't see the changes.
Edit 3: Any help would be much appreciated!
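One thing that may be worth checking (a guess, not something covered above): with the list form of environment, the quotation marks in CHOKIDAR_USEPOLLING="true" are passed through as part of the value the container sees, so the unquoted form is what polling-based watchers normally expect, roughly:
    environment:
      - CHOKIDAR_USEPOLLING=true # unquoted, so the container sees true rather than "true"
      - WATCHPACK_POLLING=true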

I'm getting `ERR_EMPTY_RESPONSE` in Docker Compose even though the two individual containers work when run separately

So I have a basic frontend and backend. The backend relies on some environment variables and this is my docker-compose.yml.
version: "3.9"
services:
backend:
env_file:
- .env
build:
context: ./backend
container_name: fastapi-api
ports:
- 80:80
frontend:
build:
context: ./frontend
container_name: vue-ui
ports:
- 8080:8080
links:
- backend
This gives me ERR_EMPTY_RESPONSE when I go to http://127.0.0.1:8080/; however, when I run the individual Dockerfiles for my frontend and backend, everything goes smoothly.
My frontend
FROM node:lts-alpine
# install simple http server for serving static content
RUN npm install -g http-server
# make the 'frontend' folder the current working directory
WORKDIR /frontend
# copy both 'package.json' and 'package-lock.json' (if available)
COPY package*.json ./
# install project dependencies
RUN npm install
# copy project files and folders to the current working directory (i.e. 'app' folder)
COPY . .
# build app for production with minification
RUN npm run build
EXPOSE 8080
CMD [ "http-server", "dist" ]
My backend
FROM tiangolo/uvicorn-gunicorn:python3.8
LABEL maintainer="Sebastian Ramirez <tiangolo@gmail.com>"
WORKDIR /backend
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
COPY . .
EXPOSE 80
This is what I see from running docker ps.
This is what's happening: frontend requests are being sent to the wrong place.
I want it to go here.
So requests should go to port 80, not port 8000.
This is what I see from dev tools.
However, this is my code:
axios
  .post(`http://127.0.0.1:80/city/`, {
    city_name: this.current_city
  })
Where are the extra 0s coming from?
This is what happens when I run the two containers separately.
Looking at the docker ps output, I would guess that you have accidentally switched the ports for the backend and frontend in the configuration: the frontend has unmapped port 80 and the backend has unmapped port 8080.
Try this one:
version: "3.9"
services:
backend:
env_file:
- .env
build:
context: ./backend
container_name: fastapi-api
ports:
- 8080:8080
frontend:
build:
context: ./frontend
container_name: vue-ui
ports:
- 80:80
links:
- backend
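After rebuilding, a quick way to sanity-check which container port actually ended up mapped where (a sketch using the container names defined above):
docker-compose up -d --build
docker ps --format 'table {{.Names}}\t{{.Ports}}'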

Executable not found in docker compose /bin/sh: 1: main: not found

I am trying to run my RESTful API in Docker, but I am having an issue with my Go executable: it is always not found. Here is my Dockerfile
# Start from golang base image
FROM golang:1.15.2
#Set ENV
ENV DB_HOST=fullstack-mysql \
DB_DRIVER=mysql \
DB_USER=root \
DB_PASSWORD=root \
DB_NAME=link_aja \
DB_PORT=3306 \
APP_NAME=golang-linkaja \
CGO_ENABLED=0
# Copy the source from the current directory to the working Directory inside the container
COPY . /usr/src/${APP_NAME}
# Move to working directory
WORKDIR /usr/src/${APP_NAME}
# install dependencies
RUN go mod download
# Build the application
RUN go build -o ${APP_NAME}
# Expose port 3000 to the outside world
EXPOSE 3000
#Command to run the executable
CMD ${APP_NAME}
And here is my docker-compose.yml
version: '3'
services:
  app:
    container_name: golang-linkaja
    build: .
    ports:
      - 3000:3000
    restart: on-failure
    volumes:
      - api:/usr/src/${APP_NAME}
    depends_on:
      - fullstack-mysql
    networks:
      - fullstack
  fullstack-mysql:
    image: mysql:5.7
    container_name: full_db_mysql
    ports:
      - 3306:3306
    environment:
      - MYSQL_ROOT_HOST=${DB_HOST}
      - MYSQL_USER=${DB_USER}
      - MYSQL_PASSWORD=${DB_PASSWORD}
      - MYSQL_DATABASE=${DB_NAME}
      - MYSQL_ROOT_PASSWORD=${DB_PASSWORD}
    volumes:
      - database_mysql:/var/lib/mysql
    networks:
      - fullstack
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    container_name: phpmyadmin_container
    depends_on:
      - fullstack-mysql
    environment:
      - PMA_HOST=fullstack-mysql #DB_HOST env must be the same with this
      - PMA_USER=${DB_USER}
      - PMA_PORT=${DB_PORT}
      - PMA_PASSWORD=${DB_PASSWORD}
    ports:
      - 9090:80
    restart: always
    networks:
      - fullstack
volumes:
  api:
  database_mysql:
# Networks to be created to facilitate communication between containers
networks:
  fullstack:
    driver: bridge
Everything works correctly except for the Go app itself; here is the error that I get
golang-linkaja | /bin/sh: 1: golang-linkaja: not found
Could I get any help please? I'm new and still learning Docker.
Thanks in advance!
Update: here are the other things I've tried:
1. Changing CMD to CMD ["./usr/src/${APP_NAME}/${APP_NAME}"], which returns the error golang-linkaja | sh: 1: /usr/src/golang-linkaja/golang-linkaja: not found
2. Changing to CMD [ "./golang-linkaja" ] and CMD [ "./${APP_NAME}" ], which returns the error ERROR: for golang-linkaja Cannot start service app: OCI runtime create failed: container_linux.go:370: starting container process caused: exec: "./golang-linkaja": stat ./golang-linkaja: no such file or directory: unknown
You need to remove the volume - api:/usr/src/${APP_NAME} from your compose file. You have already copied what you need in your Dockerfile; the volume (defined in the compose file) is overwriting all your data, and hence your built binary is not found.
Just remove the volume, then rebuild and start the container again, and change your CMD to CMD [ "./${APP_NAME}" ].
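A minimal sketch of what the adjusted app service could look like (the same service as in the compose file above, just with the api volume removed):
  app:
    container_name: golang-linkaja
    build: .
    ports:
      - 3000:3000
    restart: on-failure
    depends_on:
      - fullstack-mysql
    networks:
      - fullstack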
In your Dockerfile, you try to run your executable without using the ./ prefix, so the OS searches for the executable in the system folders and cannot find it. Add ./ to the beginning of your CMD or use the absolute path of the executable.
#Command to run the executable
CMD ./${APP_NAME}
or
#Command to run the executable
CMD /usr/src/${APP_NAME}/${APP_NAME}

Cannot initiate RabbitMQ when dockerize my Flask - Celery - RabbitMQ application

I am new to Docker and I want to dockerize my Flask-Celery-RabbitMQ app, which connects to a SQLite database.
This is my file tree:
--dockerized_app
--celery_app
--app
--__init__.py
--routes.py
--modules.py
--celery_endpoints.py
run.py
config.py
Dockerfile
requirements.txt
docker-compose.yml
my_db.db
If I want to run this app without Docker I usually do:
PATH=$PATH:/usr/local/sbin
rabbitmq-server
export FLASK_APP=run.py
flask run
celery -A run.celery worker --loglevel=info
Dockerfile
FROM python:3
ADD requirements.txt /app/requirements.txt
ADD ./celery_app/ /app/
WORKDIR /app/
RUN pip install -r requirements.txt
ENTRYPOINT celery -A celery_app worker --loglevel=info
docker-compose.yml
version: '3'
services:
  rabbit:
    hostname: rabbit
    image: rabbitmq:latest
    environment:
      - RABBITMQ_DEFAULT_USER=admin
      - RABBITMQ_DEFAULT_PASS=mypass
    ports:
      - "5673:5672"
  worker:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - .:/app
    links:
      - rabbit
    depends_on:
      - rabbit
requirements.txt
Flask==1.0.2
Flask-SQLAlchemy==2.3.2
SQLAlchemy==1.2.7
celery==4.2.1
I run docker-compose build and the image is created. Celery starts well, but RabbitMQ doesn't, and I get this error:
[2018-07-25 14:51:12,218: ERROR/MainProcess] consumer: Cannot connect to amqp://guest:**@XXX.X.X.X:5672//: [Errno 111] Connection refused.
I think that the problem is that I am not declaring
PATH=$PATH:/usr/local/sbin
which is necessary to start RabbitMQ, but I don't know how to do it.
Also, I don't know if Flask is starting well.
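As a side note, and only as a sketch (not from the original post): inside the Compose network the worker reaches the broker through the rabbit service name and the container-side port rather than localhost, so a Celery app wired to the compose file above would typically be created along these lines (the module name is hypothetical):
from celery import Celery

# "rabbit" is the compose service name; 5672 is the container-side port
celery = Celery("celery_app", broker="amqp://admin:mypass@rabbit:5672//")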

docker-compose running 2 services with dockerfiles, The task "phx.server" could not be found, main.go: no such file or directory

I have an issue running my docker-compose.yml file with 4 services: my Go microservice, a Phoenix web server, and the MongoDB and Redis images.
I specified in both my Phoenix and Go Dockerfiles to change the working directory before running the services. I currently get the following errors when I do docker-compose up.
The task "phx.server" could not be found
main.go: no such file or directory
Here is my Dockerfile.go.development:
# base image golang to start with
FROM golang:latest
# create app folder
RUN mkdir /goApp
COPY ./genesys-api /goApp
WORKDIR /goApp/cmd/genesys-server
# install dependencies
RUN go get gopkg.in/redis.v2
RUN go get github.com/gorilla/handlers
RUN go get github.com/dgrijalva/jwt-go
RUN go get github.com/gorilla/context
RUN go get github.com/gorilla/mux
RUN go get gopkg.in/mgo.v2/bson
RUN go get github.com/graphql-go/graphql
# run the Go server in *dev* mode on port 8080
CMD go run main.go
Here is my Dockerfile.phoenix.development:
# base image elixir to start with
FROM elixir:1.6
# install hex package manager
RUN mix local.hex --force
RUN mix local.rebar --force
# install the latest phoenix
RUN mix archive.install https://github.com/phoenixframework/archives/raw/master/phx_new.ez --force
# create app folder
RUN mkdir /app
COPY ./my_app /app
WORKDIR /app
# install dependencies
RUN mix deps.get
# run phoenix in *dev* mode on port 4000
CMD mix phx.server
Here is my docker-compose.yml file:
version: '3.6'
services:
  go:
    build:
      context: .
      dockerfile: Dockerfile.go.development
    ports:
      - 8080:8080
    volumes:
      - .:/goApp
    depends_on:
      - db
      - redis
  phoenix:
    # tell docker-compose which Dockerfile it needs to build
    build:
      context: .
      dockerfile: Dockerfile.phoenix.development
    # map the port of phoenix to the local dev port
    ports:
      - 4000:4000
    # mount the code folder inside the running container for easy development
    volumes:
      - .:/app
    # make sure we start mongodb when we start this service
    # links:
    #   - db
    depends_on:
      - db
      - redis
    environment:
      GOOGLE_CLIENT_ID: ${GOOGLE_CLIENT_ID}
      GOOGLE_CLIENT_SECRET: ${GOOGLE_CLIENT_SECRET}
      FACEBOOK_CLIENT_ID: ${FACEBOOK_CLIENT_ID}
      FACEBOOK_CLIENT_SECRET: ${FACEBOOK_CLIENT_SECRET}
  db:
    container_name: db
    image: mongo:latest
    volumes:
      - ./data/db:/data/db
    ports:
      - 27017:27017
  redis:
    container_name: redis
    image: redis:latest
    ports:
      - "6379:6379"
    volumes:
      - ./data/redis:/data/redis
    entrypoint: redis-server
    restart: always
For the error related to the Go microservice: since the go binary is not found in PATH, you may need to set the GOPATH env variable via your Dockerfile for Go:
export GOPATH=
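As a syntax note, environment variables in a Dockerfile are normally set with ENV rather than a shell export; the values below are only an illustration, not taken from the answer above (the official golang image already sets GOPATH=/go by default):
# illustrative values; adjust to the project layout
ENV GOPATH=/go
ENV PATH="${GOPATH}/bin:${PATH}"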
