I'm a newbie to Docker and am running into problems with a multi-container Docker app that I inherited from another developer. The Docker setup works fine on a cloud server; however, when I try to install and run it locally, I get errors.
When I run the "docker ps" command on the cloud server, here's what I'm getting:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c1fbb8968c89 app_eserver "nginx -g 'daemon of…" 3 weeks ago Up 34 hours 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp server
85be26cfd761 auction "/usr/local/bin/uwsg…" 2 years ago Up 34 hours 8092/tcp auction-backend
6d2c1ad52ef0 redis:4-alpine "docker-entrypoint.s…" 2 years ago Up 3 weeks 6379/tcp redis
94417f94d374 postgres:10.0-alpine "docker-entrypoint.s…" 2 years ago Up 3 weeks 5432/tcp db
As I understand it, the above means there are 4 containers running the app. Within the directory structure, there is a docker-compose.yml file and 2 Dockerfiles (one in the nginx folder and the other in the folder with the code). There are no Dockerfiles for redis and postgres.
When I try to build using the docker-compose.yml file, I get the following error:
ERROR: The image for the service you're trying to recreate has been removed. If you continue, volume data could be lost. Consider backing up your data before continuing.
Continue with the new image? [yN]y
Pulling backend (auction:)...
ERROR: pull access denied for auction, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
When I try to build the app Dockerfile, it throws a build error.
When I try to build the nginx Dockerfile, it builds, but the container exits immediately after starting.
I have read a whole bunch on the topic, but I'm unable to figure out how to run this locally on my machine. Any pointers would be really appreciated.
Here's my docker-compose.yml file:
eserver:
  container_name: server
  build: nginx
  restart: always
  ports:
    - "80:80"
    - "443:443"
  links:
    - backend
  volumes_from:
    - backend

backend:
  container_name: auction-backend
  image: auction
  hostname: auction-backend
  restart: always
  env_file: .env
  external_links:
    - db
    - redis
  volumes:
    - appmedia:/app/auction/media
Here's the nginx Dockerfile:
FROM nginx:1.15-alpine
RUN rm /etc/nginx/conf.d/*
COPY auction.conf /etc/nginx/conf.d/
Here's the app Dockerfile:
FROM python:2.7-alpine
RUN apk update && apk add \
postgresql-dev \
python3-dev \
gcc \
jpeg-dev \
zlib-dev \
musl-dev \
linux-headers && \
mkdir app
WORKDIR /app/
# Ensure that Python outputs everything that's printed inside
# the application rather than buffering it.
ENV PYTHONUNBUFFERED 1
ADD . /app
RUN if [ -s requirements.txt ]; then pip install -r requirements.txt; fi
EXPOSE 8092
VOLUME /app/auction/assets
ENTRYPOINT ["/usr/local/bin/uwsgi", "--ini", "/app/uwsgi.ini"]
Here's my directory structure:
├── app
│ ├── docker-compose.yml
│ └── nginx
│ ├── Dockerfile
│ └── auction.conf
├── auction-master
│ ├── Dockerfile
│ ├── LICENSE.txt
│ ├── README.rst
│ ├── auction
│ │ ├── accounts
│ │ ├── auction
│ │ ├── common
│ │ ├── django_messages
│ │ ├── employer
│ │ ├── log_activity
│ │ ├── manage.py
│ │ ├── notifications
│ │ ├── provider
│ │ ├── static
│ │ ├── templates
│ │ ├── admin
│ │ └── unidecode
│ ├── requirements
│ │ ├── base.txt
│ │ ├── local.txt
│ │ ├── production.txt
│ │ └── test.txt
│ ├── requirements.txt
│ └── uwsgi.ini
According to your error message, it's possible that Compose is trying to pull a private image.
The reason everything works normally on the cloud but not locally may be that Docker on the cloud server is logged into an account that has access to that image.
If you docker login locally with the same account that is logged in on the cloud server, everything should be up and running again.
I was able to run it locally simply by downloading the images themselves.
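For reference, both routes sketched as commands; the registry address and file paths here are illustrative assumptions, not taken from the question:

# Option 1: log in locally with the same account the cloud server uses
docker login registry.example.com        # hypothetical registry

# Option 2: export the image on the cloud server and load it locally
docker save auction -o auction.tar       # run on the cloud server
docker load -i auction.tar               # run locally after copying the tar down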
Related
I'm running into an issue where I can't build a Dockerfile that includes multiple proto files (server and text). The server proto is within the Dockerfile dir, but the text proto is within the Dockerfile's parent, so I'm running the Docker build from the parent dir so that the COPY can reach the text proto.
The Docker build complains with proto/text.proto: File not found., even though I COPY proto/text.proto into the same location as server/proto/server.proto.
Here are all my files:
Dockerfile
FROM --platform=linux/x86_64 golang:1.19.3-bullseye
# Install grpc
RUN go install google.golang.org/grpc/cmd/protoc-gen-go-grpc@v1.2 && \
    go install google.golang.org/protobuf/cmd/protoc-gen-go@v1.28
WORKDIR /app
COPY server/. /app
COPY proto/text.proto /app/proto/text.proto
# Install protoc and zip system library
RUN apt-get update && apt-get install -y zip && \
mkdir /opt/protoc && cd /opt/protoc && wget https://github.com/protocolbuffers/protobuf/releases/download/v3.7.0/protoc-3.7.0-linux-x86_64.zip && \
unzip protoc-3.7.0-linux-x86_64.zip
# Copy the grpc proto file and generate the go module
RUN /opt/protoc/bin/protoc --go_out=/app/proto --proto_path=/app/proto --go_opt=paths=source_relative --go-grpc_out=/app/proto --go-grpc_opt=paths=source_relative /app/proto/text.proto /app/proto/server.proto
EXPOSE 5051
RUN go build -o /server
ENTRYPOINT ["/server"]
Dir Tree
text
├── admin
│ ├── Dockerfile
│ ├── app.js
│ ├── package.json
│ └── web
│ ├── html
│ │ └── index.html
│ └── resources
├── compose.yaml
├── db
│ ├── Dockerfile
│ ├── main.go
│ ├── proto
│ │ ├── db.pb.go
│ │ ├── db.proto
│ │ └── db_grpc.pb.go
│ └── text.db
├── go.mod
├── go.sum
├── proto
│ ├── text.pb.go
│ └── text.proto
└── server
├── Dockerfile
├── main.go
├── proto
│ ├── server.pb.go
│ ├── server.proto
│ └── server_grpc.pb.go
└── text
├── text.go
└── text_test.go
I'm able to run the following protoc command in the root text dir:
protoc --go_out=. --go_opt=paths=source_relative --go-grpc_out=. --go-grpc_opt=paths=source_relative proto/text.proto db/proto/db.proto server/proto/server.proto
And run the server locally, but I'm not able to build my Docker image:
CMD
docker build -f server/Dockerfile -t server .
Error
=> ERROR [7/8] RUN /opt/protoc/bin/protoc --go_out=/app/proto --proto_path=/app/proto --go_opt=paths=source_relative --go-grpc_out=/app/proto --go-grpc_opt=paths=source_relative 0.4s
------
> [7/8] RUN /opt/protoc/bin/protoc --go_out=/app/proto --proto_path=/app/proto --go_opt=paths=source_relative --go-grpc_out=/app/proto --go-grpc_opt=paths=source_relative /app/proto/text.proto /app/proto/server.proto:
#11 0.427 proto/text.proto: File not found.
#11 0.429 server.proto: Import "proto/text.proto" was not found or had errors.
#11 0.431 server.proto:25:5: "text.Status" seems to be defined in "text.proto", which is not imported by "server.proto". To use it here, please add the necessary import.
------
executor failed running [/bin/sh -c /opt/protoc/bin/protoc --go_out=/app/pro
text/server/proto/server.proto
syntax="proto3";
package server;
import "proto/text.proto";
option go_package = "github.com/amb1s1/text/server/proto/server";
message SendMessageRequest {
string token = 1;
string phone = 2;
string message = 3;
bool dry_run = 4;
};
message SendMessageResponse {
text.Status status = 1;
};
service Text {
// SendMessage sends an SMS message.
rpc SendMessage(SendMessageRequest) returns (SendMessageResponse) {}
}
text/proto/text.proto
syntax="proto3";
package text;
option go_package = "github.com/amb1s1/text/proto";
enum Status {
UNKNOW = 0;
OK = 1;
TOKENS_EXISTS = 2;
TOKEN_NOT_FOUND = 3;
FAILED_NOT_SENT = 4;
DRY_RUN_OK = 5;
ZERO_BALANCE = 6;
WRONG_TOKEN = 7;
}
As per the comments: within your Docker image you have the directory structure:
/app/proto/server.proto
/app/proto/text.proto
server.proto imports text.proto with import "proto/text.proto".
This means that protoc will look for a file called proto/text.proto within the import path. You specified --proto_path=/app/proto as an argument to protoc, meaning that protoc will check for /app/proto/proto/text.proto, which does not exist (hence the issue). To fix this, remove the --proto_path=/app/proto (so protoc uses the working folder) or specify --proto_path=/app.
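With the latter, the RUN step could look like this (a sketch; the output roots are moved to /app as well so the generated files still land in /app/proto):

# /app is the import root, so "proto/text.proto" resolves to /app/proto/text.proto
RUN /opt/protoc/bin/protoc --proto_path=/app \
    --go_out=/app --go_opt=paths=source_relative \
    --go-grpc_out=/app --go-grpc_opt=paths=source_relative \
    proto/text.proto proto/server.proto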
I have the following directory structure:
.
├── README.md
├── alice
├── docker
│ ├── compose-prod.yml
│ ├── compose-stage.yml
│ ├── compose.yml
│ └── dockerfiles
├── gauntlet
├── nexus
│ ├── Procfile
│ ├── README.md
│ ├── VERSION.txt
│ ├── alembic
│ ├── alembic.ini
│ ├── app
│ ├── poetry.lock
│ ├── pyproject.toml
│ └── scripts
nexus.Dockerfile
FROM python:3.10
RUN addgroup --system app && adduser --system --group app
WORKDIR /usr/src/pdn/nexus
COPY ../../nexus/pyproject.toml ../../nexus/poetry.lock* ./
ARG INSTALL_DEV=true
RUN bash -c "if [ $INSTALL_DEV == 'true' ] ; then poetry install --no-root ; else poetry install --no-root --no-dev ; fi"
COPY ../../nexus .
RUN chmod +x scripts/run.sh
ENV PYTHONPATH=/usr/src/pdn/nexus
RUN chown -R app:app $HOME
USER app
CMD ["./run.sh"]
The relevant service in compose.yml looks like this:
services:
  nexus:
    platform: linux/arm64
    build:
      context: ../
      dockerfile: ./docker/dockerfiles/nexus.Dockerfile
    container_name: nexus
    restart: on-failure
    ports:
      - "8000:8000"
    volumes:
      - ../nexus:/usr/src/pdn/nexus:ro
    environment:
      - DATABASE_HOSTNAME=${DATABASE_HOSTNAME?}
    env_file:
      - .env
When I run compose up, I get the following error:
Error response from daemon: failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "./scripts/run.sh": permission denied: unknown
The service starts OK without the volume definition. I think it might be because of the location of nexus in relation to the Dockerfile or compose file, but the context is set to the parent.
I tried defining the volume as follows:
volumes:
  - ./nexus:/usr/src/pdn/nexus:ro
But I get a similar error; in this case run.sh is not found, and a directory named nexus gets created in the docker directory:
Error response from daemon: failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "./run.sh": stat ./run.sh: no such file or directory: unknown
Not sure what I'm missing.
I have two comments; I'm not sure if they will solve your issue.
First: although in your compose.yml you are allowed to reference parent directories, that's not the case in your Dockerfile. You can't COPY from outside the build context you specified in your compose.yml file (.., which resolves to your app root). So you should change these lines:
COPY ../../nexus/pyproject.toml ../../nexus/poetry.lock* ./
COPY ../../nexus .
to
COPY ./nexus/pyproject.toml ./nexus/poetry.lock* ./
COPY ./nexus .
Second: the volume overrides whatever is in /usr/src/pdn/nexus with the content of ../nexus. This renders everything you copied into /usr/src/pdn/nexus useless. That may not be an issue if the contents are the same, but whatever permissions you set on those files are gone. So if the contents are the same, the only remaining issue is your start script: you can put it in a separate directory outside /usr/src/pdn/nexus so that it won't be overridden, and don't forget to reference it correctly in the CMD.
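For example, the tail of nexus.Dockerfile could become something like this (a sketch; /usr/local/bin is just one possible location outside the bind mount):

# Keep the start script outside the bind-mounted source tree
COPY ./nexus/scripts/run.sh /usr/local/bin/run.sh
RUN chmod +x /usr/local/bin/run.sh
CMD ["/usr/local/bin/run.sh"]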
I have an Ansible repo with the following directory structure:
.
├── kube-init.sh
├── provisioning
│ ├── group_vars
│ │ └── all.yml
│ ├── roles
│ │ └── app_deploy
│ │ ├── files
│ │ │ └── secretfile
│ │ ├── tasks
│ │ │ ├── docker
│ │ │ │ └── Dockerfile
│ │ │ └── main.yml
│ │ └── vars
│ │ └── main.yaml
│ └── site.yml
└── README.md
I'm trying to define a docker build image task at provisioning/roles/app_deploy/tasks/main.yml as follows:
- name: Build image with build args
  vars:
    ansible_python_interpreter: /usr/bin/python3
  docker_image:
    name: app-name
    build:
      path: docker
      dockerfile: Dockerfile
      args:
        log_volume: /var/log/svm
        listen_port: 8080
    state: present
    source: build
I can't quite get the Dockerfile/context to be made available to the Ansible task. I've played around with various combinations of relative and absolute values for path and dockerfile.
I thought the most obvious choice would be to skip dockerfile and just set path to ./docker, which, to my surprise, didn't work.
I'm using Ansible with Python 3 and Docker SDK above v5.
A bit more context: I'm actually using Vagrant with the OpenStack plugin to provision the new compute instance with my Ansible tasks. The plugin copies the contents of the (Ansible) repo onto the target machine at /home/vagrant and runs the provisioning scripts from there.
I figured out the right path to pass to the docker module the following way. I added the debug task below to find out the current working directory that my app_deploy/tasks/main.yml runs from:
- name: Find out playbook's path
  shell: pwd
  register: playbook_path_output

- debug: var=playbook_path_output.stdout
This returned provisioning/roles. I realized that the path had to be relative to where site.yml is located (site.yml is what defines app_deploy as one of the roles).
So with that, I modified the docker module's path as follows:
- name: Build image with build args
  vars:
    ansible_python_interpreter: /usr/bin/python3
  docker_image:
    name: app-name
    build:
      path: ./roles/app_deploy/tasks/docker/
      args:
        log_volume: /var/log/svm
        listen_port: 8080
    state: present
    source: build
But per Zeitounator's comment, storing the Dockerfile and its supporting files inside a particular role directory might not be the right practice. If you do keep it inside the role, one way to avoid depending on the working directory entirely is sketched below.
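This sketch uses Ansible's role_path magic variable, which resolves to the role's own directory no matter where the play runs from (my suggestion, not part of the original answer):

- name: Build image with build args
  vars:
    ansible_python_interpreter: /usr/bin/python3
  docker_image:
    name: app-name
    build:
      # role_path resolves to .../provisioning/roles/app_deploy regardless of cwd
      path: "{{ role_path }}/tasks/docker"
      args:
        log_volume: /var/log/svm
        listen_port: 8080
    state: present
    source: build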
I have a project that includes multiple Dockerfiles.
The tree looks like this:
.
├── app1
│ ├── Dockerfile
│ ├── app.py
│ └── huge_modules/
├── app2
│ ├── Dockerfile
│ ├── app.py
│ └── huge_modules/
├── common
│ └── my_lib.py
└── deploy.sh
To build my application, common/ is necessary, and we have to COPY it inside the Dockerfile.
However, a Dockerfile cannot COPY files from its parent directory.
To be precise, it is possible if we run docker build with the -f option from the project root.
But I would not like to do this, because the build context becomes unnecessarily large:
when building app1, I don't want to include app2/huge_modules/ in the build context (and the same when building app2).
So I prepare a build script in each app directory, like this:
#!/bin/sh
cd "$(dirname "$0")"
cp ../common/* ./
docker build -t app1 .
But this solution seems ugly to me.
Is there a good solution for this case?
Build a base image containing your common library, and then build your two app images on top of that. You'll probably end up restructuring things slightly to provide a Dockerfile for your common files:
.
├── app1
│ ├── Dockerfile
│ ├── app.py
│ └── huge_modules/
├── app2
│ ├── Dockerfile
│ ├── app.py
│ └── huge_modules/
├── base
│ ├── Dockerfile
│ └── common
│ └── my_lib.py
└── deploy.sh
You start by building a base image:
docker build -t mybaseimage base/
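The base Dockerfile itself only needs to pull in the shared code; a minimal sketch (the Python base image is an assumption, since the question doesn't say what the apps run on):

FROM python:3-slim
WORKDIR /app
# Bake the shared library into the base layer so both app images inherit it
COPY common/ /app/common/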
And then your Dockerfile for app1 and app2 would start with:
FROM mybaseimage
One possible solution is to start the build process from the top directory with the -f flag you mentioned, dynamically generating the .dockerignore file.
That is, let's say you are currently building app1. You would first create a .dockerignore file in the top directory with the content app2, then run the build. After the build finishes, remove the .dockerignore file.
Now you want to build app2? No problem! Similarly, first dynamically generate a .dockerignore file with the content app1, build, and remove the file. Voila! A sketch of this is below.
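What that could look like as a small wrapper script (a sketch; the app names come from the tree above, and error handling is kept minimal):

#!/bin/sh
# Usage: ./build.sh app1   (or app2) -- hypothetical helper, run from the project root
set -e
app="$1"
[ "$app" = app1 ] && other=app2 || other=app1
echo "$other" > .dockerignore      # exclude the other app from the build context
docker build -f "$app/Dockerfile" -t "$app" .
rm .dockerignore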
I need to compile a Golang application for Linux, and I can't cross-compile under macOS because of another library, so I decided to compile within a Docker container. This is my first time using Docker.
This is my current directory structure:
.
├── Dockerfile
├── Gopkg.lock
├── Gopkg.toml
├── Vagrantfile
├── bootstrap.sh
├── src
│ ├── cmd
│ │ ├── build.bat
│ │ ├── build.sh
│ │ ├── config.json
│ │ ├── readme.md
│ │ └── server.go
│ ├── consumers.go
│ ├── endpoints
│ │ ├── json.go
│ │ ├── rate.go
│ │ ├── test_payment.go
│ │ └── wallet.go
│ ├── middleware
│ │ └── acl.go
│ ├── models.go
│ ├── network
│ │ └── network.go
│ ├── qr
│ │ └── qr.go
│ ├── router
│ │ └── router.go
│ ├── service
│ │ └── walletService.go
│ ├── services.go
│ ├── setup.sql
│ ├── store
│ │ └── wallet.go
│ ├── stores.go
│ └── wallet
│ ├── coin.go
│ └── ethereum.go
Dockerfile:
FROM golang:latest
WORKDIR /src/cmd
RUN ls
RUN go get github.com/go-sql-driver/mysql
RUN go build -o main ./src/cmd/server.go
CMD ["./main"]
I try to build the Docker image with:
docker build -t outyet .
This is the error it returns:
Sending build context to Docker daemon 5.505MB
Step 1/6 : FROM golang:latest
---> d0e7a411e3da
Step 2/6 : WORKDIR /src/cmd
---> Using cache
---> 0c4c2b99e294
Step 3/6 : run ls
---> Using cache
---> 23d3e491a2e1
Step 4/6 : RUN go get github.com/go-sql-driver/mysql
---> Running in f34447e51f6c
Removing intermediate container f34447e51f6c
---> 5731ab22ee43
Step 5/6 : RUN go build -o main server.go
---> Running in ecc48fcf5488
stat server.go: no such file or directory
The command '/bin/sh -c go build -o main server.go' returned a non-zero code: 1
How can I build my Golang application with Docker?
The error you're seeing is:
stat server.go: no such file or directory
The command '/bin/sh -c go build -o main server.go' returned a non-zero code: 1
In other words, it's telling you that the Go compiler can't find server.go, which you're trying to build. Update your Dockerfile to copy your local files into the Docker image:
FROM golang:latest
COPY . /go/src/workdir
WORKDIR /go/src/workdir
RUN go build -o main ./src/cmd/server.go
CMD ["/go/src/workdir/main"]
In your directory I spotted Gopkg.toml, which is a dependency manifest used by dep. dep uses a directory called vendor to hold all dependencies for your Go project. Before you build the Docker image, you need to ensure all dependencies are present with dep ensure:
$ dep ensure
$ docker build -t outyet .
You need to add your sources to your build container. Add this line to your Dockerfile (after FROM golang:latest, for example):
ADD src/ /src/cmd
Then you can access the files inside the container (/src/cmd): RUN ls should now return something.
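Putting that together with the original steps, the Dockerfile could look something like this (a sketch; note that because the contents of src/ are copied into /src/cmd, server.go ends up one level deeper, at cmd/server.go):

FROM golang:latest
# src/cmd/server.go ends up at /src/cmd/cmd/server.go with this ADD
ADD src/ /src/cmd
WORKDIR /src/cmd
RUN go get github.com/go-sql-driver/mysql
RUN go build -o main ./cmd/server.go
CMD ["./main"]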