Using a proxy with docker-compose on a server

When I run sudo docker-compose build I get:
Building web
Step 1/8 : FROM python:3.7-alpine
ERROR: Service 'web' failed to build: error parsing HTTP 403 response body: invalid character '<' looking for beginning of value: "<html><body><h1>403 Forbidden</h1>\nSince Docker is a US company, we must comply with US export control regulations. In an effort to comply with these, we now block all IP addresses that are located in Cuba, Iran, North Korea, Republic of Crimea, Sudan, and Syria. If you are not in one of these cities, countries, or regions and are blocked, please reach out to https://support.docker.com\n</body></html>\n\n"
I need to set a proxy for docker-compose to use during the build.
Things I have tried:
Looking at https://docs.docker.com/network/proxy/#configure-the-docker-client, I have tried setting ~/.docker/config.json:
{
  "proxies":
  {
    "default":
    {
      "httpProxy": "http://127.0.0.1:9278"
    }
  }
}
I tried the --env argument, and tried setting the proxy variables on the server, with no result.
I have also tried the approach from this link:
services:
  myservice:
    build:
      context: .
      args:
        - http_proxy
        - https_proxy
        - no_proxy
but with version: '3.6' I get:
Unsupported config option for services.web: 'args'
These settings seem to apply to Docker itself and not to docker-compose.
I also don't need any proxy on my local device (I don't want to lose portability if possible).
docker-compose version 1.23.1, build b02f1306
Docker version 18.06.1-ce, build e68fc7a

Judging by the 403 status code, you must be in one of the countries blocked by Docker. The only way around this is to configure a proxy for the Docker service:
[Service]
...
Environment="HTTP_PROXY=http://proxy.example.com:80/
HTTPS_PROXY=http://proxy.example.com:80/"
...
After that, you should run:
$ systemctl daemon-reload
$ systemctl restart docker
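For reference, a sketch of where the drop-in typically lives and how to verify it took effect (the path is the standard systemd drop-in location; the proxy address is a placeholder):
$ sudo mkdir -p /etc/systemd/system/docker.service.d
$ sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf <<'EOF'
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:80/"
Environment="HTTPS_PROXY=http://proxy.example.com:80/"
EOF
$ systemctl show --property=Environment docker   # after daemon-reload + restart, should list the proxies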

Include the proxy details for each service in the docker-compose.yml file; a sample configuration looks like the one below. Restart Docker and then run "docker-compose build" again. You can also run "docker-compose ps" to check whether all the services in the compose file are running successfully.
services:
  <service_name>:
    image:
    hostname:
    container_name:
    ports:
    environment:
      HTTP_PROXY: 'http://host:port'
      HTTPS_PROXY: 'http://host:port'
      NO_PROXY: 'localhost, *.test.lan'
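Note that environment: only affects containers at runtime; since the failure here happens during docker-compose build, the same values may also need to be passed as build arguments. A sketch, assuming Compose file format 3.x (where args belongs under build):
services:
  web:
    build:
      context: .
      args:
        HTTP_PROXY: 'http://host:port'
        HTTPS_PROXY: 'http://host:port'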

1: Edit /etc/resolv.conf on Linux, adding a nameserver IP at the top of the file:
nameserver {your DNS IP}
2: Use a proxy and create an account on Docker Hub (https://hub.docker.com/).
3: Log in to Docker:
sudo docker login
user:
password:
4: If you have a problem, try step 3 again.
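For step 1, the edit might look like this (8.8.8.8 is just a placeholder public resolver; use whichever DNS server works from your network):
# /etc/resolv.conf
nameserver 8.8.8.8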

You need to make an env file containing your proxy settings:
/usr/local/etc/myproxy.env
HTTP_PROXY=http://proxy.mydomain.net:3128
HTTPS_PROXY=http://proxy.mydomain.net:3128
Then run docker-compose with something like:
docker-compose -f /opt/docker-compose.yml --env-file /usr/local/etc/myproxy.env up
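Note that --env-file makes the variables available for substitution in the compose file rather than injecting them into containers automatically, so the compose file should reference them explicitly. A sketch:
services:
  web:
    build:
      context: .
      args:
        HTTP_PROXY: ${HTTP_PROXY}
        HTTPS_PROXY: ${HTTPS_PROXY}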

Related

Docker run yields a different result than docker-compose

I have a docker compose file with an image that runs an npm install.
services:
  test:
    image: company.com/myImage:1.0.2
    environment:
      - HTTP_PROXY=http://proxy.com:8080
      - HTTPS_PROXY=http://proxy.com:8080
Running docker-compose -f ./docker/docker.build.yaml up fails during the install with some kind of DNS issue:
npm verb stack FetchError: request to https://company.com/artifactory/api/npm/npm-remote/lodash.merge failed, reason: getaddrinfo EAI_AGAIN company.com
However, running docker run company.com/myImage:1.0.2 works:
npm http fetch GET 200 https://company.com/artifactory/api/npm/npm-remote/lodash.merge/-/lodash.merge-4.6.2.tgz 95ms
My company uses a proxy to connect to the internet, so my local environment variables contain some proxy settings. I tried hardcoding those env vars into the docker compose file, but the result stayed the same.
What am I missing?
Edit: added the env vars I tested with to the compose file above.
By default, the docker CLI attaches the container to Docker's default network, whereas docker-compose was creating its own docker_default network. My solution lay in specifying the network mode in the compose file:
services:
  test:
    image: whatever
    network_mode: "host"

docker-compose cannot resolve DNS

The problem is that docker compose cannot build the image, failing on RUN npm ci. After hours of debugging, I isolated the problem and pinned it down to this minimal setup:
My docker-compose.yml
version: '3.8'
services:
  myapp:
    build:
      dockerfile: Dockerfile
      context: .
      target: development
    command: sleep Infinity
My Dockerfile
FROM node:18-alpine AS development
RUN ping google.com
When I run docker compose -f docker-compose.yml up -d --build, the build fails with an error.
What I tried so far
In the Dockerfile, replace ping google.com with ping <real-ip>. ✅ It works, so I assume it's a DNS problem.
Add dns: 8.8.8.8 to docker-compose.yml. ❌ No luck.
Run as superuser: sudo docker compose …. ❌ No luck.
Build the image from the Dockerfile without compose, using just the docker build command. ✅ It works, so the problem is with docker compose.
Comment out the RUN ping … command so the build does not fail, and run sleep Infinity from the compose config. I then connected to the container via docker exec -it <container> sh and was able to ping google.com and run npm ci. So when the container is running it has access to DNS; the problem happens only during the docker compose build stage of the Dockerfile.
Environment
It's a VPS on Hetzner. I SSH in as a user with sudo and docker group membership.

In docker-compose, why can one service reach another, but not the other way around?

I'm writing an automated test that involves running several containers at once. The test submits some workload to the tested service, and expects a callback from it after a time.
To run the whole system, I use docker compose run with the following docker-compose file:
version: "3.9"
services:
service:
build: ...
ports: ...
tester:
image: alpine
depends_on:
- service
profiles:
- testing
The problem is, I can see "service" from "tester", but not the other way around, so the callback from the service cannot reach "tester":
$ docker compose -f .docker/docker-compose.yaml run --rm tester \
nslookup service
Name: service
Address 1: ...
$ docker compose -f .docker/docker-compose.yaml run --rm service \
nslookup tester
** server can't find tester: NXDOMAIN
I tried specifying the same network for them, and giving them "links", but the result is the same.
It seems like a very basic issue, so perhaps I'm missing something?
When you docker-compose run some-container, it starts a temporary container from that service's description plus whatever it depends_on:. So when you docker-compose run service ..., the service doesn't depends_on: anything, and Compose starts only the temporary container, which is why the tester container doesn't exist at that point.
If you need the whole stack up to make connections both ways between containers, you need to run docker-compose up -d. You can still docker-compose run temporary containers on top of these.
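Applied to the commands from the question, that might look like this (a sketch; note that tester sits behind the testing profile in the compose file shown, so the profile has to be enabled for up to start it):
$ docker compose -f .docker/docker-compose.yaml --profile testing up -d
$ docker compose -f .docker/docker-compose.yaml run --rm service nslookup tester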

Development workflow for server and client using Docker Compose?

I'm developing a server and its client simultaneously, and I'm running them in Docker containers. I'm using Docker Compose to link them up and it works just fine for production, but I can't figure out how to make it work in a development workflow where I've got a shell running for each one.
My docker-compose-devel.yml:
server:
image: node:0.10
client:
image: node:0.10
links:
- server
I can do docker-compose up client or even docker-compose run client but what I want is a shell running for both server and client so I can make rapid changes to both as I develop iteratively.
I want to be able to do docker-compose run server bash in one window and docker-compose run --no-deps client bash in another window. The problem with this is that no address for the server is added to /etc/hosts on the client because I'm using docker-compose run instead of up.
The only solution I can figure out is to use docker run and give up on Docker Compose for development. Is there a better way?
Here's a solution I came up with that's hackish; please let me know if you can do better.
docker-compose-devel.yml:
server:
  image: node:0.10
  command: sleep infinity
client:
  image: node:0.10
  links:
    - server
In window 1:
docker-compose --file docker-compose-devel.yml up -d server
docker exec --interactive --tty $(docker-compose --file docker-compose-devel.yml ps -q server) bash
In window 2:
docker-compose --file docker-compose-devel.yml run client bash
I guess your main problem is restarting the application when the code changes.
Personally, I launch my applications in development containers using forever:
forever -w -o log/out.log -e log/err.log app.js
The -w option restarts the server when there is a change in the code.
I use a .foreverignore file to exclude changes to some files:
**/.tmp/**
**/views/**
**/assets/**
**/log/**
If needed, I can also launch a shell in a running container:
docker exec -it my-container-name bash
This way, your two applications can restart independently without you having to relaunch them yourself, and you can still open a shell to do whatever you want.
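As a sketch of how that might look in the compose file (assuming the source is mounted in and forever is installed on start; the paths are placeholders):
server:
  image: node:0.10
  working_dir: /src
  volumes:
    - ./src:/src
  command: sh -c "npm install -g forever && forever -w -o log/out.log -e log/err.log app.js"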
Edit: a new proposal, considering that you need two interactive shells and not simply the ability to relaunch the apps on code changes.
Since you have two distinct applications, you could have a docker-compose configuration for each one.
The docker-compose.yml for the "server" app could contain this kind of information (I added different kinds of configuration for the example):
server:
  image: node:0.10
  links:
    - db
  ports:
    - "8080:80"
  volumes:
    - ./src:/src
db:
  image: postgres
  environment:
    POSTGRES_USER: dev
    POSTGRES_PASSWORD: dev
The docker-compose.yml from the "client" app could use external_links to be able to connect to the server.
client:
  image: node:0.10
  external_links:
    - project_server_1:server # Use "docker ps" to find the name of the server's container
  ports:
    - "80:80"
  volumes:
    - ./src:/src
Then, use docker-compose run --service-ports service-name bash to launch each configuration with an interactive shell.
Alternatively, the extra_hosts key may also do the trick, by calling the server app through a port exposed on the host machine.
With this solution, each docker-compose.yml file could be committed to the repository of the related app.
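A sketch of that extra_hosts variant, with a placeholder IP for the host machine where the server's port is published:
client:
  image: node:0.10
  extra_hosts:
    - "server:192.168.1.100" # placeholder: the host machine's IP
  ports:
    - "80:80"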
First thing to mention: for a development environment you want to use volumes in docker-compose to mount your app into the container when it's started (at runtime). Apologies if you're already doing this, but it's not clear from your docker-compose.yml.
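For instance, a volumes sketch along these lines (the paths are assumptions):
server:
  image: node:0.10
  volumes:
    - ./server:/src # mount the app source at runtime so edits show up immediately
client:
  image: node:0.10
  volumes:
    - ./client:/src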
To answer your specific question: start your containers normally, then run docker-compose ps and you'll see the names of your containers, for example 'web_server' and 'web_client' (where web is the directory containing your docker-compose.yml file, or the name of the project).
Once you have the name of the container you want to connect to, you can run bash inside the container that's running your server:
docker exec -it web_server bash
If you want to learn more about setting up a development environment for a reasonably complex app, check out this article on development with docker-compose.

Difference between docker-compose and manual commands

What I'm trying to do
I want to run a Yesod web application in one Docker container, linked to a Postgres database in another Docker container.
What I've tried
I have the following file hierarchy:
/
api/
Dockerfile
database/
Dockerfile
docker-compose.yml
The docker-compose.yml looks like this:
database:
  build: database
api:
  build: api
  command: .cabal/bin/yesod devel # dev setting
  environment:
    - HOST=0.0.0.0
    - PGHOST=database
    - PGPORT=5432
    - PGUSER=postgres
    - PGPASS
    - PGDATABASE=postgres
  links:
    - database
  volumes:
    - api:/home/haskell/
  ports:
    - "3000:3000"
Running sudo docker-compose up either fails to start the api container at all or, just as often, fails with the following error:
api_1 | Yesod devel server. Press ENTER to quit
api_1 | yesod: <stdin>: hGetLine: end of file
personal_api_1 exited with code 1
If, however, I run sudo docker-compose up database & and then start the api container without using compose, instead using
sudo docker run -p 3000:3000 -itv /home/me/projects/personal/api/:/home/haskell --link personal_database_1:database personal_api /bin/bash
I can export the environment variables set in the docker-compose.yml file, then manually run yesod devel and visit my site successfully on localhost.
Finally, I get a third, different behaviour if I run sudo docker-compose run api on its own. This seems to start successfully, but I can't access the page in my browser. By running sudo docker-compose run api /bin/bash I've been able to explore this container, and I can confirm that the environment variables from docker-compose.yml are all set correctly.
Desired behaviour
I would like to get the same result I achieve by running the database in the background and manually setting up the environment in the api container's shell, simply by running sudo docker-compose up.
Question
Clearly the three different approaches I'm trying do slightly different things. But from my understanding of docker and docker-compose I would expect them to be essentially equivalent. Please could someone explain how and why they differ and, if possible, how I might achieve my desired result?
The error message suggests the API container is expecting input from the command line, which requires a TTY to be present in the container.
In your "manual" start, you tell Docker to create a TTY in the container via the -t flag (-itv is shorthand for -i -t -v), so the API container runs successfully.
To achieve the same in docker-compose, add a tty key to the API service in your docker-compose.yml and set it to true:
database:
  build: database
api:
  build: api
  tty: true # <--- enable TTY for this service
  command: .cabal/bin/yesod devel # dev setting
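Since the manual command also passed -i (the interactive half of -itv) and yesod devel reads from stdin ("Press ENTER to quit"), it may also be necessary to set stdin_open, the Compose equivalent of -i. A sketch:
api:
  build: api
  tty: true          # equivalent of docker run -t
  stdin_open: true   # equivalent of docker run -i; keeps stdin open so hGetLine doesn't hit end of file
  command: .cabal/bin/yesod devel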
