Redis Docker not linking with other docker containers

I have two docker images: one is jobservice and the other is redis. I tried to link the redis container to my jobservice container using the --link flag.
The error says the docker image cannot be found.
When I remove the --link flag, it works fine.
Two docker images
$ docker image ls
gcr.io/sighmo-development/jobservice 1.0.1 f0a1a4458f89 11 seconds ago 874MB
redis latest f7302e4ab3a8 2 weeks ago 98.2MB
Docker ps command
$ docker ps
848cf2992a34 redis "docker-entrypoint.s…" 8 hours ago Up 8 hours 6379/tcp some-redis
docker command to run jobservice
$ docker run -d \
--env-file /home/amareswaran_cloud/lookmyjobs-repo/LOOK_MY_JOBS/docker-env/env.list \
-v /home/amareswaran_cloud/lookmyjobs-volume/jobservice:/home/ssl --name=jobservice \
--link discovery:discovery \
--link sc_kafka:kafka \
--link scdb:scdb \
--link sc_redis:some-redis \
gcr.io/sighmo-development/jobservice:1.0.1
I expected the docker command to link with redis, but instead docker reports that the image is not found.

You have the container name and alias reversed. The container name should be first, and according to docker ps, your container is named some-redis:
--link some-redis:sc_redis
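Applied to your full command, only the redis line changes (a sketch assuming the other linked containers keep the names shown above):
$ docker run -d \
  --env-file /home/amareswaran_cloud/lookmyjobs-repo/LOOK_MY_JOBS/docker-env/env.list \
  -v /home/amareswaran_cloud/lookmyjobs-volume/jobservice:/home/ssl --name=jobservice \
  --link discovery:discovery \
  --link sc_kafka:kafka \
  --link scdb:scdb \
  --link some-redis:sc_redis \
  gcr.io/sighmo-development/jobservice:1.0.1
With this, the container some-redis is reachable inside jobservice under the alias sc_redis.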

It seems you're running separate containers that aren't managed by a Compose file, and I strongly suggest you use one, for several reasons:
you can achieve IaC (Infrastructure as Code) and commit it in a human-readable form
you can reproduce the whole stack with a single command (docker-compose up), and tear it down just as easily (docker-compose down)
you can easily use Docker networks and avoid the --link feature, which is deprecated
That said, I'm missing some information needed to translate your current deployment into a Compose-based reference (I'm referring to sc_kafka, scdb and sc_redis), so YMMV, but it should be enough to add the required services.
First of all, ensure docker-compose is installed and on your PATH, then put the content of the following docker-compose.yml in your working directory (I assume /home/amareswaran_cloud/lookmyjobs-repo).
version: '3.7'
services:
  redis:
    image: redis:latest
  sc_kafka:
    image: <KAFKA_IMAGE>
  scredis:
    image: <REDIS_IMAGE>
  scdb:
    image: <DB_IMAGE>
  jobservice:
    image: gcr.io/sighmo-development/jobservice:1.0.1
    env_file:
      - ./LOOK_MY_JOBS/docker-env/env.list
    volumes:
      - ./../lookmyjobs-volume/jobservice:/home/ssl
With this simple Compose file, all containers can reach each other: just use the {SERVICE_NAME} DNS name and there you go.
An additional improvement would be to set up several networks to segregate services properly, but that's a next step you can take on your own later.
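As a rough sketch of that idea (the network name backend is illustrative, not from your setup), you would declare named networks and attach services to them:
version: '3.7'
services:
  jobservice:
    image: gcr.io/sighmo-development/jobservice:1.0.1
    networks:
      - backend
  redis:
    image: redis:latest
    networks:
      - backend
networks:
  backend:
    driver: bridge
Services on the same named network can resolve each other by service name, just like on the default network.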

Related

What is the difference between docker run -p and ports in docker-compose.yml?

I would like to use a standard way of running my docker containers. I have been keeping a docker_run.sh file, but docker-compose.yml looks like a better choice. This seems to work great until I try to access my website running in the container: the ports don't seem to be set up correctly.
Using the following docker_run.sh, I can access the website at localhost. I expected the following docker-compose.yml file to have the same results when I use the docker-compose run web command.
docker_run.sh
docker build -t web .
docker run -it -v /home/<user>/git/www:/var/www -p 80:80/tcp -p 443:443/tcp -p 3316:3306/tcp web
docker-compose.yml
version: '3'
services:
  web:
    image: web
    build: .
    ports:
      - "80:80"
      - "443:443"
      - "3316:3306"
    volumes:
      - "../www:/var/www"
Further analysis
The ports are reported as the same in docker ps and docker-compose ps. Note: these were not up at the same time.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
<id> web "/usr/local/scripts/…" About an hour ago Up About an hour 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 0.0.0.0:3307->3306/tcp <name>
$ docker-compose ps
Name Command State Ports
---------------------------------------------------------------------------------------------------------------
web /usr/local/scripts/start_s ... Up 0.0.0.0:3316->3306/tcp, 0.0.0.0:443->443/tcp, 0.0.0.0:80->80/tcp
What am I missing?
As @richyen suggests in a comment, you want docker-compose up instead of docker-compose run.
docker-compose run...
Runs a one-time command against a service.
That is, it's intended to run something like a debugging shell or a migration script, in the overall environment specified by the docker-compose.yml file, but not the standard command specified in the Dockerfile (or the override in the YAML file).
Critically to your question,
...docker-compose run [...] does not create any of the ports specified in the service configuration. This prevents port collisions with already-open ports. If you do want the service’s ports to be created and mapped to the host, specify the --service-ports flag.
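So, if you do want a one-off docker-compose run to expose the ports, the flag from the docs above applies (shown here against your web service):
$ docker-compose run --service-ports web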
Beyond that, the docker run command you show and the docker-compose.yml file should be essentially equivalent.
You don't run a docker-compose.yml the same way you would run a local docker image that you have either pulled or built on your machine. Compose files are typically launched with docker-compose up -d to run in detached mode. Then when you run docker ps you should see the container running. You can also run docker-compose ps as you did above.

Don't create containers twice with docker-compose up and running docker run containers

I'd like docker-compose to use an already running container for imageA and not create it a second time when calling docker-compose up -d. The original container was run using docker run.
Steps:
I started a container with docker run, e.g.
docker run --name imageA -d -p 5000:5000 imageA
I then call docker-compose up -d with a docker-compose.yml file that includes a service with the same name and image as the first container.
version: "3"
services:
imageA:
image: imageA
ports:
- "5000:5000"
imageB:
image: imageB
ports:
- "5001:5001"
What happens:
docker-compose tries to create imageA and fails when it tries to bind port 5000, since the container imageA has already bound it.
Question: how can docker-compose "adopt" or "include" the first container without trying to create it again?
I don't believe this is currently possible. If you compare the outputs of docker ps and docker-compose ps, you should notice that docker-compose ps does not show imageA running if it was started with docker run.
Docker Compose only cares about the services defined in its compose files, and it does not seem to identify them by container name alone but by labels too; you currently cannot add labels to a running container.
Other than that, a container started with docker run will also not (at least by default) be on the same internal network as those started with docker-compose.
So your best option would be either:
a) removing the already running container from the compose file,
b) calling docker-compose up -d imageB to run only that individual service, so that compose updates only it, or
c) just stopping the already running container and starting it again with compose (see the sketch below).
Docker containers should anyway be created in a way that it is easy and acceptable to just restart them when needed.
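A minimal sketch of option c), assuming the first container is still named imageA as in the docker run command above:
$ docker stop imageA
$ docker rm imageA
$ docker-compose up -d
After this, compose owns the imageA service, and subsequent docker-compose up -d calls will manage it normally.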
Adding the --no-recreate flag will prevent recreation of the container if it already exists.
Example:
docker-compose -f docker-compose-example.yaml up -d --no-recreate

Using the same containers with multiple projects on a local host

Problem
I have a few projects on my computer, all under development.
I started working with docker a few months ago, and I share some config between projects.
I clear my container list each time. Why? Because my process usually needs to look like:
Delete all containers: docker rm $(docker ps -a -q)
Delete all images: docker rmi $(docker images -q)
Run docker-compose up -d
The problem occurs everywhere I have defined composer (as in the config below): when I switch projects but don't delete images/containers, the composer/composer container already exists and doesn't start.
Of course, I use more services than this, but this is the simplest one.
docker-compose.yml
version: '2'
services:
  php:
    image: php:7.1.3-alpine
    volumes:
      - ./:/app
    working_dir: /app
  composer:
    image: composer/composer
    volumes_from:
      - php
    working_dir: /app
My env
Mac OSX 10.11.6
Docker for mac: 17.03.1-ce-mac12 (17661)
Till now
I have searched for texts like "Docker multi use of containers" and "docker multi projects on local with same container", and read some blogs about configuration, but didn't find a hint.
Summary
The topic is too wide and there are too many pages, so maybe I have missed something in my understanding of the concept or the config.
It would be nice to get some explanation and hints on how to manage docker-compose.yml files like this well, and to learn what was wrong in my process.
Thanks.
Your question isn't very clear, but it sounds like you're using docker-compose to bring the containers up but relying on docker rm/docker rmi to take them down. Try doing everything with Docker Compose. Bring services up:
docker-compose up -d
Take services down but let volumes persist:
docker-compose down
Take services down and destroy volumes:
docker-compose down --volumes
https://docs.docker.com/compose/reference/down/
For the compose file you posted, you shouldn't generally need to use docker rm/docker rmi.
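A typical switch between two projects would then look like this (paths are illustrative):
$ cd ~/projects/project-a
$ docker-compose down
$ cd ~/projects/project-b
$ docker-compose up -d
This way each project's containers are created and removed by Compose itself, and nothing is left behind to collide with the next project.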

Docker crash test with many containers of the same image

I would like to run a docker crash test on my server, to see how many containers based on the same image my server will support. (I've installed jupyterhub and I want to see how many containers can run in good condition.)
So how can I copy an existing container?
No need to copy an existing container, just create new ones of the same image. For your purposes I would recommend using the scale feature of docker-compose.
docker-compose.yml:
web:
  image: <someimage>
db:
  image: <someotherimage>
Then simply specify the number of containers you would like to start:
$ docker-compose up -d
$ docker-compose ps
$ docker-compose scale web=15 db=3
$ docker-compose ps
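Note that the standalone scale command has since been deprecated; on newer Compose versions the equivalent, as far as I know, is the --scale flag on up:
$ docker-compose up -d --scale web=15 --scale db=3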

Using Docker-Compose to spin up multiple instances of a container with different configurations

I understand that you can use docker-compose with the scale command to spin up multiple containers. However, they will all use the same configuration.
Is it possible to launch multiple instances of a container on the same host with different configurations (different .yml files)?
Using the following commands:
docker-compose -f dev.yml up -d
docker-compose -f qa.yml up -d
only the qa.yml container will be running, which is not what I want.
-- edit --
Here's what happens when I try running both commands.
$ docker-compose -f compose/dev.yml up -d
compose_mydocker_1 is up-to-date
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
905912df6e48 compose_mydocker "/sbin/my_init" 2 days ago Up 2 days 0.0.0.0:1234->80/tcp compose_mydocker_1
$ docker-compose -f compose/qa.yml up -d
Recreating compose_mydocker_1...
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3fc912201224 compose_mydocker "/sbin/my_init" 5 seconds ago Up 5 seconds 0.0.0.0:1235->80/tcp compose_mydocker_1
My qa.yml and dev.yml look like this:
mydocker:
  build: ..
  ports:
    - "1234:80" # for dev.yml
    # - "1235:80" for qa.yml
  environment:
    - ENVIRONMENT=dev # and vice-versa for qa
  volumes:
    - ../assets/images:/var/www/assets
What you need to do is change the project name. By default, Compose uses a project name based on the current directory. In your case, you want separate environments, so you need different project names.
You can use either docker-compose -p <project_name> or set COMPOSE_PROJECT_NAME in the environment.
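For example (a sketch using your file paths), distinct project names let both environments run side by side:
$ docker-compose -p dev -f compose/dev.yml up -d
$ docker-compose -p qa -f compose/qa.yml up -d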
There is also some discussion about providing a way to persist the project name: https://github.com/docker/compose/issues/745
