Executing a Logstash command outside of a Docker container

I want to execute a Logstash command to start importing to Elasticsearch without entering the ELK Docker container.
This doesn't work:
docker exec -it docker_elk_1 opt/logstash/bin/logstash -f /home/configs/logstash-logs.config
It prints Successfully started Logstash API endpoint {:port=>9600}, but then simply exits.
However, this does work, though I have to enter the Docker container first:
docker exec -it docker_elk_1 bin/bash
Then
opt/logstash/bin/logstash -f /home/configs/logstash-logs.config
Thanks
docker-compose.yml
elk:
  image: sebp/elk
  volumes:
    - ${PWD}:/home/configs
  ports:
    - "5601:5601"
    - "9200:9200"
    - "5044:5044"

I'm not sure I understand, but you can try:
docker exec -it docker_elk_1 /bin/bash -c 'opt/logstash/bin/logstash -f /home/configs/logstash-logs.config'
or
docker-compose exec elk /bin/bash -c 'opt/logstash/bin/logstash -f /home/configs/logstash-logs.config'
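The -c flag is what makes the difference: the whole quoted string is executed by a single shell inside the container, so relative paths and chained commands behave as they would in an interactive session. A minimal local illustration (no Docker needed):

```shell
# `-c` hands the entire quoted string to one shell process,
# so `cd` and `&&` are interpreted by that shell.
/bin/bash -c 'cd /tmp && pwd'
```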

How do I run two commands as part of a single docker-compose "command"?

I’m using Docker v. 20.10.12. I have set up this container in my docker-compose file …
web:
  restart: always
  build: ./my-web
  ports:
    - "3000:3000"
  expose:
    - '3000'
  env_file: ./my-web/.docker_env
  command: rm -f /app/tmp/pids/*.pid && foreman start -f Procfile.hot
  volumes:
    - ./my-web/:/app
  depends_on:
    - db
When I start it with “docker-compose up”, the above container always displays this message:
my-web-1 exited with code 0
There is no other info before that. However, if I change the command to be
command: tail -F anything
And then log in to the docker container, I can run
rm -f /app/tmp/pids/*.pid && foreman start -f Procfile.hot
just fine without any errors. How can I run that as part of starting up my Docker container, without the manual steps above?
Compose does not run the string form of command: through a shell, so in your version && and the foreman arguments are passed to rm as literal file names; since rm -f ignores files that do not exist, the process exits 0 immediately. Wrap the whole line in a shell:
command: /bin/sh -c "rm -f /app/tmp/pids/*.pid && foreman start -f Procfile.hot"
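A quick way to see why the unwrapped form exits 0, sketched locally without Docker: without a shell, "&&" and everything after it reach rm as literal file names, and rm -f silently ignores paths that do not exist (the placeholder names below are hypothetical):

```shell
# Simulate the argv Compose builds from the unwrapped command line:
# "&&" and the placeholder words are just extra (nonexistent)
# file arguments to rm, which `-f` silently ignores.
rm -f /tmp/pids-demo.pid '&&' foreman-placeholder start-placeholder
echo "exit status: $?"   # prints: exit status: 0
```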
Alternatively, put all your commands in an external shell script, and in docker-compose use:
command: /bin/sh yourshfile.sh
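For instance (a sketch; the script path and its echo line are placeholders for the real foreman startup):

```shell
# Put the startup steps in a script...
cat > /tmp/yourshfile.sh <<'EOF'
rm -f /tmp/pids-demo/*.pid
echo "starting app"
EOF
# ...then have the container's command run it through the shell.
/bin/sh /tmp/yourshfile.sh
```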

Starting docker containers

I have a docker-compose.yml file that starts two services: amazon/dynamodb-local on 8000 port and django-service. django-service runs tests that are dependent on dynamodb-local.
Here is working docker-compose.yml:
version: '3.8'
services:
  dynamodb-local:
    image: "amazon/dynamodb-local:latest"
    container_name: dynamodb-local
    ports:
      - "8000:8000"
  django-service:
    depends_on:
      - dynamodb-local
    image: django-service
    build:
      dockerfile: Dockerfile
      context: .
    env_file:
      - envs/tests.env
    volumes:
      - ./:/app
    command: sh -c 'cd /app && pytest tests/integration/ -vv'
Now I need to run this without docker-compose, only using docker itself. I try to do following:
docker network create -d bridge net  # create a network for dynamodb-local and django-service
docker run --network=net --rm -p 8000:8000 -d amazon/dynamodb-local:latest  # run the container attached to the network
docker run --network=net --rm --env-file ./envs/tests.env -v `pwd`:/app django-service /bin/sh -c 'env && cd /app && pytest tests/integration -vv'
I can see that both containers start, but the tests can't connect to DynamoDB.
Where is the problem? Any comment or help is appreciated!
In the docker-compose.yml, the amazon/dynamodb-local container has an explicit name (container_name: dynamodb-local; if this property is not set, docker-compose uses the service name as the container name). This lets other containers on the same network address it by that name.
In the docker run commands, no explicit container name is set, so the hostname dynamodb-local never resolves. You can set one with docker run ... --name dynamodb-local .... More details can be found in the corresponding docker run documentation.
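Applied to the commands from the question, the fix is just the extra --name flag (an untested sketch; it assumes a running Docker daemon and the images described above):

```shell
docker network create -d bridge net
# give the container the name the tests resolve as a hostname
docker run --network=net --name dynamodb-local --rm -p 8000:8000 -d amazon/dynamodb-local:latest
docker run --network=net --rm --env-file ./envs/tests.env -v "$PWD":/app \
  django-service /bin/sh -c 'cd /app && pytest tests/integration -vv'
```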

How can you automatically run and remove a container in docker compose

I'm looking to forward my ssh-agent and found this:
https://github.com/nardeas/ssh-agent
and the steps are the following
0. Build
Navigate to the project directory and launch the following command to build the image:
docker build -t docker-ssh-agent:latest -f Dockerfile .
1. Run a long-lived container
docker run -d --name=ssh-agent docker-ssh-agent:latest
2. Add your ssh keys
Run a temporary container with volume mounted from host that includes your SSH keys. SSH key id_rsa will be added to ssh-agent (you can replace id_rsa with your key name):
docker run --rm --volumes-from=ssh-agent -v ~/.ssh:/.ssh -it docker-ssh-agent:latest ssh-add /root/.ssh/id_rsa
The ssh-agent container is now ready to use.
3. Add ssh-agent socket to other container:
If you're using docker-compose this is how you forward the socket to a container:
volumes_from:
  - ssh-agent
environment:
  - SSH_AUTH_SOCK=/.ssh-agent/socket
In a compose file, I add step 1 like so:
services:
  ssh_agent:
    image: nardeas/ssh-agent
However, I don't know the equivalent compose-file syntax for step 2:
docker run --rm --volumes-from=ssh-agent -v ~/.ssh:/.ssh -it docker-ssh-agent:latest ssh-add /root/.ssh/id_rsa
You can do it as below:
docker-compose -f my-docker-compose.yml run --rm ssh_agent bash -c "ssh-add /root/.ssh/id_rsa"
Ref - https://docs.docker.com/compose/reference/run/
The docker-compose.yml file will be:
services:
  ssh_agent:
    image: docker-ssh-agent:latest
    command: ssh-add /root/.ssh/id_rsa
    volumes_from:
      - ssh-agent
    environment:
      - SSH_AUTH_SOCK=/.ssh-agent/socket
    volumes:
      - ~/.ssh:/.ssh
then run the docker-compose command as below
docker-compose -f docker-compose.yml run --rm ssh_agent
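For completeness, the README's long-lived agent (step 1) and the one-off key-adding run (step 2) can also be combined in one compose file; a sketch (the add-keys service name is my own, and it assumes a v2-format file, since volumes_from was removed in v3):

```yaml
services:
  ssh-agent:
    image: docker-ssh-agent:latest
    container_name: ssh-agent
  add-keys:
    image: docker-ssh-agent:latest
    volumes_from:
      - ssh-agent
    volumes:
      - ~/.ssh:/.ssh
    command: ssh-add /root/.ssh/id_rsa
```

Then docker-compose up -d ssh-agent starts the agent (step 1), and docker-compose run --rm add-keys performs step 2.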

Calling redis-cli in docker-compose setup

I run the official Redis image https://hub.docker.com/_/redis/ in a docker-compose setup.
myredis:
  image: redis
How can I run redis-cli against that image with docker-compose?
I tried the following, but it didn't connect:
docker-compose run myredis redis-cli
> Could not connect to Redis at 127.0.0.1:6379: Connection refused
The docs of the image says that I should run:
docker run -it --rm \
--link some-redis:redis \
redis \
sh -c 'exec redis-cli -h "$REDIS_PORT_6379_TCP_ADDR" -p "$REDIS_PORT_6379_TCP_PORT"'
How does this translate to docker-compose run?
That would override the default CMD [ "redis-server" ]: you are trying to run redis-cli on a container where the redis-server was never executed.
As mentioned here, you can also test with:
docker exec -it myredis redis-cli
From docker-compose, as mentioned in this docker/compose issue 2123:
rcli:
  image: redis:latest
  links:
    - redis
  command: >
    sh -c 'redis-cli -h redis'
This should also work:
rcli:
  image: redis:latest
  links:
    - redis
  command: redis-cli -h redis
As the OP ivoba confirms (in the comments), the last form works.
Then:
docker-compose run rcli
ivoba also adds:
docker-compose run redis redis-cli -h redis also works when the containers are running.
This way it's not necessary to declare a separate rcli container.
You can also use this command:
docker-compose run myredis redis-cli -h myredis
I followed VonC's suggestion, but in my case Redis runs on a predefined network, so it did not work.
When the redis container runs on a specific network, the networks field must also be specified in the docker-compose.yaml file:
rcli:
  image: redis:latest
  links:
    - redis
  command: redis-cli -h redis
  networks:
    - <network redis run on>

Execute command in linked docker container

Is there any way to execute a command from inside one Docker container in a linked Docker container?
I don't want to exec the command from the host.
As long as you have access to something like the Docker socket within your container, you can run any command in any other container, whether or not it is linked. For example:
# run a container and link it to `other`
docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock \
--link other:other myimage bash -l
bash$ docker exec --it other echo hello
This works even if the link was not specified.
With docker-compose:
version: '2.1'
services:
  site:
    image: ubuntu
    container_name: test-site
    command: sleep 999999
  dkr:
    image: docker
    privileged: true
    working_dir: "/dkr"
    volumes:
      - ".:/dkr"
      - "/var/run/docker.sock:/var/run/docker.sock"
    command: docker ps -a
Then try:
docker-compose up -d site
docker-compose up dkr
result:
Attaching to tmp_dkr_1
dkr_1 | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
dkr_1 | 25e382142b2e docker "docker-entrypoint..." Less than a second ago Up Less than a second tmp_dkr_1
Example Project
https://github.com/reduardo7/docker-container-access
As Abdullah Jibaly said, you can do that, but there are security issues to consider. There are also Docker SDKs you can use; for Python applications, see the Docker SDK for Python.
