How to fix 'Cookie file /var/lib/rabbitmq/.erlang.cookie must be accessible by owner only' error in Windows Server 2019 with DockerProvider service - docker

I installed Docker on Windows Server 2019 with DockerProvider.
I'm using this code:
Install-Module DockerProvider
Install-Package Docker -ProviderName DockerProvider -RequiredVersion preview
[Environment]::SetEnvironmentVariable("LCOW_SUPPORTED", "1", "Machine")
After that I installed Docker-Compose with this code:
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
Invoke-WebRequest "https://github.com/docker/compose/releases/download/1.24.0/docker-compose-Windows-x86_64.exe" -UseBasicParsing -OutFile $Env:ProgramFiles\Docker\docker-compose.exe
After that I use this docker-compose file:
version: "3.5"
services:
rabbitmq:
# restart: always
image: rabbitmq:3-management
container_name: rabbitmq
ports:
- 5672:5672
- 15672:15672
networks:
- myname
# network_mode: host
volumes:
- rabbitmq:/var/lib/rabbitmq
networks:
myname:
name: myname-network
volumes:
rabbitmq:
driver: local
Everything is OK up to here,
but after I open the http://localhost:15672/ URL in my browser,
rabbitmq crashes and I see this error in docker logs <container-id>:
Cookie file /var/lib/rabbitmq/.erlang.cookie must be accessible by owner only
This .yml file works correctly in Docker for Windows,
but after running it on Windows Server, I see this error.

The solution is to mount the volume at a different path, so the cookie file is not created on the mounted volume; see
https://github.com/docker-library/rabbitmq/issues/171#issuecomment-316302131
So for your example, not:
- rabbitmq:/var/lib/rabbitmq
but:
- rabbitmq:/var/lib/rabbitmq/mnesia
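Applied to the compose file from the question, only the volume mapping of the rabbitmq service changes:
services:
  rabbitmq:
    image: rabbitmq:3-management
    container_name: rabbitmq
    ports:
      - 5672:5672
      - 15672:15672
    networks:
      - myname
    volumes:
      - rabbitmq:/var/lib/rabbitmq/mnesia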

You also have the option to override the command of the docker image to fix the issue it is complaining about. Assuming that your cookie file is /var/lib/rabbitmq/.erlang.cookie, replace the original docker image command, which is probably:
["rabbitmq-server"]
with:
["bash", "-c", "chmod 400 /var/lib/rabbitmq/.erlang.cookie; rabbitmq-server"]
In your docker-compose file it will look like this:
...
    image: rabbitmq:3-management
    ...
    ports:
      - "5672:5672"
      - "15672:15672"
    volumes:
      - ...
    command: ["bash", "-c", "chmod 400 /var/lib/rabbitmq/.erlang.cookie; rabbitmq-server"]
Of course, this introduces some workaround/technical debt: you are assuming that rabbitmq-server will keep behaving like this in the future.
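To check that the workaround took effect, you can inspect the cookie file's permissions inside the running container (using the container name rabbitmq from the compose file above):
docker exec rabbitmq ls -l /var/lib/rabbitmq/.erlang.cookie
The file should show up with mode -r-------- (400).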

Related

Can't run jenkins image in docker

I have just started to learn Docker.
I have tried to run Jenkins in Docker.
I have tried the commands:
docker run jenkins
docker run jenkins:latest
But they show this error in the Docker interactive shell:
C:\Program Files\Docker Toolbox\docker.exe: Error response from daemon: manifest for jenkins:latest not found: manifest unknown: manifest unknown.
You can run the container by using the command
docker run -p 8080:8080 -p 50000:50000 jenkins/jenkins:lts
The documentation page is pretty good.
I would use a docker-compose file to:
- mount a volume for home to make it persistent (in order to look into the build workspace you need to attach another container to it)
- control the version programmatically
- add docker client or other utilities installed later
- add 'fixed' agents
docker compose file:
version: '3.5'
services:
  jenkins-server:
    build: ./JenkinsServer
    container_name: jenkins
    restart: always
    environment:
      JAVA_OPTS: "-Xmx1024m"
    ports:
      - "50000:50000"
      - "8080:8080"
    networks:
      jenkins:
        aliases:
          - jenkins
    volumes:
      - jenkins-data:/var/jenkins_home
networks:
  jenkins:
    external: true
volumes:
  jenkins-data:
    external: true
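Note that both the network and the volume are declared as external here, so they must exist before docker-compose up is run; creating them once is enough (names taken from the compose file above):
docker network create jenkins
docker volume create jenkins-data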
Dockerfile for the server:
FROM jenkins/jenkins:2.263.2-lts
USER root
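If you also want the docker client or other utilities baked into the image, as mentioned above, the Dockerfile can be extended; a sketch, assuming the Debian-based jenkins/jenkins base image and the docker.io package from the Debian repositories:
FROM jenkins/jenkins:2.263.2-lts
USER root
# example of adding extra tooling: the docker CLI from the Debian repositories
RUN apt-get update && \
    apt-get install -y --no-install-recommends docker.io && \
    rm -rf /var/lib/apt/lists/*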

Unable to connect mysql from docker container?

I have created a docker-compose file with two services, Go and MySQL. It creates containers for Go and MySQL. Now I am running code which tries to connect to the MySQL database running as a docker container, but I get an error.
docker-compose.yml
version: "2"
services:
app:
container_name: golang
restart: always
build: .
ports:
- "49160:8800"
links:
- "mysql"
depends_on:
- "mysql"
mysql:
image: mysql
container_name: mysql
volumes:
- dbdata:/var/lib/mysql
restart: always
environment:
- MYSQL_ROOT_PASSWORD=root
- MYSQL_DATABASE=testDB
- MYSQL_USER=root
- MYSQL_PASSWORD=root
ports:
- "3307:3306"
volumes:
dbdata:
Error while connecting to mysql database
golang | 2019/02/28 11:33:05 dial tcp 127.0.0.1:3306: connect: connection refused
golang | 2019/02/28 11:33:05 http: panic serving 172.24.0.1:49066: dial tcp 127.0.0.1:3306: connect: connection refused
golang | goroutine 19 [running]:
Connection with MySql Database
import (
	"log"

	"github.com/jinzhu/gorm"
	_ "github.com/jinzhu/gorm/dialects/mysql"
)

func DB() *gorm.DB {
	// the separator between the credentials and the tcp address must be '@'
	db, err := gorm.Open("mysql", "root:root@tcp(mysql:3306)/testDB?charset=utf8&parseTime=True&loc=Local")
	if err != nil {
		log.Panic(err)
	}
	log.Println("Connection Established")
	return db
}
EDIT: Updated Dockerfile
FROM golang:latest
RUN go get -u github.com/gorilla/mux
RUN go get -u github.com/jinzhu/gorm
RUN go get -u github.com/go-sql-driver/mysql
COPY ./wait-for-it.sh .
RUN chmod +x /wait-for-it.sh
WORKDIR /go/src/app
ADD . src
EXPOSE 8800
CMD ["go", "run", "src/main.go"]
I am using the gorm package, which lets me connect to the database.
depends_on is not a verification that MySQL is actually ready to receive connections. It starts the second container once the database container is running, regardless of whether it is ready for connections, which can lead to exactly this kind of issue: your application expects the database to be ready, and that might not be true.
Quoted from the documentation:
depends_on does not wait for db and redis to be “ready” before starting web - only until they have been started.
There are many tools/scripts that can be used to solve this issue, like wait-for, which is sh-compatible in case your image is based on Alpine, for example (you can use wait-for-it if you have bash in your image).
All you have to do is add the script to your image through the Dockerfile, then use a command like the one below in docker-compose.yml for the service that should wait for the database.
What comes after -- is the command that you would normally use to start your application.
version: "2"
services:
app:
container_name: golang
...
command: ["./wait-for", "mysql:3306", "--", "go", "run", "myapplication"]
links:
- "mysql"
depends_on:
- "mysql"
mysql:
image: mysql
...
I have removed some parts from the docker-compose for easier readability.
Replace the go run myapplication part with the CMD of your golang image.
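For completeness, the Dockerfile side is just copying the script into the image and making it executable; a minimal sketch, assuming the script is saved as wait-for next to the Dockerfile and the WORKDIR from the question's Dockerfile:
FROM golang:latest
WORKDIR /go/src/app
# copy the wait-for script into the working directory and make it executable
COPY ./wait-for ./wait-for
RUN chmod +x ./wait-for
The command: ["./wait-for", ...] entry above then finds the script in the container's working directory.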
See Controlling startup order for more on this problem and strategies for solving it.
Another issue that will arise after you solve the connection issue is the following:
Setting MYSQL_USER to root will cause a failure in MySQL with this error message:
ERROR 1396 (HY000) at line 1: Operation CREATE USER failed for 'root'@'%'
This is because that user already exists in the database and MySQL tries to create it again. If you need to use the root user itself, use only the MYSQL_ROOT_PASSWORD variable, or change the value of MYSQL_USER so you can securely use it in your application instead of the root user.
Update: In case you are getting not found and the path was correct, you might need to write the command as below:
command: sh -c "./wait-for mysql:3306 -- go run myapplication"
First, if you are using the latest version of docker-compose you don't need the links argument in your app service. Quoting the docker-compose documentation: "Warning: The --link flag is a legacy feature of Docker. It may eventually be removed. Unless you absolutely need to continue using it, …" https://docs.docker.com/compose/compose-file/#links
I think the solution is to use the networks argument. This creates a docker network and adds each service to it.
Try this:
version: "2"
services:
app:
container_name: golang
restart: always
build: .
ports:
- "49160:8800"
networks:
- my_network
depends_on:
- "mysql"
mysql:
image: mysql
container_name: mysql
volumes:
- dbdata:/var/lib/mysql
restart: always
networks:
- my_network
environment:
- MYSQL_ROOT_PASSWORD=root
- MYSQL_DATABASE=testDB
- MYSQL_USER=root
- MYSQL_PASSWORD=root
ports:
- "3307:3306"
volumes:
dbdata:
networks:
my_network:
driver: bridge
By the way, if you only connect to MySQL from your app service, you don't need to publish the MySQL port. If the containers run in the same network, they can reach each other on any port inside that network.
If my example doesn't work, try this:
Run docker-compose and then go into the app container using
docker container exec -it CONTAINER_NAME bash
Install ping in order to test the connection and then run ping mysql.
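Put together, the check could look like this (a sketch; the container name is taken from the compose file above, and ping is installed via apt since the golang image is Debian-based):
docker container exec -it golang bash
# inside the container:
apt-get update && apt-get install -y iputils-ping
ping mysql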

Running shell script against Localstack in docker container

I've been using localstack to develop a service against it locally. I've just been running their docker image via
docker run --rm -p 4567-4583:4567-4583 -p 8080:8080 localstack/localstack
And then I manually run a small script to set up my S3 buckets, SQS queues, etc.
Now, I'd like to make this easier for others so I thought I'd just add a Dockerfile and docker-compose.yml file. Unfortunately, when I try to get this up and running, using docker-compose up I get an error that the command from my setup script can't connect to the localstack services.
make_bucket failed: s3://localbucket Could not connect to the endpoint URL: "http://localhost:4572/localbucket"
Dockerfile:
FROM localstack/localstack
# since this is just local dev setup, localstack doesn't require anything specific here
ENV AWS_DEFAULT_REGION='[useast1]'
ENV AWS_ACCESS_KEY_ID='[lloyd]'
ENV AWS_SECRET_ACCESS_KEY='[christmas]'
COPY bin/localSetup.sh /localSetup.sh
COPY fixtures/notifications.json /notifications.json
RUN ["chmod", "+x", "/localSetup.sh"]
RUN pip install awscli
# expose service & web dashboard ports
EXPOSE 4567-4582 8080
ENTRYPOINT ["/localSetup.sh"]
docker-compose.yml
version: '3'
services:
  localstack:
    build: .
    ports:
      - "8080:8080"
      - "4567-4582:4567-4582"
localSetup.sh
#!/bin/bash
aws --endpoint-url=http://localhost:4572 s3 mb s3://localbucket
#additional similar calls but left off for brevity
I've tried switching localhost to 127.0.0.1 in my script commands, but I wind up with the same error. I'm probably missing something silly here.
There is another way to create your custom AWS resources when localstack freshly starts up. Since you already have a bash script for your resources, you can simply volume mount your script into localstack's init hook directory: /docker-entrypoint-initaws.d/ on older localstack images, or /etc/localstack/init/ready.d/ on current ones (as in the compose file below).
So my docker-compose file would be:
localstack:
  image: localstack/localstack:latest
  container_name: localstack_aws
  ports:
    - '4566:4566'
  volumes:
    - './localSetup.sh:/etc/localstack/init/ready.d/init-aws.sh'
Also, I would prefer awslocal over aws --endpoint in the bash script, as it takes care of the endpoint and credentials for you.
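With awslocal, the setup script from the question could then be reduced to something like this sketch (awslocal is bundled in the localstack image):
#!/bin/bash
# awslocal targets the local localstack endpoint, so no --endpoint-url is needed
awslocal s3 mb s3://localbucket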
Try adding a hostname to the docker-compose file and editing your entrypoint script to reflect that hostname.
docker-compose.yml
version: '3'
services:
  localstack:
    build: .
    hostname: localstack
    ports:
      - "8080:8080"
      - "4567-4582:4567-4582"
localSetup.sh
#!/bin/bash
aws --endpoint-url=http://localstack:4572 s3 mb s3://localbucket
This is the docker-compose-dev.yaml I used for testing an app that was using localstack. I ran it with docker-compose -f docker-compose-dev.yaml up, and I used the same localSetup.sh you used.
version: '3'
services:
  localstack:
    image: localstack/localstack
    hostname: localstack
    ports:
      - "4567-4584:4567-4584"
      - "${PORT_WEB_UI-8082}:${PORT_WEB_UI-8082}"
    environment:
      - SERVICES=s3
      - DEBUG=1
      - DATA_DIR=${DATA_DIR- }
      - PORT_WEB_UI=${PORT_WEB_UI- }
      - DOCKER_HOST=unix:///var/run/docker.sock
    volumes:
      - "${TMPDIR:-/tmp/localstack}:/tmp/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
    networks:
      - backend
  sample-app:
    image: "sample-app/sample-app:latest"
    networks:
      - backend
    links:
      - localstack
    depends_on:
      - "localstack"
networks:
  backend:
    driver: 'bridge'

How do I convert this docker command to docker-compose?

I run this command manually:
$ docker run -it --rm \
--network app-tier \
bitnami/cassandra:latest cqlsh --username cassandra --password cassandra cassandra-server
But I don't know how to convert it to a docker-compose file, especially the container's custom properties such as --username and --password.
What should I write in a docker-compose.yaml file to obtain the same result?
Thanks
Here is a sample of how others have done it: http://abiasforaction.net/apache-cassandra-cluster-docker/
You run the command via:
command:
and set the arguments via:
environment:
Remember, just because you can doesn't mean you should. Compose is not always the best way to launch something; often it can be the lazy way.
If you're running this as a service, I'd suggest building the Dockerfile first and then creating systemd/init scripts to rm/relaunch it.
an example cassandra docker-compose.yml might be
version: '2'
services:
  cassandra:
    image: 'bitnami/cassandra:latest'
    ports:
      - '7000:7000'
      - '7001:7001'
      - '9042:9042'
      - '9160:9160'
    volumes:
      - 'cassandra_data:/bitnami'
volumes:
  cassandra_data:
    driver: local
although this will not provide you with your command line arguments but will start it with the default CMD or ENTRYPOINT.
As you are actually running another command than the default, you might not want to do this with docker-compose. Or you can create a new Docker image with this command as the default and provide the username and password as ENVs,
e.g. something like this (untested)
FROM bitnami/cassandra:latest
ENV USER=cassandra
ENV PASSWORD=password
# shell form so that $USER and $PASSWORD are expanded at run time
CMD cqlsh --username "$USER" --password "$PASSWORD" cassandra-server
and you can build it
docker build -t mycassandra .
and run it with something like:
docker run -it -e "USER=foo" -e "PASSWORD=bar" mycassandra
or in docker-compose
services:
  cassandra:
    image: 'mycassandra'
    ports:
      - '7000:7000'
      - '7001:7001'
      - '9042:9042'
      - '9160:9160'
    environment:
      USER: user
      PASSWORD: pass
    volumes:
      - 'cassandra_data:/bitnami'
volumes:
  cassandra_data:
    driver: local
You might be looking for something like the following. Not sure if it is going to help you...
version: '3'
services:
  my_app:
    image: bitnami/cassandra:latest
    command: /bin/sh -c "cqlsh --username cassandra --password cassandra cassandra-server"
    ports:
      - "8080:8080"
    networks:
      - app-tier
networks:
  app-tier:
    external: true

Set seccomp to unconfined in docker-compose

I need to be able to fork a process. As I understand it, I need to set the security-opt. I have tried doing this with the docker command and it works fine. However, when I do this in a docker-compose file it seems to do nothing; maybe I'm not using compose right.
Docker
docker run --security-opt=seccomp:unconfined <id> dlv debug --listen=:2345 --headless --log ./cmd/main.go
Docker-compose
Setup
docker-compose.yml
networks:
  backend:
services:
  example:
    build: .
    security_opt:
      - seccomp:unconfined
    networks:
      - backend
    ports:
      - "5002:5002"
Dockerfile
FROM golang:1.8
RUN go get -u github.com/derekparker/delve/cmd/dlv
RUN dlv debug --listen=:2345 --headless --log ./cmd/main.go
command
docker-compose -f docker-compose.yml up --build --abort-on-container-exit
Result
2017/09/04 15:58:33 server.go:73: Using API v1
2017/09/04 15:58:33 debugger.go:97: launching process with args: [/go/src/debug]
could not launch process: fork/exec /go/src/debug: operation not permitted
The compose syntax is correct. But the security_opt will be applied to the new instance of the container, and thus it is not available at build time the way you are trying to use it with the Dockerfile RUN command.
The correct way should be:
Dockerfile:
FROM golang:1.8
RUN go get -u github.com/derekparker/delve/cmd/dlv
docker-compose.yml
networks:
  backend:
services:
  example:
    build: .
    security_opt:
      - seccomp:unconfined
    networks:
      - backend
    ports:
      - "5002:5002"
    entrypoint: ['/usr/local/bin/dlv', '--listen=:2345', '--headless=true', '--api-version=2', 'debug', './cmd/main.go']
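Note that dlv listens on port 2345 inside the container here, while the compose file only publishes 5002; to attach from the host you would also need to publish the debugger port and connect with the dlv client. A sketch (port taken from the --listen flag above):
    ports:
      - "5002:5002"
      - "2345:2345"
and then, from the host:
dlv connect 127.0.0.1:2345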
