I created a Docker container:
sudo docker pull microsoft/mssql-server-linux:2017-latest
Then I ran it:
sudo docker run -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD=root' -p 1401:1433 \
    --name sqlserver1 -d microsoft/mssql-server-linux:2017-latest
I ran:
docker start sqlserver1
After about 3 seconds, docker ps returns an empty list, which makes me think the container is shutting down.
I'm new to Docker - is this really shutting down automatically? If so, how do I prevent that?
I gave this a shot, and it looks as if your problem is not a Docker problem; it's an MSSQL problem. If you look at your container's logs, you'll see:
ERROR: Unable to set system administrator password: Password validation failed.
The password does not meet SQL Server password policy requirements because it is
too short. The password must be at least 8 characters.
It looks as if MSSQL enforces password complexity requirements, including a minimum length and a mix of character classes. The following seems to work fine:
docker run -it -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD=r00t.R00T' -p 1401:1433 --name sqlserver1 microsoft/mssql-server-linux:2017-latest
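If you hit this again, the early exit can be diagnosed without guessing; a minimal sketch (container name taken from the question, and the password is just an example that satisfies the policy):

```shell
# Re-run with a password that meets the policy (8+ characters,
# mixing upper case, lower case, digits and symbols).
sudo docker run -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD=r00t.R00T' \
    -p 1401:1433 --name sqlserver1 -d microsoft/mssql-server-linux:2017-latest

# Confirm it is still up a few seconds later...
sudo docker ps --filter name=sqlserver1

# ...and if it is not, the reason is in the logs (this works even
# for a stopped container).
sudo docker logs sqlserver1
```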
Related
I am using two different Docker images of Elasticsearch on two different projects:
Project 1: docker.elastic.co/elasticsearch/elasticsearch:6.8.6
Project 2: docker.elastic.co/elasticsearch/elasticsearch:5.6.8
Both work, but I have noticed a weird behaviour: when I start the 6.8.6 container, the other one crashes:
f35d8b319ec0 docker.elastic.co/elasticsearch/elasticsearch:5.6.8 "/bin/bash bin/es-do…" 3 hours ago Exited (137) Less than a second ago
If I do a docker-compose up, Docker tries to restart it, but without success (same message).
If I then do a docker-compose down on the other project, the container with the 5.6.8 version works again:
f35d8b319ec0 docker.elastic.co/elasticsearch/elasticsearch:5.6.8 "/bin/bash bin/es-do…" 3 hours ago Up 12 seconds (healthy) 9300/tcp, 0.0.0.0:9203->9200/tcp
Of course, the two containers forward Elasticsearch's HTTP port to two different host ports, 9203 and 9209.
I found something suspicious while writing this question; both containers use the same transport port:
9300/tcp, 0.0.0.0:9209->9200/tcp
9300/tcp, 0.0.0.0:9203->9200/tcp
Could the problem come from this setting? And how to fix this?
I had this same question, and @Sai Kumar's accepted answer was not working for me initially. I finally stumbled across ElasticSearch on docker - 2nd instance kills the first instance, which prompted me to override the memory defaults, preventing each Elasticsearch instance from gobbling up 2 GB of RAM by default.
Full sequence that worked for me:
docker pull docker.elastic.co/elasticsearch/elasticsearch:6.3.2
docker pull docker.elastic.co/elasticsearch/elasticsearch:6.4.1
docker run -d --restart=always --name elasticsearch1 -p 9201:9200 -p 9301:9300 -e "discovery.type=single-node" -e ES_JAVA_OPTS="-Xms200m -Xmx200m" docker.elastic.co/elasticsearch/elasticsearch:6.3.2
docker run -d --restart=always --name elasticsearch2 -p 9202:9200 -p 9302:9300 -e "discovery.type=single-node" -e ES_JAVA_OPTS="-Xms200m -Xmx200m" docker.elastic.co/elasticsearch/elasticsearch:6.4.1
I'm now able to verify that both containers are running, at http://localhost:9201/ and http://localhost:9202/ respectively.
EDIT: I was able to achieve the same result by simply going into Docker Desktop's Settings > Resources area and increasing the memory (in my case from 2 GB to 4 GB).
So evidently the same symptom originally posted by @Coil above (the endless crash-restart ping-pong between two Elasticsearch container instances) can also be the result of memory settings, rather than of port mappings or other settings in your docker run command.
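Exit status 137 is consistent with that explanation: it is 128 + 9, i.e. the process was killed with SIGKILL, which is what the kernel's OOM killer uses. You can confirm this from the container's recorded state (container ID taken from the docker ps output above):

```shell
# Prints "true (exit 137)" if the kernel OOM-killed the container,
# "false (exit ...)" if it died for some other reason.
docker inspect -f '{{.State.OOMKilled}} (exit {{.State.ExitCode}})' f35d8b319ec0
```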
Change the TCP port bindings in your docker run commands so the two containers map to different host ports:
docker run -d --name elasticsearch1 --net somenetwork -p 9201:9200 -p 9301:9300 -e "discovery.type=single-node" elasticsearch:tag
docker run -d --name elasticsearch2 --net somenetwork -p 9202:9200 -p 9302:9300 -e "discovery.type=single-node" elasticsearch:tag
I am using two images: sheepkiller/kafka-manager (a tool from Yahoo Inc; the image was made by someone with a weird sense of humor, but it has good reviews) and zookeeper.
I start ZooKeeper
docker run -it --restart always -d zookeeper
Then I try to start Kafka Manager:
docker run -it --rm -p 9000:9000 -e ZK_HOSTS="your-zk.domain:2181" -e APPLICATION_SECRET=letmein sheepkiller/kafka-manager
Document says:
(if you don't define ZK_HOSTS, default value has been set to "localhost:2181")
Error:
Initiating client connection, connectString=localhost:2181 sessionTimeout=60000 watcher=org.apache.curator.ConnectionState@7bf272d3
[info] o.a.z.ClientCnxn - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
[info] k.m.a.KafkaManagerActor - zk=localhost:2181
[info] k.m.a.KafkaManagerActor - baseZkPath=/kafka-manager
[warn] o.a.z.ClientCnxn - Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
I am using Docker version 17.12.0-ce, build c97c6d6, on Windows 10. I have tried several different things without success. I assume there is an issue with the ports, the zookeeper config file, or the sheepkiller/kafka-manager Dockerfile, but I am not sure how to change these images after I have already pulled them, if that really is the case.
The following should work fine:
docker network create zookeeper-net
docker run -it --restart always -p 2181:2181 --network zookeeper-net --name zookeeper -d zookeeper
docker run -it --rm -p 9000:9000 -e ZK_HOSTS="zookeeper:2181" -e APPLICATION_SECRET=letmein sheepkiller/kafka-manager
Update:
There is also a compose file to set up everything. I suggest you use that.
docker-compose up -d
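In case that compose file is not to hand, here is a minimal sketch of what it would contain, mirroring the two docker run commands above (the service names and version pinning are my assumptions, not taken from the repository):

```yaml
version: '2'
services:
  zookeeper:
    image: zookeeper
    restart: always
    ports:
      - "2181:2181"
  kafka-manager:
    image: sheepkiller/kafka-manager
    ports:
      - "9000:9000"
    environment:
      # Compose puts both services on one default network, so the
      # hostname "zookeeper" resolves to the ZooKeeper container.
      ZK_HOSTS: "zookeeper:2181"
      APPLICATION_SECRET: letmein
    depends_on:
      - zookeeper
```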
I am trying to create a RabbitMQ Docker container with a default user and password, but when I try to log in to the management plugin, those credentials don't work.
This is how I create the container:
docker run -d -P --hostname rabbit -p 5009:5672 -p 5010:15672 --name rabbitmq -e RABBITMQ_DEFAULT_USER=user -e RABBITMQ_DEFAULT_PASS=pass -v /home/desarrollo/rabbitmq/data:/var/lib/rabbitmq rabbitmq:3.6.10-management
What am I doing wrong?
Thanks in advance
The default user is created only if the database does not exist. Therefore the environment variables have no effect if the volume already exists.
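So there are two ways out, sketched below with the names and paths from the docker run command in the question. Option 1 deletes all existing RabbitMQ data, so only use it on a disposable environment:

```shell
# Option 1: remove the stale data so the env vars take effect on next start.
docker rm -f rabbitmq
sudo rm -rf /home/desarrollo/rabbitmq/data
# ...then repeat the original docker run command.

# Option 2: keep the existing database and add the user to it directly.
docker exec rabbitmq rabbitmqctl add_user user pass
docker exec rabbitmq rabbitmqctl set_user_tags user administrator
docker exec rabbitmq rabbitmqctl set_permissions -p / user ".*" ".*" ".*"
```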
I had the same problem when trying to access in Chrome. Firefox worked fine. The culprit turned out to be a deprecated JS method that was no longer allowed by Chrome.
I'm going to give a simple presentation showing the concept of IDS/IPS.
To deploy it as quickly and comfortably as possible, I tried to take advantage of containers, namely Docker.
So, I chose two docker images - polinux/snorby and million12/mariadb.
I tried following the manual the image maintainers provided.
Finally, I could make the login page show up, but couldn't go further.
I cannot log in; I'm just stuck at this page.
The commands I used are:
docker run -d --name snorby-db -p 3306:3306 --env="MARIADB_USER=snorbyuser" --env="MARIADB_PASS=password" million12/mariadb && \
docker run -d --name snorby -p 80:80 --env="DB_ADDRESS=127.0.0.1:3306" --env="DB_USER=snorbyuser" --env="DB_PASS=password" polinux/snorby -e development -p 80
How can I login and see the logs collected by snort daemon?
At least, where can I have reference to fix this up?
Thank you.
Use this command to connect into a container:
docker exec -it <container name> bash
The default credentials for Snorby:
Username: snorby@snorby.org
Password: snorby
or
Username: snorby@example.com
Password: snorby
When I run a new Docker container (or when I start a stopped one), the system logs me out. When I log in again, the container is up and running and I can use it without any problems.
I am currently using Fedora 24.
Example:
alekos@alekos-pc:~$ docker run -d --name somename -t -p 80:80 someimage
At this point it logs me out
When I log in again I run:
alekos@alekos-pc:~$ docker ps -a
and my "somename" container is running without any further problems.
I can live with these logouts/logins but it is a bit annoying. Does anybody have any idea what is causing the problem?