SQL Server Containerization in Azure CLI - docker

I would like to create a SQL Server container in Azure CLI with the following statement:
az container create --resource-group rgtest \> --name db_test > --image mcr.microsoft.com/mssql/server:2017-CU16-ubuntu > --environment-variables ACCEPT_EULA=Y MSSQL_SA_PASSWORD= password_test > --dns-name-label dns_test > --cpu 2 > --memory 2 > --port 1433
I was expecting a JSON output that contains all the details and properties of the container, but unfortunately I am not getting anything returned. Am I doing anything wrong?

There are some formatting mistakes in the command; try it like this:
az container create --resource-group rgtest \
  --name dbtest \
  --image mcr.microsoft.com/mssql/server:2017-CU16-ubuntu \
  --environment-variables ACCEPT_EULA=Y MSSQL_SA_PASSWORD=password_test \
  --dns-name-label dbtest \
  --cpu 2 \
  --memory 2 \
  --port 1433
More reference: az container create

I found the solution. For MSSQL_SA_PASSWORD, I had included some special characters (symbols) to make the password strong. Because of these special characters, the command was not working. Once I removed them, the command worked like a champ.
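As a side note, you don't necessarily have to drop the special characters: single-quoting the value keeps the shell from interpreting them before they reach the CLI. A minimal sketch (the password value here is made up):

```shell
# Single quotes stop the shell from expanding $, ! and other metacharacters,
# so a strong password reaches the command intact.
MSSQL_SA_PASSWORD='P@ssw0rd$123!'
echo "$MSSQL_SA_PASSWORD"
```

The same quoting applies when passing the value with --environment-variables.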


Accessing GitLab CI Service from A Container running Inside DinD

I'm trying to run a continuous integration in GitLab CI consisting of:
build the docker image
run tests
push the docker image to a registry
Those run inside one job. I could do all of it without any problem until a test came up that needs to communicate with a database. My container can't communicate with the Postgres service defined.
I've reproduced it in a public repository with a simple ping script:
image: docker:stable
services:
  - docker:dind
  - postgres:latest
job1:
  script:
    - ping postgres -c 5
    - docker run --rm --network="host" alpine:latest sh -c "ping postgres -c 5"
The first script runs without any problem, but the second one fails with the error
ping: bad address 'postgres'
How can I access the service?
Or should I run the test in a different job?
The solution is to use --add-host=postgres:$POSTGRES_IP to pass in the IP address known to the job container.
To find out the postgres IP as seen by the outer (job) container, you can use, for example, getent hosts postgres | awk '{ print $1 }'
So the yml would look like:
image: docker:stable
services:
  - docker:dind
  - postgres:latest
job1:
  script:
    - ping postgres -c 5
    - docker run --rm --add-host=postgres:$(getent hosts postgres | awk '{ print $1 }') alpine:latest sh -c "ping postgres -c 5"
To understand why the other, more common ways of connecting containers won't work in this case, we have to remember that we are trying to link a nested container with a service linked to its "parent". Something like this:
gitlab ci runner --> docker -> my-container (alpine)
                           -> docker:dind
                           -> postgres
So we are trying to connect a container with its "uncle", i.e. connecting nested containers.
As noted by @tbo, using --network host will not work. This is probably because GitLab CI uses --link (as explained here) to connect containers instead of the newer --network. The way --link works means the service containers are connected to the job container, but not to one another, so using the host network won't make the nested container inherit the postgres hostname.
One could also think that using --link postgres:postgres would work, but it won't either: in this environment postgres is only a hostname resolving to the IP of a container on the outside. There is no container here that could be linked with the nested container.
So all we can do is manually add a host entry with the correct IP to the nested container using --add-host as explained above.
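The getent hosts output is "IP address followed by hostname(s)", so the awk '{ print $1 }' part simply picks the first whitespace-separated field. You can check the extraction locally without Docker (the IP here is a made-up example):

```shell
# Simulated `getent hosts postgres` output: "<ip>  <hostname>"
line="172.17.0.3      postgres"
# awk splits on whitespace; $1 is the first field, i.e. the IP
POSTGRES_IP=$(echo "$line" | awk '{ print $1 }')
echo "$POSTGRES_IP"
```

That extracted value is exactly what gets substituted into --add-host=postgres:... in the job above.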

Cannot connect Spring Boot application to Docker Mysql container - unknown database

In Docker I created a network:
docker network create mysql-network
Then I created a MySQL container:
docker container run -d -p 3306:3306 --net=mysql-network --name mysql-hibernate -e MYSQL_ROOT_PASSWORD=password -e MYSQL_DATABASE=test -v hibernate:/var/lib/mysql mysql
When I run docker ps everything seems OK.
This is my application.properties:
spring.jpa.hibernate.ddl-auto=create
useSSL=false
spring.datasource.url=jdbc:mysql://localhost:3306/test
spring.datasource.username=root
spring.datasource.password=password
spring.jpa.properties.hibernate.dialect = org.hibernate.dialect.MySQL57Dialect
I also tried
spring.datasource.url=jdbc:mysql://mysql-hibernate:3306/test
But I will always get an error on startup
com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Unknown database 'test'
How is it possible that it doesn't know the database 'test'? I specified the name in Docker like this: -e MYSQL_DATABASE=test
What am I missing ?
I know it is a bit late, but I'll answer anyway so people coming here can benefit ;)
Your configuration overall seems alright. When you get an error like this, you can append the createDatabaseIfNotExist flag, set to true, to the datasource URL in your application.properties.
So you will end up with something like this:
spring.datasource.url=jdbc:mysql://localhost:3306/test?createDatabaseIfNotExist=true
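For completeness, which hostname works depends on where the Spring Boot app itself runs; a sketch of both variants (hostnames taken from the docker commands above, the useSSL parameter moved into the URL where it belongs):

```properties
# App running on the host, MySQL port published with -p 3306:3306
spring.datasource.url=jdbc:mysql://localhost:3306/test?createDatabaseIfNotExist=true&useSSL=false
# App running as a container attached to the same mysql-network
# spring.datasource.url=jdbc:mysql://mysql-hibernate:3306/test?createDatabaseIfNotExist=true&useSSL=false
```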
Hope this helps!

Connect Kibana container with Elasticsearch

I have a VM which contains Docker and Elasticsearch (OS: CentOS 7). I would like to create a Kibana container and connect it with my ES.
The ES contains indices; if I type curl -s http://localhost:9200/_cat/indices I get the list of indices.
I used a Dockerfile to create my Kibana image:
docker build -t="kibana_test" .
docker run --name kibana \
  -e ELASTICSEARCH_URL=http://#IP:9200 \
  -e XPACK_SECURITY_ENABLED=false \
  -p 5600:5601 -d kibana_test
Well, if I put the IP address of my machine, I get this:
plugin:elasticsearch#6.2.4 Request Timeout after 3000ms
And in my Docker logs I get this message:
License information from the X-Pack plugin could not be obtained from
Elasticsearch for the [data] cluster
How can I resolve this problem?
Thanks in advance!
Configure the following in your elasticsearch.yml file:
network.host: 0.0.0.0
transport.host: localhost
transport.tcp.port: 9300
Then restart the elasticsearch service first.
When you build the Kibana container, use this:
-e ELASTICSEARCH_URL=http://172.17.0.1:9200
and check again.
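An alternative to pointing Kibana at the docker0 bridge IP (172.17.0.1) is to run Elasticsearch in a container as well and put both on one Docker network; Kibana can then reach ES by service name. A hedged compose sketch (the 6.2.4 tag is taken from the error message above; adjust to your version):

```yaml
version: '2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.2.4
    ports:
      - "9200:9200"
  kibana:
    image: docker.elastic.co/kibana/kibana:6.2.4
    environment:
      # Service name resolves via the compose network
      - ELASTICSEARCH_URL=http://elasticsearch:9200
    ports:
      - "5601:5601"
```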

How to modify the password of elasticsearch in docker

I want to change the password of the container created from the elasticsearch image. I have executed the following command:
setup-passwords auto
but it didn't work:
unexpected response code [403] from GET http://172.17.0.2:9200/_xpack/security/_authenticate?pretty
Please help me. Thank you.
When using docker it is usually best to configure services via environment variables. To set a password for the elasticsearch service you can run the container using the env variable ELASTIC_PASSWORD:
docker run -e ELASTIC_PASSWORD=`openssl rand -base64 12` -p 9200:9200 --rm --name elastic docker.elastic.co/elasticsearch/elasticsearch-platinum:6.2.4
openssl rand -base64 12 generates a random value for the password
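If you want to sanity-check what that command produces: 12 random bytes base64-encode to exactly 16 characters, which is a reasonable password length:

```shell
# 12 bytes -> ceil(12/3) * 4 = 16 base64 characters, no padding
PW=$(openssl rand -base64 12)
echo "${#PW}"
```

You would normally capture that value somewhere, since it is only printed once when the container starts.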

How to use SERVICE_CHECK_HTTP with progrium/consul check-http script?

I am running the progrium/consul container with the gliderlabs/registrator container. I would like to automatically create health checks for any container that is registered to consul with the registrator, so that Consul health checks can tell me whether any container has stopped running. I have read that there is a way to do this by adding environment variables, but everything I have read has been far too vague, such as the post below:
how to define HTTP health check in a consul container for a service on the same host?
So I am supposed to set some environmental variables:
ENV SERVICE_CHECK_HTTP=/howareyou
ENV SERVICE_CHECK_INTERVAL=5s
Do I set them inside my progrium/consul container or my gliderlabs/registrator container? Would I set them by just adding the following flags to my docker run command, like this?
docker run ...... -e SERVICE_CHECK_HTTP=howareyou -e SERVICE_CHECK_INTERVAL=5s ......
Note: for some reason, adding the above environment variables to the docker run command of my registrator just caused consul to think my nodes are failing from no acks received.
I got Consul Health Checks and Gliderlabs Registrator working in three ways with my Spring Boot apps:
Put the environment variables in the Dockerfile with ENV or LABEL
Put the environment variables using -e with docker run
Put the environment variables into docker-compose.yml under "environment" or "labels"
Dockerfile
In your Dockerfile-file:
ENV SERVICE_NAME MyApp
ENV SERVICE_8080_CHECK_HTTP /health
ENV SERVICE_8080_CHECK_INTERVAL 60s
The /health endpoint here is coming from the Spring Boot Actuator lib that I simply put in my pom.xml file in my Spring Boot application. You can however use any other endpoint as well.
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
docker run
docker run -d -e "SERVICE_NAME=myapp" -e "SERVICE_8080_CHECK_HTTP=/health" -e "SERVICE_8080_CHECK_INTERVAL=10s" -p 8080:8080 --name MyApp myapp
Make sure that you are using the correct HTTP server port and that it is accessible. In my case, Spring Boot uses 8080 by default.
Docker Compose
Add the health check information under either the "environment" or "labels" properties:
myapp:
  image: apps/myapp
  restart: always
  environment:
    - SERVICE_NAME=MyApp
    - SERVICE_8080_CHECK_HTTP=/health
    - SERVICE_8080_CHECK_INTERVAL=60s
  ports:
    - "8080:8080"
Starting Consul Server
docker run -d -p "8500:8500" -h "consul" --name consul gliderlabs/consul-server -server -bootstrap
The "gliderlabs/consul-server" image activates the Consul UI by default. So you don't have to specify any other parameters.
Then start Registrator
docker run -d \
  --name=registrator \
  -h $(docker-machine ip dockervm) \
  -v=/var/run/docker.sock:/tmp/docker.sock \
  gliderlabs/registrator:v6 -resync 120 -deregister on-success \
  consul://$(docker-machine ip dockervm):8500
The -resync and -deregister parameters ensure that Consul and Registrator stay in sync.
