I'm using Rancher on top of Kubernetes to create our test/dev environment. First of all, it's a great tool and I'm amazed at how much it simplifies the management of such environments.
That said, I have an issue (which is probably more a gap in my Rancher knowledge). I'm trying to automate the deployment via Jenkins, and as we will have several stacks in our test environment, I want to dynamically update the load balancer instances to add routes for new environments from Jenkins with the Rancher CLI.
At the moment, I'm just trying to run this command:
rancher --url http://myrancher_server:8080 --access-key <key> --secret-key <secret> --env dev-test stack create kubernetes-ingress-lbs -r loadbalancer-rancher-service.yml
My docker-compose.yml file looks like this:
version: '2'
services:
  frontend:
    image: 172.19.51.97:5000/frontend
  dev-test-lb:
    image: rancher/load-balancer-service
    ports:
      - 82:8086
    links:
      - frontend:frontend
My rancher-compose.yml file is like this:
version: '2'
services:
  dev-test-lb:
    scale: 4
    lb_config:
      port_rules:
        - source_port: 82
          path: /products
          target_port: 8086
          service: products
        - source_port: 82
          path: /
          target_port: 4201
          service: frontend
    health_check:
      port: 42
      interval: 2000
      unhealthy_threshold: 3
      healthy_threshold: 2
      response_timeout: 2000
Now when I execute this I get the following response:
Bad response statusCode [422]. Status [422 status code 422]. Body: [code=NotUnique, fieldName=name, baseType=error] from [http://myrancher_server:8080/v2-beta/projects/1a21/stacks]
Obviously I can't edit an existing stack with a service that already exists. Do you know if it's best practice to do it like that? I checked the manual, and I only see the "create" action on "rancher stack", so I'm wondering if we can update?
My rancher server is v1.5.10 and all my rancher agents and Kubernetes drivers are up-to-date.
Thanks a lot for your help fellows :)
OK, just for information, I found that this is possible via the Rancher REST API.
Check the following link: http://docs.rancher.com/rancher/v1.2/en/api/v2-beta/api-resources/service/
I didn't find that at first because my Googling was all about the Rancher CLI. But as the CLI is still in beta, it can't do everything that the REST API can.
Basically, just send a resource update request:
PUT rancherserver/v2-beta/projects/1a12/services/
{
  "description": "Loadbalancer for our test env",
  "lbConfig": {
    "portRules": [
      {
        "hostname": "",
        "protocol": "http",
        "sourcePort": "80",
        "targetPort": "4200",
        "path": "/"
      }
    ]
  },
  "name": "kubernetes-ingress-lbs"
}
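For reference, here is a minimal sketch of sending that update with curl. The service ID placeholder and the lb-update.json file name are assumptions for illustration; the API access/secret key pair is used as basic-auth credentials:
# look up the service ID first, e.g. GET /v2-beta/projects/1a12/services?name=kubernetes-ingress-lbs
curl -u "<access-key>:<secret-key>" \
     -X PUT \
     -H 'Content-Type: application/json' \
     -d @lb-update.json \
     "http://myrancher_server:8080/v2-beta/projects/1a12/services/<service-id>"
Here lb-update.json contains the JSON body shown above.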
Related
I want to run a Neo4j instance through Docker using docker-compose.
docker-compose.yml
version: '3'
services:
  neo4j:
    container_name: neo4j-lab
    image: neo4j:latest
    environment:
      - NEO4J_dbms_memory_pagecache_size=2G
      - NEO4J_dbms_memory_heap_maxSize=4G
      - NEO4J_dbms_memory_heap_initialSize=512M
      - NEO4J_AUTH=neo4j/changeme
    ports:
      - 7474:7474
      - 7687:7687
    volumes:
      - neo4j_data:/data
      - neo4j_conf:/conf
      - ./import:/import

volumes:
  neo4j_data:
  neo4j_conf:
Running this with docker-compose up works fine, and I can reach the login screen.
But when I set the credentials, I get the following error in my container logs: Neo.ClientError.Security.Unauthorized The client is unauthorized due to authentication failure. This happens even though I am sure I am entering the right credentials (the ones used in my docker-compose file).
Furthermore:
when I set NEO4J_AUTH to none, no credentials are asked for;
when I set it to neo4j/neo4j, it says that I can't use the default password.
According to the documentation, this should be perfectly fine:
By default Neo4j requires authentication and requires you to login with neo4j/neo4j at the first connection and set a new password. You can set the password for the Docker container directly by specifying --env NEO4J_AUTH=neo4j/password in your run directive. Alternatively, you can disable authentication by specifying --env NEO4J_AUTH=none instead.
Do you have any idea what's going on?
Hope you can help me solve this!
EDIT
Docker logs output :
neo4j-lab | 2019-03-13 23:02:32.378+0000 INFO Starting...
neo4j-lab | 2019-03-13 23:02:37.796+0000 INFO Bolt enabled on 0.0.0.0:7687.
neo4j-lab | 2019-03-13 23:02:41.102+0000 INFO Started.
neo4j-lab | 2019-03-13 23:02:43.935+0000 INFO Remote interface available at http://localhost:7474/
neo4j-lab | 2019-03-13 23:02:56.105+0000 WARN The client is unauthorized due to authentication failure.
EDIT 2:
It seems that deleting the associated volume first works. The password is now changed.
However, if I docker-compose down then docker-compose up after changing the password in my docker-compose file, the issue reappears.
So I think that when we change the password through docker-compose more than once while a volume exists, we need to remove the auth file present in the volume.
To do that :
docker volume inspect <volume_name>
You should get something like this:
[
    {
        "CreatedAt": "2019-03-14T11:17:08+01:00",
        "Driver": "local",
        "Labels": {
            "com.docker.compose.project": "neo4j",
            "com.docker.compose.volume": "neo4j_data"
        },
        "Mountpoint": "/data/docker/volumes/neo4j_neo4j_data/_data",
        "Name": "neo4j_neo4j_data",
        "Options": null,
        "Scope": "local"
    }
]
This will obviously look different if you named your container and your volumes differently from mine (neo4j, neo4j_data).
The important part is the Mountpoint which locates the volume.
In this volume, you can delete the auth file, which is in the dbms directory.
Then restart your container and everything should be fine.
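Putting it together, a minimal shell sketch of the fix described above (the mountpoint comes from the inspect output shown earlier; the dbms/auth location assumes the default Neo4j 3.x data layout):
docker-compose down
docker volume inspect neo4j_neo4j_data                          # find the Mountpoint
sudo rm /data/docker/volumes/neo4j_neo4j_data/_data/dbms/auth   # delete the stored credentials
docker-compose up -d                                            # NEO4J_AUTH is applied again on startup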
Neo4j docker developer here.
The reason this is happening is that the NEO4J_AUTH environment variable doesn't set the database password, it sets the INITIAL password only.
If you're mounting a data volume with an existing database inside, then NEO4J_AUTH has no effect because that database already has a password. It sounds like that's what you're experiencing here.
The documentation around this feature was not great and I've updated it! See: Neo4j docker authentication documentation
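If you would rather change the password of an existing database than wipe the volume, one option is to call the password-change procedure through cypher-shell inside the running container. This is only a sketch: it assumes a Neo4j 3.x image, the container name and old password from the compose file above, and a hypothetical new password:
docker exec -it neo4j-lab \
  cypher-shell -u neo4j -p changeme \
  "CALL dbms.security.changePassword('my_new_password');"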
define Neo4j password with docker-compose
neo4j:
  image: 'neo4j:4.1'
  environment:
    NEO4J_AUTH: 'neo4j/your_password'
  ports:
    - "7474:7474"
  volumes:
    ...
I have a Spring Boot 2.x project using Mongo. I am running this via Docker (using Compose locally) and Kubernetes. I am trying to connect my service to a Mongo server. This is confusing to me: for development I am using a local instance of Mongo, but when deployed in GCP I have named mongo services.
Here is my application.properties file:
#mongodb
spring.data.mongodb.uri= mongodb://mongo-serviceone:27017/serviceone
#logging
logging.level.org.springframework.data=trace
logging.level.=trace
And my docker-compose.yml:
version: '3'
# Define the services/containers to be run
services:
  service:                 # name of your service
    build: ./              # specify the directory of the Dockerfile
    ports:
      - "3009:3009"        # specify ports forwarding
    links:
      - mongo-serviceone   # link this service to the database service
    volumes:
      - .:/usr/src/app
    depends_on:
      - mongo-serviceone
  mongo-serviceone:        # name of the service
    image: mongo
    volumes:
      - ./data:/data/db
    ports:
      - "27017:27017"
When I run docker-compose up, I get the following error:
mongo-serviceone_1 | 2018-08-22T13:50:33.454+0000 I NETWORK  [initandlisten] waiting for connections on port 27017
service_1          | 2018-08-22 13:50:33.526  INFO 1 --- [localhost:27017] org.mongodb.driver.cluster : Exception in monitor thread while connecting to server localhost:27017
service_1          | com.mongodb.MongoSocketOpenException: Exception opening socket
service_1          |     at com.mongodb.connection.SocketStream.open(SocketStream.java:62) ~[mongodb-driver-core-3.6.3.jar!/:na]
running docker ps shows me:
692ebb72cf30 serviceone_service "java -Djava.securit…" About an hour ago Up 9 minutes 0.0.0.0:3009->3009/tcp, 8080/tcp serviceone_service_1
6cd55ae7bb77 mongo "docker-entrypoint.s…" About an hour ago Up 9 minutes 0.0.0.0:27017->27017/tcp serviceone_mongo-serviceone_1
While I am trying to connect to a local mongo, I thought that by using the name "mongo-serviceone" as the host in the URI the service would reach the mongo container, yet the log shows it is still trying to connect to localhost:27017.
Hard to tell what the exact issue is, but maybe this is just an issue because of the space " " after "spring.data.mongodb.uri=" and before "mongodb://mongo-serviceone:27017/serviceone"?
If not, maybe exec into the "service" container and check that the mongodb service is reachable, e.g. with ping mongo-serviceone or nc -zv mongo-serviceone 27017.
Let me know the output of this, so I can help you analyze and fix this issue.
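For example, a quick connectivity check from inside the app container could look like this (a sketch; it assumes ping and nc are available in the image and that the compose service and container names above are unchanged):
# open a shell in the app container
docker exec -it serviceone_service_1 sh
# inside the container: does the service name resolve, and is the port open?
ping -c 3 mongo-serviceone
nc -zv mongo-serviceone 27017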
Alternatively, you could switch from docker-compose to a Kubernetes-native dev tool, as you are planning to run your application on Kubernetes anyway. Here is a list of possible tools:
Allow hot reloading:
DevSpace: https://github.com/covexo/devspace
ksync: https://github.com/vapor-ware/ksync
Pure CI/CD tools for dev:
Skaffold: https://github.com/GoogleContainerTools/skaffold
Draft: https://github.com/Azure/draft
For most of them, you will only need minikube or a dev namespace inside your existing cluster on GCP.
Looks like another application was running on port 27017 on your localhost (similar reported issue).
A quick way to check on Linux/Mac:
telnet 127.0.0.1 27017
Check the log files:
docker logs serviceone_service
I couldn't find anything this specific on the internet, so I kindly ask for your help with this one :)
Context
I have defined a podTemplate with a few containers, by using the containerTemplate methods:
ubuntu:trusty (14.04 LTS)
postgres:9.6
and finally, wurstmeister/kafka:latest
Doing some Groovy coding in Pipeline, I install several dependencies into my ubuntu:trusty container, such as the latest Git, Golang 1.9, etc., and I also check out my project from GitHub.
After all those dependencies are dealt with, I manage to compile, run migrations (which means Postgres is up and running and my app is connected to it), and spin up my app just fine, until it complains that Kafka is not running because it couldn't connect to any broker.
Debugging sessions
After some debug sessions I have ps aux'ed each and every container to make sure all the services I needed were running in their respective containers, such as:
container('postgres') {
    sh 'ps aux'              // Shows Postgres, as expected
}
container('linux') {
    sh 'ps aux | grep post'  // Does not show Postgres, as expected
    sh 'ps aux | grep kafka' // Does not show Kafka, as expected
}
container('kafka') {
    sh 'ps aux'              // Does NOT show any Kafka running
}
I have also set the KAFKA_ADVERTISED_HOST_NAME var to 127.0.0.1 as explained in the image docs, without success, with the following code:
containerTemplate(
    name: 'kafka',
    image: 'wurstmeister/kafka:latest',
    ttyEnabled: true,
    command: 'cat',
    envVars: [
        envVar(key: 'KAFKA_ADVERTISED_HOST_NAME', value: '127.0.0.1'),
        envVar(key: 'KAFKA_AUTO_CREATE_TOPICS_ENABLE', value: 'true'),
    ]
)
Questions
The image documentation at https://hub.docker.com/r/wurstmeister/kafka/ is explicit about starting a Kafka cluster with docker-compose up -d.
1) How do I actually do that with this Kubernetes plugin + Docker + Groovy + Pipeline combo in Jenkins?
2) Do I actually need to do that? The Postgres image docs (https://hub.docker.com/_/postgres/) also mention running the instance with docker run, but I didn't need to do that at all, which makes me think that containerTemplate is probably doing it automatically. So why is it not doing this for the Kafka container?
Thanks!
So... the problem is with this image and the way Kubernetes works with it.
Kafka does not start because you override the image's CMD with command: 'cat', which causes start-kafka.sh to never run.
Because of the above, I suggest using a different image. The template below worked for me.
containerTemplate(
    name: 'kafka',
    image: 'quay.io/jamftest/now-kafka-all-in-one:1.1.0.B',
    resourceRequestMemory: '500Mi',
    ttyEnabled: true,
    ports: [
        portMapping(name: 'zookeeper', containerPort: 2181, hostPort: 2181),
        portMapping(name: 'kafka', containerPort: 9092, hostPort: 9092)
    ],
    command: 'supervisord -n',
    envVars: [
        containerEnvVar(key: 'ADVERTISED_HOST', value: 'localhost')
    ]
),
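Once the pod is up, a quick way to confirm the broker is actually listening (a sketch; it assumes nc is present in the container and the ports above are unchanged) is to run something like this via sh steps inside container('kafka'):
nc -z localhost 2181 && echo "zookeeper is listening"
nc -z localhost 9092 && echo "kafka broker is listening"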
The Problem
When I deploy 4 peer nodes with PBFT or NOOPS in the cloud, the user certificate attributes are not seen. The values are blank.
Observations
Everything works locally. This suggests that I am calling the API correctly, and the chaincode is accessing attributes correctly.
When I attach to the membership container, I see the correct membersrvc.yaml with aca.enabled set to true. This is the same yaml that works locally. For good measure, I'm also passing the ENV variable MEMBERSRVC_CA_ACA_ENABLED=true.
I can see the attributes for the users in the membership service's ACA database. (suggesting that the users were created with attributes)
When I look at the actual certificate from the log (Bytes to Hex then Base64 decode) I see the attributes. (Appending certificate [30 82 02 dd 30 8....)
All attributes are blank when deployed. No errors.
Membership Service Logs
I enabled debug logging and can see that the membership service thinks ACA is enabled:
19:57:46.421 [server] main -> DEBU 049 ACA was enabled [aca.enabled == true]
19:57:46.421 [aca] Start -> INFO 04a Staring ACA services...
19:57:46.421 [aca] startACAP -> INFO 04b ACA PUBLIC gRPC API server started
19:57:46.421 [aca] Start -> INFO 04c ACA services started
This looks good. What am I missing?
Guess
Could it be that the underlying docker container the chaincode deploys into doesn't have security enabled? Does it use the ENV passed to the parent peer? One difference is that locally I'm using "dev mode" without the base-image shenanigans.
Membership Service
membersrvc:
  container_name: membersrvc
  image: hyperledger/fabric-membersrvc
  volumes:
    - /home/ec2-user/membership:/user/membership
    - /var/hyperledger:/var/hyperledger
  command: sh -c "cp /user/membership/membersrvc.yaml /opt/gopath/src/github.com/hyperledger/fabric/membersrvc && membersrvc"
  restart: unless-stopped
  environment:
    - MEMBERSRVC_CA_ACA_ENABLED=true
  ports:
    - 7054:7054
Root Peer Service
rootpeer:
  container_name: root-peer
  image: hyperledger/fabric-peer
  restart: unless-stopped
  environment:
    - CORE_VM_ENDPOINT=unix:///var/run/docker.sock
    - CORE_LOGGING_LEVEL=DEBUG
    - CORE_PEER_ID=vp1
    - CORE_SECURITY_ENROLLID=vp1
    - CORE_SECURITY_ENROLLSECRET=xxxxxxxx
    - CORE_SECURITY_ENABLED=true
    - CORE_SECURITY_ATTRIBUTES_ENABLED=true
    - CORE_PEER_PKI_ECA_PADDR=members.x.net:7054
    - CORE_PEER_PKI_TCA_PADDR=members.x.net:7054
    - CORE_PEER_PKI_TLSCA_PADDR=members.x.net:7054
    - CORE_PEER_VALIDATOR_CONSENSUS_PLUGIN=NOOPS
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - /var/hyperledger:/var/hyperledger
  command: sh -c "peer node start"
  ports:
    - 7051:7051
    - 7050:7050
Here's the request:
{
  "jsonrpc": "2.0",
  "method": "query",
  "params": {
    "chaincodeID": {
      "name": "659cb5dcc3063054e4c90908050eebf68eb2bd193cc1520f1f2d198f0ff42268"
    },
    "ctorMsg": {
      "args": ["get_results", "{\"Id\":\"abc123\"}"]
    },
    "secureContext": "user123",
    "attributes": ["account_id", "role"]
  },
  "id": 2
}
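For completeness, this is roughly how such a query is sent to the peer's REST endpoint (a sketch; the /chaincode path and port 7050 follow the 0.6-era REST API, the peer hostname is a placeholder, and request.json is assumed to hold the body above):
curl -X POST http://<peer-host>:7050/chaincode \
     -H 'Content-Type: application/json' \
     -d @request.json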
Edited: I previously thought this was just PBFT... but it's also happening with NOOPS in the cloud. I reduced the example to NOOPS.
My problem was that the fabric version inside the fabric-baseimage Docker container was a good bit newer. This was my fault, because I populated that image with the fabric version manually.
Background
If you are using non-Vagrant with 0.6 and not in DEV mode, deploying chaincode will fail with a "cannot find :latest tag" error. To solve this, I pulled a fabric-baseimage version and populated it with what I needed, including a git clone of fabric. I should have pulled the 0.6 branch, but instead it was pulling master.
So essentially, my fabric-peer, node-sdk deployer, and baseimage were using slightly different hyperledger versions.
After about 48 hours of configuration hell, I think I have it straightened out by sending everything back to 0.6. I have terraform spinning everything up successfully now.
I do wish the documentation included something about deploying in a non-dev multi-node environment.
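For anyone hitting the same "cannot find :latest tag" problem, the pull-and-retag step mentioned above looks roughly like this. This is only a sketch: the exact image tags (x86_64-0.2.0 for the baseimage, x86_64-0.6.1-preview for peer and membersrvc) are assumptions, so check which tags match your 0.6 deployment:
# pull a 0.6-era baseimage and retag it as :latest so chaincode deployment can find it
docker pull hyperledger/fabric-baseimage:x86_64-0.2.0
docker tag hyperledger/fabric-baseimage:x86_64-0.2.0 hyperledger/fabric-baseimage:latest
# keep the peer and membership service on matching 0.6 images as well
docker pull hyperledger/fabric-peer:x86_64-0.6.1-preview
docker pull hyperledger/fabric-membersrvc:x86_64-0.6.1-preview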
I'm now using docker-compose for all of my projects. Very convenient. Much more comfortable than manual linking through several docker commands.
There is something that is not clear to me yet though: the logic behind the linking environment variables.
E.g. with this docker-compose.yml:
mongodb:
  image: mongo
  command: "--smallfiles --logpath=/dev/null"
web:
  build: .
  command: npm start
  volumes:
    - .:/myapp
  ports:
    - "3001:3000"
  links:
    - mongodb
  environment:
    PORT: 3000
    NODE_ENV: 'development'
In the node app, I need to retrieve the mongodb URL. And if I console.log(process.env), I get so many things that it feels very random (I have kept only the docker-compose-related ones):
MONGODB_PORT_27017_TCP: 'tcp://172.17.0.2:27017',
MYAPP_MONGODB_1_PORT_27017_TCP_PORT: '27017',
MYAPP_MONGODB_1_PORT_27017_TCP_PROTO: 'tcp',
MONGODB_ENV_MONGO_VERSION: '3.2.6',
MONGODB_1_ENV_GOSU_VERSION: '1.7',
'MYAPP_MONGODB_1_ENV_affinity:container': '=d5c9ebd7766dc954c412accec5ae334bfbe836c0ad0f430929c28d4cda1bcc0e',
MYAPP_MONGODB_1_ENV_GPG_KEYS: 'DFFA3DCF326E302C4787673A01C4E7FAAAB2461C \t42F3E95A2C4F08279C4960ADD68FA50FEA312927',
MYAPP_MONGODB_1_PORT_27017_TCP: 'tcp://172.17.0.2:27017',
MONGODB_1_PORT: 'tcp://172.17.0.2:27017',
MYAPP_MONGODB_1_ENV_MONGO_VERSION: '3.2.6',
MONGODB_1_ENV_MONGO_MAJOR: '3.2',
MONGODB_ENV_GOSU_VERSION: '1.7',
MONGODB_1_PORT_27017_TCP_ADDR: '172.17.0.2',
MONGODB_1_NAME: '/myapp_web_1/mongodb_1',
MONGODB_1_PORT_27017_TCP_PORT: '27017',
MONGODB_1_PORT_27017_TCP_PROTO: 'tcp',
'MONGODB_1_ENV_affinity:container': '=d5c9ebd7766dc954c412accec5ae334bfbe836c0ad0f430929c28d4cda1bcc0e',
MONGODB_PORT: 'tcp://172.17.0.2:27017',
MONGODB_1_ENV_GPG_KEYS: 'DFFA3DCF326E302C4787673A01C4E7FAAAB2461C \t42F3E95A2C4F08279C4960ADD68FA50FEA312927',
MYAPP_MONGODB_1_ENV_GOSU_VERSION: '1.7',
MONGODB_ENV_MONGO_MAJOR: '3.2',
MONGODB_PORT_27017_TCP_ADDR: '172.17.0.2',
MONGODB_NAME: '/myapp_web_1/mongodb',
MONGODB_1_PORT_27017_TCP: 'tcp://172.17.0.2:27017',
MONGODB_PORT_27017_TCP_PORT: '27017',
MONGODB_1_ENV_MONGO_VERSION: '3.2.6',
MONGODB_PORT_27017_TCP_PROTO: 'tcp',
MYAPP_MONGODB_1_PORT: 'tcp://172.17.0.2:27017',
'MONGODB_ENV_affinity:container': '=d5c9ebd7766dc954c412accec5ae334bfbe836c0ad0f430929c28d4cda1bcc0e',
MYAPP_MONGODB_1_ENV_MONGO_MAJOR: '3.2',
MONGODB_ENV_GPG_KEYS: 'DFFA3DCF326E302C4787673A01C4E7FAAAB2461C \t42F3E95A2C4F08279C4960ADD68FA50FEA312927',
MYAPP_MONGODB_1_PORT_27017_TCP_ADDR: '172.17.0.2',
MYAPP_MONGODB_1_NAME: '/myapp_web_1/novatube_mongodb_1',
I don't know which one to pick, and why are there so many entries? Is it better to use the general ones or the MYAPP-prefixed ones? Where does the MYAPP name come from? The folder name?
Could someone clarify this?
Wouldn't it be easier to let the user define the ones he needs in the docker-compose.yml file with a custom mapping? Like:
links:
  - mongodb:
    - MONGOIP: IP
    - MONGOPORT: PORT
What I'm saying might not make sense though. :-)
Environment variables are a legacy way of defining links between containers. If you are using a newer version of Compose, you don't need the links declaration at all. Connecting to mongodb from your app container will work fine by just using the name of the service (mongodb) as a hostname, without any links defined in the compose file; Docker's built-in DNS resolution handles it (check /etc/hosts, there's nothing in there either!).
In answer to your question about the MYAPP prefix: you're right, Compose prefixes the service name with the name of the folder (or 'project', in Compose nomenclature). It does the same thing when creating custom networks and volumes.
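As an illustration, a minimal version of the compose file above without links (a sketch; the version 2 file format, the MONGO_URL variable name and the database name myapp are assumptions), where the app simply connects to mongodb://mongodb:27017/myapp:
version: '2'
services:
  mongodb:
    image: mongo
    command: "--smallfiles --logpath=/dev/null"
  web:
    build: .
    command: npm start
    ports:
      - "3001:3000"
    environment:
      PORT: 3000
      NODE_ENV: 'development'
      MONGO_URL: 'mongodb://mongodb:27017/myapp'   # the service name resolves via Docker's built-in DNS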