Is it possible to create a replication controller and service for a containerized application using one configuration file (YAML/JSON)?
Yes: you can put a normal YAML array of objects under the `List` type. A typical example can be found in the main repo, e.g. https://raw.githubusercontent.com/kubernetes/kubernetes/master/hack/testdata/list.yaml
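For example, a single file holding a `List` with both a ReplicationController and a Service might look like this (the names, image, and ports are illustrative):

```yaml
apiVersion: v1
kind: List
items:
- apiVersion: v1
  kind: ReplicationController
  metadata:
    name: my-app
  spec:
    replicas: 3
    selector:
      app: my-app
    template:
      metadata:
        labels:
          app: my-app
      spec:
        containers:
        - name: my-app
          image: my-app:latest
          ports:
          - containerPort: 8080
- apiVersion: v1
  kind: Service
  metadata:
    name: my-app
  spec:
    selector:
      app: my-app
    ports:
    - port: 80
      targetPort: 8080
```

A single `kubectl create -f file.yaml` then creates both objects. Alternatively, you can put multiple documents in one file separated by `---` lines, without the `List` wrapper.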
Specifying custom configuration for load-balanced services is possible through the use of the `@LoadBalancerClient` and `@LoadBalancerClients` annotations, as illustrated here:
https://docs.spring.io/spring-cloud-commons/docs/current/reference/html/#custom-loadbalancer-configuration
How can we specify the same config through Java? We have a case where the number of services can grow dynamically, and we don't want to keep modifying code to add them. Their load balancer configs will remain similar except for the service instances. We are looking to add a generic custom config which can then return the supplier list depending on the service name.
Declaring a bean of type LoadBalancerClientFactory containing the list of all applicable LoadBalancerClientSpecifications did the trick.
Pretty straightforward, but I had to dig around to figure out which bean to expose, as there was no example that I could find.
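A minimal sketch of that bean, assuming the Spring Cloud Commons API (`LoadBalancerClientFactory` and `LoadBalancerClientSpecification` are real classes from spring-cloud-loadbalancer; the service ids and `MyCustomLoadBalancerConfig` class are hypothetical placeholders, and the no-arg constructor may differ across Spring Cloud versions):

```java
import java.util.List;
import java.util.stream.Collectors;

import org.springframework.cloud.loadbalancer.annotation.LoadBalancerClientSpecification;
import org.springframework.cloud.loadbalancer.support.LoadBalancerClientFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DynamicLoadBalancerConfiguration {

    // Hypothetical: service ids discovered at startup, e.g. from external config
    private final List<String> serviceIds = List.of("service-a", "service-b");

    @Bean
    public LoadBalancerClientFactory loadBalancerClientFactory() {
        LoadBalancerClientFactory factory = new LoadBalancerClientFactory();
        // One specification per service, all pointing at the same custom config
        // class (which would declare the ServiceInstanceListSupplier bean).
        factory.setConfigurations(serviceIds.stream()
                .map(id -> new LoadBalancerClientSpecification(
                        id, new Class<?>[] { MyCustomLoadBalancerConfig.class }))
                .collect(Collectors.toList()));
        return factory;
    }
}
```

Since the specifications are built from a plain list of names, new services only require adding an id to that list (or loading the list from configuration), not new code.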
We have a Spring Boot web app Docker container deployed in Kubernetes with 3 replicas. When a controller redirects to a different URL within the same controller, we pass an object via flash attributes. When we run 1 pod, everything works, but when I scale to 3 pods, the object arrives with all of its attributes set to null. Has anyone come across this issue? If so, can you suggest a solution?
Thanks,
SR
Kubernetes can send requests from the same session to different pods within a deployment. That's why the data is lost: the flash attributes are held in memory by one pod, and the other pods will not have that data at all.
To avoid this, you can either maintain the session in an external store like Redis, or use sticky sessions so that requests for a given session are always sent to the same pod.
Some pointers to the solution
Using Redis for external session data
Sticky sessions - this approach requires an ingress controller such as Nginx
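For the sticky-session route, a minimal sketch of an Ingress using the Nginx ingress controller's cookie-affinity annotations (the host, service name, and port are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    # Ask the Nginx ingress controller to pin each session to one pod
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
spec:
  rules:
  - host: my-app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app
            port:
              number: 80
```

With this in place, the controller sets a routing cookie on the first response and sends subsequent requests from that browser to the same pod, so the in-memory flash attributes survive the redirect.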
I have already created a data bag item, which exists on the Chef server.
Now I am trying to pass that data bag item's secret value to a Docker container.
I am creating the data bag as follows:
knife data bag create bag_secrets bag_masterkey --secret-file C:\path\data_bag_secret
I am retrieving value of that databag item in Chef recipe as follows:
secret = Chef::EncryptedDataBagItem.load_secret(node['secret'])
masterkey = Chef::EncryptedDataBagItem.load("databag_secrets", "databag_masterkey", secret)
What logic do I need to add to pass the data bag secret on to a Docker container?
I've said this like twice on different questions: DO NOT USE ENCRYPTED DATA BAGS LIKE THIS. IT IS NOT SAFE.
I think you fundamentally misunderstand the security model of encrypted bags: they exist only so that the Chef Server itself cannot read the data, and the cost is that you must manage key distribution yourself. For Docker this would probably be via sidecar containers or data volumes, but running chef-client inside a container is relatively rare, so you'll have to sort that out yourself. I would recommend working with a security/infosec engineer at your company to figure out the right security model for your usage.
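If you do settle on the data-volume approach after working out key distribution, the handoff can be as simple as a read-only bind mount (the host path and image name here are hypothetical):

```shell
# The secret was placed on the host out-of-band (not via an encrypted data bag).
# Mount it read-only so the container can read but not modify it.
docker run -d \
  -v /etc/chef-secrets/data_bag_secret:/run/secrets/data_bag_secret:ro \
  myapp:latest
```

The application inside the container then reads `/run/secrets/data_bag_secret` like any other file; nothing secret ends up baked into the image or its environment variables.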
I have a FLOW3 app that has several namespaces. How can I point a different domain at a specific namespace?
As far as I know, you cannot. Not directly, at least. Since the FLOW3 routing system is ignorant of domains, there is currently no domain switch.
What I did was route all requests to the main package and run a routine before every request (i.e. in MyAbstractController->initializeAction()).
It checks whether the domain of the request belongs to the main package; if not, it triggers an internal redirect() to the other controller/package.
Running a Rails application on multiple servers (~20), I want to be able to manage the configuration files (mainly *.yml, but also SSL PEM/cert files and other text-based files) from a single location, such that any change to a file, or a new file, is propagated to all servers.
I also want to have this content source-controlled via git.
Updates are not frequent, and I want to keep the app untouched, so that data is read from files exactly as it is right now.
What are the available solutions for this? Is ZooKeeper a good fit?
I have not used ZooKeeper, but I believe you should be able to do what you need with a tool such as Puppet or Chef.
We're using ZooKeeper for live settings.
One idea is to use a registry.
Say you have a component called Arst.
You can have some config - let's say for Redis - under these folders, each representing a different instance:
/dbs/redis/0 (host, port, db, password as children)
/dbs/redis/1 (host, port, db, password as children)
/dbs/redis/prod (host, port, db, password as children)
And if your component Arst needs to use instance 0, you can have a registry like this:
/arst/redis/0
If you want to add instance 1, just add the node, and a child watch in the application will update things for you without a restart.
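A sketch of such a child watch using the kazoo Python client (the ensemble address is hypothetical, and the reaction to a change is just a placeholder):

```python
from kazoo.client import KazooClient

# Hypothetical ZooKeeper ensemble address
zk = KazooClient(hosts="zk1.example.com:2181")
zk.start()

# kazoo re-invokes this callback whenever children of /arst/redis change,
# i.e. whenever an instance node is added or removed.
@zk.ChildrenWatch("/arst/redis")
def on_instances_changed(children):
    # e.g. children == ["0", "1"] after instance 1 is added;
    # rebuild connection pools here instead of restarting the app
    print("active redis instances:", sorted(children))
```

Because the watch re-registers itself after each event, the application keeps tracking additions and removals for the life of the session without any restart.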
It's not very simple to do, though, and managing the settings can be a pain for teams like QA.
So I'll be working on a console to help with this as well. We'll be open-sourcing some pieces.