I am using a Docker image to run Cassandra with the Lucene plugin: https://hub.docker.com/r/cpelka/cassandra-lucene/
I am running this image on the Google container engine.
Everything works fine except the user management. When I log in as the cassandra/cassandra user, it seems to have no rights. I cannot list users, and I cannot change passwords.
I can access and edit tables fine, though; it just does not seem to be a superuser.
Something I read said I have to enable password authentication. I added the setting to my cassandra.yaml, but I cannot restart my Cassandra service. Hell, if I run service cassandra stop, it takes a while and stops, and then I can still connect to my DB remotely. I think the Docker image runs the database in ways that I do not understand with my one day of Cassandra experience. Any help is appreciated.
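For reference, the setting I added is the authenticator in cassandra.yaml, which ships with the AllowAll defaults:

# cassandra.yaml (relevant lines)
authenticator: PasswordAuthenticator   # default: AllowAllAuthenticator
authorizer: CassandraAuthorizer        # default: AllowAllAuthorizer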
Thanks and good day,
Dries
I installed the Docker version of AzerothCore with the ChromieCraft instructions, and it seems to run buttery smooth.
That said, I don't seem to be able to access the databases with SQLyog or HeidiSQL.
How else can I access auth and world tables?
I am familiar with using these tools to open the databases with other projects.
Sorry if this seems basic to others. It does not seem so to me.
Thanks in advance for any help! I'd like to do things such as update the realmlist table and export characters so they can survive DB updates.
:)
In Docker, different services are usually isolated, each one in its own container, which means that your server should be in a different container than your database.
I have no idea what ChromieCraft is, nor have I ever used AzerothCore, but I've checked their guide and they are using docker-compose to launch an array of containers (3 in total), one for each service (auth server, world server, database).
If you followed this guide, you can see that the database port is exposed.
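Simplified, the relevant part of their docker-compose file looks something like this (the service name and exact values may differ in the actual file):

ac-database:
  image: mysql:8.0
  ports:
    - "3306:3306"            # host:container - the port SQLyog/HeidiSQL connect to
  environment:
    - MYSQL_ROOT_PASSWORD=password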
So the address would be localhost:3306 (if you're running it on your own machine; otherwise replace localhost with the host's IP), username: root, password: password.
I have a running Keycloak 8 Docker container, but whenever I restart it, all non-offline sessions disappear. As a result, all users are disconnected whenever I update Keycloak.
Causes:
I've read this thread here and understood why access tokens aren't persisted (mainly a performance issue).
As a solution I wanted to use clustering (as recommended here), and I understood that the core part is simply managing Infinispan well.
Ideas:
I first wanted to store the Infinispan data outside the Docker container (in a volume), so I searched for where JBoss saves Infinispan data inside the container, but I didn't find anything.
Secondly, I thought about an SPI to manage user sessions externally, but it doesn't seem to be the right solution, as Infinispan already does a good job.
Setting up a cluster, helped by this article about Cross-Datacenter support in Keycloak and this other one about Keycloak Cross Data Center Setup in AWS, seemed like a good starting point, but I'm still using Docker and I'm not sure it's a good idea for me to build Docker images from those tutorials.
Any more ideas would be welcome :)
I've just tried clustering a second time, this time using Docker Swarm with the info from here:
The PING discovery protocol is used by default in the udp stack (which is used by default in standalone-ha.xml). Since the Keycloak image runs in clustered mode by default, all you need to do is run it:
docker run jboss/keycloak
If you run two instances of it locally, you will notice that they form a cluster.
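For example (the container names are arbitrary; both containers need to be on the same Docker network so PING's UDP multicast can reach between them):

docker run -d --name keycloak1 jboss/keycloak
docker run -d --name keycloak2 jboss/keycloak
# The JGroups output in the logs should show a cluster view with two members:
docker logs keycloak1 | grep -i view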
I deployed three instances of Keycloak in clustered mode with an external database (Postgres) using docker stack, very simply, and it worked well.
Put simply, the Keycloak Docker image already handles this use case when clustering is used.
For more about the cluster use case, please refer to this tutorial on how to set up a Keycloak cluster.
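For illustration, a minimal stack file along those lines might look like this (the service names, password, and the DNS_PING discovery settings are my assumptions, not taken from the tutorial):

version: "3.7"
services:
  postgres:
    image: postgres:11
    environment:
      POSTGRES_DB: keycloak
      POSTGRES_USER: keycloak
      POSTGRES_PASSWORD: change-me           # example value only
  keycloak:
    image: jboss/keycloak:8.0.1
    environment:
      DB_VENDOR: postgres
      DB_ADDR: postgres
      DB_USER: keycloak
      DB_PASSWORD: change-me
      # DNS_PING lets the replicas discover each other through Swarm's DNS
      JGROUPS_DISCOVERY_PROTOCOL: dns.DNS_PING
      JGROUPS_DISCOVERY_PROPERTIES: dns_query=tasks.keycloak
    ports:
      - "8080:8080"
    deploy:
      replicas: 3

Deployed with docker stack deploy -c keycloak-stack.yml keycloak, the replicas share sessions, so restarting a single instance no longer logs everyone out.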
I successfully configured JupyterHub on Google Cloud using the very basic DummyAuthenticator and DockerSpawner, following this tutorial: https://github.com/jupyterhub/jupyterhub-deploy-docker
Everything is OK, but when the user logs out, their Docker container keeps running. I was expecting the container to be stopped when it is unused. It is a waste of resources for my taste. Is there any way to trigger that behavior?
I used this from the default configuration file JupyterHub generated.
# Shuts down all user servers on logout
c.JupyterHub.shutdown_on_logout = True
Culling should be used to shut down inactive servers while the user is still logged in.
I don't think JupyterHub automatically stops any servers just because the user logs out.
But you can use cull-idle.
It provides a script to cull and shut down idle single-user notebook servers, and it's pretty easy to use.
Link: https://github.com/jupyterhub/jupyterhub/tree/master/examples/cull-idle
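For illustration, wiring the script in as a JupyterHub managed service in jupyterhub_config.py looks roughly like this (the --timeout value is just an example; check the repo's README for the current options):

# jupyterhub_config.py
import sys

# Run cull_idle_servers.py as a managed service: it polls the Hub API
# periodically and shuts down servers that have been idle longer than
# --timeout seconds.
c.JupyterHub.services = [
    {
        'name': 'cull-idle',
        'admin': True,  # needs admin rights to stop other users' servers
        'command': [sys.executable, 'cull_idle_servers.py', '--timeout=3600'],
    }
]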
On my Windows Server 2016 machine, I am trying to figure out the run command syntax to run a Docker image as a user in my LDAP. I read this article, but I am not following it very well (different environments).
Perhaps I am misunderstanding the concept altogether, but in the end I need to run the container as a specific user in our Active Directory.
Any links to well-documented run --user examples would be appreciated...
One of the things that is confusing is trying to figure out the user ID and such...
The answer depends on the use case, but maybe gMSA (group Managed Service Account) authentication would help? Basically, with gMSA authentication, you join the host OS to an AD domain, and containers running on it can share the privileges to use things like network drives. That way, you don't need to pass credentials every time you access them.
The MS team has a good write-up on it here:
Active Directory Service Accounts for Windows Containers
https://learn.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/manage-serviceaccounts
Also, artisticcheese has a fantastic walkthrough:
Enabling integrated Windows Authentication in windows docker container
https://artisticcheese.wordpress.com/2017/09/09/enabling-integrated-windows-authentication-in-windows-docker-container/
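To give a flavor of the workflow those links describe (the gMSA name WebApp01 and the image are just examples):

# On a domain-joined container host, in an elevated PowerShell session:
# install the CredentialSpec module and generate a credential spec file
# for an existing gMSA named WebApp01.
Install-Module CredentialSpec
New-CredentialSpec -AccountName WebApp01

# Start the container with that credential spec; processes inside it
# running as LocalSystem or NetworkService then authenticate to AD
# as the gMSA, with no credentials passed into the container.
docker run -d --hostname webapp01 `
  --security-opt "credentialspec=file://WebApp01.json" `
  microsoft/windowsservercore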
Hope this helps.
I'm using Docker with my web service.
When I deploy using Docker, I lose some log files (the nginx access log, service log, system log, etc.).
The cause is that the Docker deployment process tears the old container down and brings a new one up.
So I thought about this problem.
The logging server and the service server (for the API) must be separated!
I am considering these methods:
First, using Logstash (from the ELK stack), attached to all my log files.
Second, using a batch system that moves the log files to another server every midnight.
Is that okay?
I'm hoping there is a better answer.
Thanks.
There are several approaches to logging that admins commonly use for containers:
1) Mount the log directory to the host, so even if the container goes down/up, the logs persist on the host (see the sketch after this list).
2) An ELK server, using Logstash/Filebeat to push logs to an Elasticsearch server with file tailing, so new log content is pushed to the server as it appears.
3) For application logs, such as in Maven-based projects, there are many plugins that push logs to a server.
4) A batch system, which is not recommended, because if the container dies before midnight the logs are lost.
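For option 1, a minimal sketch of the bind mount (the host path, container name, and image are placeholders):

docker run -d \
  --name web \
  -v /var/log/myapp/nginx:/var/log/nginx \
  nginx

With this, nginx writes its access and error logs into /var/log/myapp/nginx on the host, so they survive the container being torn down and recreated.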