I successfully configured JupyterHub on Google Cloud with the basic DummyAuthenticator and DockerSpawner, following this tutorial: https://github.com/jupyterhub/jupyterhub-deploy-docker
Everything works, but when a user logs out, their Docker container keeps running. I expected the container to be stopped once it is no longer in use; leaving it running is a waste of resources for my taste. Is there any way to trigger that behavior?
I used this setting from the default configuration file JupyterHub generates:

    # Shuts down all user servers on logout
    c.JupyterHub.shutdown_on_logout = True
Culling should be used to shut down inactive servers while the user is still logged in.
I don't think JupyterHub automatically shuts down any servers just because a user logs out, but you can use cull-idle. It provides a script that culls and shuts down idle single-user notebook servers, and it's pretty easy to use.
Link: https://github.com/jupyterhub/jupyterhub/tree/master/examples/cull-idle
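For reference, the example's README registers the script as a JupyterHub service in jupyterhub_config.py, roughly like this (the timeout value here is just illustrative):

    # jupyterhub_config.py
    import sys

    c.JupyterHub.services = [
        {
            'name': 'cull-idle',
            'admin': True,
            # cull servers idle for more than an hour (illustrative timeout)
            'command': [sys.executable, 'cull_idle_servers.py', '--timeout=3600'],
        }
    ]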
I have deployed an OWIN-hosted web application in AKS (Windows node pool). The container is in a running state, but I am not able to reach the application. There might be runtime exceptions or errors, but I cannot figure out where to look for such errors on an AKS Windows node.
Please help me.
So, the ideal way here would be to use kubectl logs (which feeds into Azure Monitor, if you have that enabled). However, Windows containers don't pass their logs on to stdout by default; you have to use Log Monitor for that. Essentially, you have to enable Log Monitor in your container image to get the logs out of the container just like you do with Linux containers. I blogged about it here: https://techcommunity.microsoft.com/t5/itops-talk-blog/troubleshooting-windows-containers-apps-on-azure-kubernetes/ba-p/3269767
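As a sketch of what that looks like in a Dockerfile, based on the LogMonitor README (the paths and the wrapped IIS entrypoint here are illustrative and depend on your base image):

    # copy the LogMonitor binary and its config into the image
    COPY LogMonitor.exe LogMonitorConfig.json C:/LogMonitor/
    WORKDIR /LogMonitor
    # wrap the existing entrypoint so the configured log sources are
    # surfaced to the container's stdout (IIS example)
    ENTRYPOINT ["C:\\LogMonitor\\LogMonitor.exe", "C:\\ServiceMonitor.exe", "w3svc"]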
The other thing you can try is to use kubectl exec to run a command inside the container and inspect its output.
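For example (the pod name and log path are placeholders):

    # stdout/stderr captured by Kubernetes (and by Azure Monitor, if enabled)
    kubectl logs <pod-name>

    # or read an application log file directly inside the container;
    # the path is a placeholder for wherever your app writes its logs
    kubectl exec -it <pod-name> -- powershell -Command "Get-Content C:\logs\app.log -Tail 50"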
I built a Debian 10 sandbox container where I can play with my code bases. To maintain a permission wall against sudo commands typed too quickly, I added a user with

    USER ELDUDE

in the Dockerfile. However, when I build the container with that user as the default login option, VS Code fails to connect to the container with
Error: stream ended with:124 but wanted 1128865906
This is the initial error that brought me down this path.
If I do not specify the default user in the Dockerfile, I can connect fine, but the VS Code server in the container runs as root. That becomes a mess with file permissions and the like once I start working on things.
Any idea what is going on? Can I choose the user name when I log into a container?
I also thought of enabling SSH in the container and connecting through VS Code's SSH remote, but that has failed so far...
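For context, the relevant part of my Dockerfile looks roughly like this (the username and package list are illustrative, not my exact file):

    FROM debian:10
    # create a non-root user so day-to-day work doesn't happen as root
    RUN apt-get update && apt-get install -y sudo \
        && useradd -m -s /bin/bash ELDUDE \
        && usermod -aG sudo ELDUDE
    # make that user the default for anything run in the container
    USER ELDUDE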
I have a running Keycloak 8 Docker container, but whenever I restart it, all non-offline sessions disappear. As a result, all users get disconnected whenever I update Keycloak.
Causes:
I've read this thread here and understood why access tokens aren't persisted (mainly a performance issue).
As a solution I wanted to use clusters (as recommended here), and I understood that the core of it is simply managing Infinispan well.
Ideas:
I first wanted to store the Infinispan data outside the Docker container (in a volume), and searched for where JBoss saves Infinispan data inside the container, but I didn't find anything.
Secondly, I thought about an SPI to manage user sessions externally, but that does not seem to be the right solution, as Infinispan already does a good job.
Setting up a cluster, helped by this article about Cross-Datacenter support in Keycloak and this other one about Keycloak Cross Data Center Setup in AWS, seemed like a good starting point, but I'm still using Docker and I'm not sure it's a good idea for me to build Docker images from those tutorials.
Any further ideas would be welcome :)
I've since tried a Docker cluster a second time, now using Docker Swarm with the info from here:
The PING discovery protocol is used by default in the udp stack (which is used by default in standalone-ha.xml). Since the Keycloak image runs in clustered mode by default, all you need to do is run it:
docker run jboss/keycloak
If you run two instances of it locally, you will notice that they form a cluster.
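To see that for yourself, something like this should work (the container and network names are illustrative, and I'm assuming multicast discovery works on a shared bridge network):

    docker network create kc-net
    docker run -d --name kc1 --network kc-net jboss/keycloak
    docker run -d --name kc2 --network kc-net jboss/keycloak
    # the logs of either instance should eventually report a cluster
    # view containing both members
    docker logs kc1 2>&1 | grep -i "cluster view"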
I quite simply deployed 3 instances of Keycloak in clustered mode with an external database (Postgres) using docker stack, and it worked well; see the sketch below.
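A stack file along these lines is what I mean; the image tag, credentials, and the DNS_PING discovery settings here are illustrative, not my exact file:

    version: "3.7"
    services:
      postgres:
        image: postgres:11
        environment:
          POSTGRES_DB: keycloak
          POSTGRES_USER: keycloak
          POSTGRES_PASSWORD: secret
      keycloak:
        image: jboss/keycloak:8.0.1
        environment:
          DB_VENDOR: postgres
          DB_ADDR: postgres
          DB_DATABASE: keycloak
          DB_USER: keycloak
          DB_PASSWORD: secret
          # multicast doesn't cross Swarm nodes, so use DNS_PING against
          # the tasks.<service> name that Swarm resolves to all replicas
          JGROUPS_DISCOVERY_PROTOCOL: dns.DNS_PING
          JGROUPS_DISCOVERY_PROPERTIES: dns_query=tasks.keycloak
        ports:
          - "8080:8080"
        deploy:
          replicas: 3

Deployed with docker stack deploy -c docker-stack.yml keycloak.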
Put simply, the Keycloak Docker image already handles this use case when clustering is used.
For more about the cluster use case, please refer to this tutorial on how to set up a Keycloak cluster.
I'm using Docker with my web service.
When I deploy using Docker, I lose some log files (nginx access log, service log, system log, etc.).
The cause is that the Docker deployment process takes containers down and brings new ones up.
So I thought about this problem: the logging server and the service server (for the API) must be separated, using one of these methods.
First, using Logstash (in ELK), attached to all my log files.
Second, using a batch system that moves the log files to another server every midnight.
Is that okay? I expect there is a better answer.
Thanks.
There are several ways of logging that admins commonly use for containers:
1) Mount the log directory to the host, so that even if the container goes down and up, the logs persist on the host (see the sketch after this list).
2) ELK: use Logstash/Filebeat to push logs to an Elasticsearch server, tailing the files so that new log content is pushed as it arrives (also sketched below).
3) For application logs, e.g. in Maven-based projects, there are many plugins that push logs to a server.
4) A batch system, which is not recommended, because if a container dies before midnight those logs are lost.
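For option 1, a bind mount is enough (all paths here are illustrative):

    # nginx logs now live on the host and survive container replacement
    docker run -d --name web \
      -v /var/log/myapp/nginx:/var/log/nginx \
      nginx

And for option 2, a minimal Filebeat configuration tailing those files might look like this (the Elasticsearch host and paths are illustrative):

    filebeat.inputs:
      - type: log
        paths:
          - /var/log/myapp/nginx/*.log
    output.elasticsearch:
      hosts: ["elasticsearch:9200"]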
I am using a Docker image to run Cassandra with the Lucene plugin: https://hub.docker.com/r/cpelka/cassandra-lucene/
I am running this image on Google Container Engine.
Everything works fine except user management. When I log in as the cassandra/cassandra user, it seems to have no rights: I cannot list users and I cannot change passwords.
I can access and edit tables fine, though; the account just does not seem to be a superuser.
Something I read is that I have to enable password authentication. I added the setting to my cassandra.yaml (see the snippet below), but I cannot restart my Cassandra service. Hell, if I run service cassandra stop, it takes a while and stops, and then I can still connect to my DB remotely. I think the Docker image runs the database in ways that I do not understand with my one day of Cassandra experience. Any help is appreciated.
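For reference, the setting I added is the standard one (the authorizer line is something I believe is also needed for role management; I'm not fully sure it belongs in my exact setup):

    # cassandra.yaml
    authenticator: PasswordAuthenticator
    authorizer: CassandraAuthorizer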
Thanks and good day,
Dries