I'm new to Docker. Today I'm trying to start my Keycloak container without success; I haven't made any changes to the container, and it just doesn't want to start up.
Here is the docker log error:
*** JBossAS process (188) received TERM signal ***
User with username 'admin' already added to '/opt/jboss/keycloak/standalone/configuration/keycloak-add-user.json'
The container does not use any volume, and it was created using the command:
docker run -p 8080:8080 -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=admin quay.io/keycloak/keycloak:16.1.0
I tried it as well and the admin account was not created. I believe that is because the default images for 16.x.x and below are all based on WildFly, not Quarkus. The Quarkus distribution is the one that supports these environment variables for setting up the initial admin user, and for 16.x.x and below it was only available as a preview.
It is only from 17.x.x onwards that the Quarkus distribution is the default and no longer a preview. Link here
I tested this hypothesis by running the same command, changing only the Keycloak version to 17.x.x and adding the mode the server should start in, and that ran fine. The documentation for this is here
docker run -p 8080:8080 -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=admin quay.io/keycloak/keycloak:17.0.0 start-dev
Also note that the admin login is now at http://localhost:8080/admin instead of http://localhost:8080/auth in the new version.
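If you need to stay on a WildFly-based image (16.x.x and below), my understanding is that those images use the legacy variable names KEYCLOAK_USER and KEYCLOAK_PASSWORD for the initial admin user instead. A sketch, worth verifying against the docs for your exact tag:
docker run -p 8080:8080 \
  -e KEYCLOAK_USER=admin \
  -e KEYCLOAK_PASSWORD=admin \
  quay.io/keycloak/keycloak:16.1.0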
I'm using the Azure App Service version of WordPress. This version uses the wordpress-alpine-php Docker image, running nginx version 1.20.2.
I need to run a startup script. I've followed all the documentation but it's not working.
I added the script as a startup command on the general settings tab of the configuration blade.
UPDATE:
There is no error in the docker logs.
In the log stream, I can see the startup command appended to the docker run command (at the end).
2023-01-28T15:47:13.502Z INFO - docker run -d --expose=80 --name doublekblog_53_90b5e145 -e WEBSITES_ENABLE_APP_SERVICE_STORAGE=true -e WEBSITE_SITE_NAME=doublekblog -e WEBSITE_AUTH_ENABLED=False -e PORT=80 -e WEBSITE_ROLE_INSTANCE_ID=0 -e WEBSITE_HOSTNAME=doublekblog.azurewebsites.net -e WEBSITE_INSTANCE_ID=a6aeecea0459d3f037d6e8e066c862b9ee22384acef65d19b8cae7b67921b742 -e HTTP_LOGGING_ENABLED=1 -e WEBSITE_USE_DIAGNOSTIC_SERVER=False mcr.microsoft.com/appsvc/wordpress-alpine-php:stage3 /home/site/repository/movedefaultconf.sh
This script never runs.
The script works when I run it manually. As a test, I changed the script to just create a file, and that didn't work either. I even updated the startup command in Azure to only touch a file, and even that didn't seem to be applied.
Please check that the environment variables are set correctly, that the image tag exists, and that the ports are reachable and open. Also check the logs of the container and of Azure App Service for more information.
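For example, you can tail the container logs with the Azure CLI, and from the App Service SSH console rule out the two usual silent failures for a startup script: a missing execute bit and CRLF line endings. A sketch (the resource group name is a placeholder):
# tail the container logs
az webapp log tail --name doublekblog --resource-group YOUR_RESOURCE_GROUP
# from the App Service SSH console: make the script executable, strip CRLF endings
chmod +x /home/site/repository/movedefaultconf.sh
sed -i 's/\r$//' /home/site/repository/movedefaultconf.sh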
This is for your reference:
https://geeksnewslab.com/the-azure-app-service-startup-command-issue-while-using-the-docker-container/
I am running a version 4.0.0 Neo4j Docker image. The initial username/password combination is neo4j/test. I updated the password while connected to the system database, as well as via the web app, using the command :server change-password. Every time I stop the container (I use it for local development, so when I shut down my laptop the container shuts down as well), the password is reset. How do I make the password survive restarts of the container?
You just need to use a volume (--volume n4j:/data). Volumes are the way to get persistence.
I tried running the following:
docker run \
-p 7474:7474 \
-p 7687:7687 \
--volume n4j:/data \
--name neo -d neo4j:4.0.0
I navigate to http://localhost:7474
I log in with neo4j/neo4j
I change the password from neo4j to neo4j123
I stop the container: docker stop neo
I delete the container: docker rm neo
But notice: the volume 'n4j' is still there (docker volume ls).
Now I run a new container with the same volume, as shown below.
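(It is the same command as the first run; the named volume n4j is what carries the data across containers.)
docker run \
  -p 7474:7474 \
  -p 7687:7687 \
  --volume n4j:/data \
  --name neo -d neo4j:4.0.0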
I navigate to http://localhost:7474
I authenticate with neo4j123, and it works!
I'm trying to get started with Hasura GraphQL engine running locally on OSX in Docker and connecting to an existing database, but I am having trouble finding the container or the Hasura console.
Here's what I have:
docker -v
Docker version 19.03.5, build 633a0ea
docker-compose -v
docker-compose version 1.25.4, build 8d51620a
docker images
hasura/graphql-engine v1.0.0
hasura version
INFO hasura cli version=v1.0.0
Here's my start script (docker-run.sh) which sets up the port and environment variables for Hasura:
#!/bin/bash
docker run -d -p 8080:8080 \
-e HASURA_GRAPHQL_DATABASE_URL=postgres://someuser:somepassword@host.docker.internal:5432/somedb \
-e HASURA_GRAPHQL_ENABLE_CONSOLE=true \
hasura/graphql-engine:latest
Running ./docker-run.sh returns a 64-character hex string, which I assume is the container ID, but I cannot see a container when I run docker ps, and nothing loads at http://localhost:8080/console.
What am I missing?
UPDATE 1
I can see the container when I run docker ps -a - it has a status of exited(1) (which means application error).
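Getting at the logs of an exited container only needs its ID:
docker ps -a                # lists the exited container and its ID
docker logs <container-id>  # shows why it exited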
I can see in the logs:
{"path":"$","error":"pgcrypto extension is required, but the current user doesn’t have permission to create it. Please grant superuser permission, or setup the initial schema via https://docs.hasura.io/1.0/graphql/manual/deployment/postgres-permissions.html","code":"postgres-error"}
I have followed the instructions for setting up the initial schema but the result of running ./docker-run.sh has not changed.
UPDATE 2
I did not realise that the pgcrypto extension had to be installed on the specific database. Now that I have done so, the logs look healthy - although I am still unable to access the console when I run hasura console.
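For reference, enabling the extension on that specific database looked roughly like this (a sketch; adjust the host and superuser account to your setup, somedb being the database from the connection string):
psql -h localhost -U postgres -d somedb -c 'CREATE EXTENSION IF NOT EXISTS pgcrypto;'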
Here's my config.yaml:
endpoint: http:localhost:8080
...and the resulting error:
FATA[0001] version check: failed to get version from server: failed making version api call: Get http:localhost:8080/v1/version: http: no Host in request URL
Again, what am I missing?
UPDATE 3
Changed config.yaml...
endpoint: http://localhost:8080
Whoops (blush).
OK, it's working :)
I am trying to run the Airflow webserver on App Engine Flexible; however, for it to work I need a mounted GCS bucket. I am using a custom runtime.
The reason I am doing this is to get the secured endpoint that App Engine provides together with IAP.
My app.yaml is a simple file with the service name, env, and runtime.
My Dockerfile is mostly apt-get installs; the CMD mounts the bucket with gcsfuse and runs the Airflow webserver, nothing unusual.
The error I am getting when trying to use gcsfuse in App Engine is:
daemonize.Run: readFromProcess: sub-process: mountWithArgs: mountWithConn: Mount: mount: running fusermount: exit status 1
stderr:
fusermount: fuse device not found, try 'modprobe fuse' first
I know that Cloud Composer exists, but it is way too expensive for my needs. So I prefer to run the scheduler on a VM and the webserver on GAE, sharing a GCS bucket: similar to what Composer gives, but without all that HA and the insane cost for the simple things I want to run.
I am trying to do this in App Engine; all the answers I have found so far mention GKE for some reason.
I know it is a privilege problem; however, in App Engine I do not see any option to set privileges. A way to do it would be very helpful.
Is it even possible to do what I want on App Engine?
This is possible. I'll show you how to do it manually; you may want to wrap it in a shell script to deal with multiple instances.
define several vars used in this walkthrough
service=YOUR_APPENGINE_SERVICE
version=YOUR_APPENGINE_VERSION
project=PROJECTID
get instance list
gcloud app instances list --project $project
SERVICE VERSION ID VM_STATUS DEBUG_MODE
default *************** instance-id-1 RUNNING YES
default *************** instance-id-2 RUNNING
ssh into one instance
gcloud app instances ssh instance-id-1 --service $service --version $version --project $project
get image id
docker ps | grep gaeapp | awk '{print $2}'
you will get an image id
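The restart step below refers to it as $imageid, so capture it in a shell variable:
imageid=$(docker ps | grep gaeapp | awk '{print $2}')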
get env of gaeapp
docker exec gaeapp env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=*****
GAE_MEMORY_MB=614
GAE_INSTANCE=****
GAE_SERVICE=default
PORT=8080
GCLOUD_PROJECT=*****
GAE_VERSION=*****
GOOGLE_CLOUD_PROJECT=*****
restart gaeapp in privileged mode
docker rm -f gaeapp
docker run --privileged -d -p 8080:8080 --name gaeapp -e GAE_MEMORY_MB=614 -e GAE_INSTANCE=instance-id-1 -e GAE_SERVICE=$service -e PORT=8080 -e GCLOUD_PROJECT=$project -e GAE_VERSION=$version -e GOOGLE_CLOUD_PROJECT=$project $imageid
enter gaeapp (assuming you have gcsfuse installed and a service account key JSON at /test-service-account.json)
$ docker exec -it gaeapp bash
[in gaeapp] # GOOGLE_APPLICATION_CREDENTIALS=/test-service-account.json gcsfuse BUCKET /mnt/
Using mount point: /mnt
Opening GCS connection...
Opening bucket...
Mounting file system...
File system has been successfully mounted.
To be honest, I tried all the possible solutions, and this one finally worked. Unfortunately, it only worked for 2-3 days. After some time, App Engine restarts the instances automatically, without any failure in the app, so all the changes made for gcsfuse disappear.
The main thing gcsfuse needs to work in a container is for the Docker image to run in privileged mode, and App Engine does not allow that.
The final solution we are using is GKE, which is working fine.
Note: one would expect GAE to have some provision for privileged mode, but it does not have one right now. The Google team may introduce it in the future. Thanks!
I am trying to create a RabbitMQ Docker container with a default user and password, but when I try to log in to the management plugin those credentials don't work.
This is how I create the container:
docker run -d -P --hostname rabbit -p 5009:5672 -p 5010:15672 --name rabbitmq -e RABBITMQ_DEFAULT_USER=user -e RABBITMQ_DEFAULT_PASS=pass -v /home/desarrollo/rabbitmq/data:/var/lib/rabbitmq rabbitmq:3.6.10-management
What am I doing wrong?
Thanks in advance
The default user is created only if the database does not already exist, so the environment variables have no effect when the mounted volume already contains a database.
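Two ways out, sketched under the assumption that the container name and data path are the ones from your command, and that the existing broker data is disposable in option 1: either remove the mounted data so the user gets created on the next first start, or add the user to the existing database with rabbitmqctl.
# option 1: start fresh (destroys existing broker data)
docker rm -f rabbitmq
sudo rm -rf /home/desarrollo/rabbitmq/data/*
# then re-run the docker run command from the question

# option 2: create the user in the existing database
docker exec rabbitmq rabbitmqctl add_user user pass
docker exec rabbitmq rabbitmqctl set_user_tags user administrator
docker exec rabbitmq rabbitmqctl set_permissions -p / user ".*" ".*" ".*"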
I had the same problem when trying to access the management UI in Chrome; Firefox worked fine. The culprit turned out to be a deprecated JS method that Chrome no longer allowed.