I tried to implement user impersonation with Keycloak, but when I called the endpoint from Postman I got this error:
"error": "Feature not enabled"
To start Keycloak I ran Docker on Windows 10 with this command:
docker run -p 8080:8080 -e KEYCLOAK_PASSWORD=admin123 -e KEYCLOAK_USER=admin -e DB_VENDOR=H2 jboss/keycloak
so I'm using the jboss/keycloak Docker image from Red Hat.
So I wanted to enable the missing feature in Keycloak, but from the Keycloak documentation I can't tell where to run this specific command:
For example, to enable docker and token-exchange, enter this command:
bin/kc.[sh|bat] build --features=docker,token-exchange
so that, for example, the token-exchange feature becomes available in Keycloak.
I tried to find this kc file inside the jboss container to run that command, but I couldn't find it. First I entered the container:
docker exec -it 42f1c5c8bf55 bash
then, inside the container:
sh-4.4$ cd /opt/jboss
sh-4.4$ find . -name "kc.sh"
find: ‘./proc/tty/driver’: Permission denied
find: ‘./var/cache/ldconfig’: Permission denied
find: ‘./lost+found’: Permission denied
sh-4.4$ find . -name "kc.*"
find: ‘./proc/tty/driver’: Permission denied
find: ‘./var/cache/ldconfig’: Permission denied
find: ‘./lost+found’: Permission denied
I searched a lot and tried different solutions, but none of them worked.
Could anyone give me a little help, or at least an idea of how to enable a new feature, like token-exchange or access_token, in Keycloak?
You can use the KC_ prefixed environment variables in your Docker container. For example, to enable features:
docker run -p 8080:8080 -e KEYCLOAK_PASSWORD=admin123 -e KEYCLOAK_USER=admin -e KC_FEATURES=token-exchange -e DB_VENDOR=H2 jboss/keycloak
Note that jboss/keycloak is no longer the current official Keycloak image. You probably want to migrate to the quay.io/keycloak/keycloak images (see the Keycloak Docker docs).
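For reference, on the current Quarkus-based quay.io image, the bin/kc.[sh|bat] flags quoted from the docs are passed directly as container arguments; a sketch along those lines (admin credentials and tag are examples, not from the question):

```shell
# Enable the token-exchange feature on the current Quarkus-based image.
# KEYCLOAK_ADMIN / KEYCLOAK_ADMIN_PASSWORD bootstrap the initial admin user;
# everything after the image name is handed to kc.sh inside the container.
docker run -p 8080:8080 \
  -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=admin123 \
  quay.io/keycloak/keycloak:latest \
  start-dev --features=token-exchange
```

start-dev is intended for local development; for production you would use the build/start workflow described in the Keycloak container guide.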
You can enable features using the JAVA_OPTS_APPEND environment variable.
For example, to enable the ability for admins to impersonate users, just start the container like this:
docker run -p 8080:8080 -e KEYCLOAK_PASSWORD=admin123 -e KEYCLOAK_USER=admin -e DB_VENDOR=H2 -e JAVA_OPTS_APPEND="-Dkeycloak.profile.feature.impersonation=enabled" jboss/keycloak
I'm on Ubuntu 22.04 and ran the following command:
docker run -d mypostgres -e POSTGRES_PASSWORD=1111 postgres -c shared_buffers=256MB -c max_connections=200
and I got the following error:
Unable to find image 'mypostgres:latest' locally
docker: Error response from daemon: pull access denied for mypostgres, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.
So what is the correct name for 'mypostgres'?
Can I put whatever name I want there?
You should use postgres instead if you want to download the image from Docker Hub. If you want a specific version, use one of the tags listed on the Docker Hub page, e.g. postgres:14.5.
What you are missing here is the --name switch before mypostgres. The --name switch names your container.
Full command:
docker run -d --name mypostgres -e POSTGRES_PASSWORD=1111 postgres -c shared_buffers=256MB -c max_connections=200
--name goes before mypostgres; without it, Docker understood that you wanted an image called mypostgres, not a postgres image, and didn't find an official image with that name.
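The general form of the command explains the behavior: everything before the image name is parsed by Docker itself, and everything after it is handed to the container's entrypoint (here, the postgres server). A sketch of the same command, annotated:

```shell
# General form: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
# OPTIONS (-d, --name, -e) are consumed by Docker;
# the -c flags after the image name are forwarded to the postgres server process.
docker run -d --name mypostgres -e POSTGRES_PASSWORD=1111 \
  postgres -c shared_buffers=256MB -c max_connections=200
```

That is why, without --name, Docker read mypostgres in the IMAGE position and tried to pull an image by that name.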
I'm trying to work in Jupyter Lab run via Docker on a remote machine, but can't save any of the files I open.
I'm working with a Jupyter Docker Stack. I've installed docker on my remote machine and successfully pulled the image.
I set up port forwarding in my ~/.ssh/config file:
Host mytunnel
HostName <remote ip>
User root
ForwardAgent yes
LocalForward 8888 localhost:8888
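With that config, the tunnel itself would be opened with something like the following (mytunnel is the Host alias from the config above):

```shell
# Open the tunnel defined by the "mytunnel" Host entry;
# -N keeps the session open without running a remote command.
ssh -N mytunnel
```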
When I fire up the container, I use the following script:
docker run \
-p 8888:8888 \
-e JUPYTER_ENABLE_LAB=yes \
-v "${PWD}":/home/jovyan/work jupyter/tensorflow-notebook
The container is running:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c8fc3c720af1 jupyter/tensorflow-notebook "tini -g -- start-no…" 8 minutes ago Up 8 minutes 0.0.0.0:8888->8888/tcp, :::8888->8888/tcp adoring_khorana
I get the regular Jupyter url back:
http://127.0.0.1:8888/lab?token=<token>
But when I access the server in my browser, the Save option is disabled.
I've tried some of the solutions proposed elsewhere on SO, but no luck.
Is this something about connecting over SSH? The Jupyter server thinks it's not a secure connection?
It is possible that the problem is related to the SSH configuration, but I think it is more likely a permission issue with your volume mount.
Please try reviewing your Docker container logs, looking for permission-related errors. You can do that with the following:
docker container logs <container id>
See the output provided by your docker run command too.
In addition, try opening a shell in the container:
docker exec -it <container id> /bin/bash
and see if you are able to create a file in the default work directory:
touch /home/jovyan/work/test_file
Finally, the Jupyter Docker Stacks repository has a troubleshooting page almost entirely devoted to permission issues.
Consider especially the solutions provided in the Additional tips and troubleshooting commands for permission-related errors and, as suggested there, try launching the container with your OS user:
docker run \
-p 8888:8888 \
-e JUPYTER_ENABLE_LAB=yes \
--user "$(id -u)" --group-add users \
-v "${PWD}":/home/jovyan/work jupyter/tensorflow-notebook
After that, as also suggested in the documentation mentioned above, check whether the volume is properly mounted using the following command:
docker inspect <container_id>
In the output, note the value of the RW field, which indicates whether the volume is writable (true) or not (false).
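Rather than scanning the full JSON by eye, a Go template passed to docker inspect can pull out just each mount's destination and RW flag (the container ID is a placeholder):

```shell
# Print each mount's destination and whether it is writable
docker inspect -f '{{range .Mounts}}{{.Destination}} RW={{.RW}}{{println}}{{end}}' <container_id>
```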
I am following a tutorial and using sqlc in my project. However, it seems I am mounting an empty volume: after checking another post about mounting a host directory, I found that Docker creates another empty folder, confirming that I did something wrong. The Docker documentation doesn't help me resolve this issue. Currently I run these commands from a bash terminal:
docker run --rm -v $(pwd)://src -w //src kjconroy/sqlc init
docker run --rm -v $(pwd)://src -w //src kjconroy/sqlc generate
The first command runs successfully but creates another empty folder. The built container is running, and its path on my Windows 10 machine is \\wsl$\docker-desktop-data\data\docker\volumes. However, my folder structure after installing Docker Desktop is different from the tutorial's, so I'll add extra information about my setup. I construct everything with Make and Docker:
postgres:
	docker run --name postgreslatest -p 5432:5432 -e POSTGRES_USER=root -e POSTGRES_PASSWORD=secret -d postgres
createdb:
	docker exec -it postgreslatest createdb --username=root --owner=root simple_bank
dropdb:
	docker exec -it postgreslatest dropdb simple_bank
migrateup:
	migrate -path db/migration -database "postgresql://root:secret@localhost:5432/simple_bank?sslmode=disable" -verbose up
migratedown:
	migrate -path db/migration -database "postgresql://root:secret@localhost:5432/simple_bank?sslmode=disable" -verbose down
.PHONY: postgres createdb dropdb migrateup migratedown
Any help is appreciated.
I got it working. First of all, I still have no idea why the bash command cannot correctly locate the sqlc.yaml file. However, on Windows 10, I succeeded in locating and generating files with the command provided by the docs.
The command is docker run --rm -v "%cd%:/src" -w /src kjconroy/sqlc generate, run from CMD only; it also works combined with the Makefile.
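For what it's worth, the bash failure is consistent with MSYS/Git Bash rewriting POSIX-style paths in volume arguments (which is why the //src double-slash workaround is often seen). A common alternative, untested here, is to disable that path conversion for the command:

```shell
# MSYS_NO_PATHCONV=1 stops Git Bash on Windows from rewriting /src
# into a Windows path before Docker sees it
MSYS_NO_PATHCONV=1 docker run --rm -v "$(pwd):/src" -w /src kjconroy/sqlc generate
```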
I ran this command
docker pull docker.elastic.co/elasticsearch/elasticsearch:5.6.10
and then started the container with this command
docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:5.6.10
But it seems this version sets a default username & password:
elastic:changeme
Does anyone know how to remove the username/password?
It shouldn't, unless you enable the x-pack basic security. Can you share how it is asking for the username and password, along with the startup logs, using the command below?
docker logs <your-es-conatiner-id>
Edit: I tried it myself and saw the x-pack plugin loaded, as shown in the log below:
[2020-08-27T10:50:24,913][INFO ][o.e.p.PluginsService ] [IioZz2W] loaded plugin [ingest-user-agent]
[2020-08-27T10:50:24,913][INFO ][o.e.p.PluginsService ] [IioZz2W] loaded plugin [x-pack]
It's also asking me for the same username/password, so you need to disable x-pack when running the Docker container.
Adding -e "xpack.security.enabled=false" to your docker run command will fix the issue.
So your run command will look like this:
docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" -e "xpack.security.enabled=false" docker.elastic.co/elasticsearch/elasticsearch:5.6.10
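Once the container is up with security disabled, an unauthenticated request against the mapped port should return the cluster info instead of a 401 challenge (localhost:9200 assumes the port mapping above):

```shell
# Should answer with the cluster name/version JSON, with no credentials needed
curl http://localhost:9200
```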
I'm trying to run a simple-ca container (https://github.com/jcmoraisjr/simple-ca) in Docker (on a fresh install of Ubuntu 18.04.4), but every time I run the docker command
docker run --name simple-ca -p 80:8080 -p 443:8443 -e CERT_TLS_DNS=ca.mycompany.com -e CA_CN=MyCompany-CA -v /var/lib/simple-ca/ssl:/ssl quay.io/jcmoraisjr/simple-ca:latest
I get the error
chmod: private: Operation not permitted
I have already granted systemd-network ownership of the folder /var/lib/simple-ca/ and, to grant lighttpd rights on the SSL folder, ran the command
/bin/bash -c 'chown $(docker run --rm quay.io/jcmoraisjr/simple-ca id -u lighttpd) /var/lib/simple-ca/ssl'
Anyone have any idea on what may have went wrong?
It was a permission issue that caused the container to fail to start. I had to remove the folder /var/lib/simple-ca and let the systemd startup re-create it with the lighttpd user id.
Once that was done, I used the container's console access to edit the CA config files directly, and the whole simple-ca worked as expected.