Note: I've read this and this question. Neither helped.
I've created a Spring config server Docker image. The intent is to be able to run multiple containers with different profiles and search locations. This is where it differs from the questions above, where the properties were either loaded from git or from classpath locations known to the config server at startup. My config server is also deployed traditionally in Tomcat, not using a repackaged boot jar.
When I access http://<docker host>:8888/movie-service/native, I get no content (see below). The content I'm expecting is given at the end of the question.
{"name":"movie-service","profiles":["native"],"label":"master","propertySources":[]}
I've tried just about everything under the sun but just can't get it to work.
Config server main:
@SpringBootApplication
@EnableConfigServer
@EnableDiscoveryClient
public class ConfigServer extends SpringBootServletInitializer {

    /* Check out the EnvironmentRepositoryConfiguration for details */
    public static void main(String[] args) {
        SpringApplication.run(ConfigServer.class, args);
    }

    @Override
    protected SpringApplicationBuilder configure(SpringApplicationBuilder application) {
        return application.sources(ConfigServer.class);
    }
}
Config server application.yml:
spring:
  cloud:
    config:
      enabled: true
      server:
        git:
          uri: ${CONFIG_LOCATION}
        native:
          searchLocations: ${CONFIG_LOCATION}
server:
  port: ${HTTP_PORT:8080}
Config server bootstrap.yml:
spring:
  application:
    name: config-service
eureka:
  instance:
    hostname: ${CONFIG_HOST:localhost}
    preferIpAddress: false
  client:
    registerWithEureka: ${REGISTER_WITH_DISCOVERY:true}
    fetchRegistry: false
    serviceUrl:
      defaultZone: http://${DISCOVERY_HOST:localhost}:${DISCOVERY_PORT:8761}/eureka/
Docker run command:
docker run -it -p 8888:8888 \
-e CONFIG_HOST="$(echo $DOCKER_HOST | grep -Eo '([0-9]{1,3}\.){3}[0-9]{1,3}')" \
-e HTTP_PORT=8888 \
-e CONFIG_LOCATION="file:///Users/<some path>/config/" \
-e REGISTER_WITH_DISCOVERY="false" \
-e SPRING_PROFILES_ACTIVE=native xxx
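As an aside, the CONFIG_HOST expression just pulls the IPv4 address out of DOCKER_HOST; a quick standalone check (the tcp:// value below is a made-up example of a typical docker-machine setting):

```shell
# Hypothetical DOCKER_HOST value, as docker-machine would typically set it
DOCKER_HOST="tcp://192.168.99.100:2376"
# Same extraction as in the run command above: grab the first dotted quad
CONFIG_HOST="$(echo $DOCKER_HOST | grep -Eo '([0-9]{1,3}\.){3}[0-9]{1,3}')"
echo "$CONFIG_HOST"   # 192.168.99.100
```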
Config Directory:
/Users/<some path>/config
--- movie-service.yml
Content of movie-service.yml:
themoviedb:
url: http://api.themoviedb.org
Figured this out myself. The Docker container doesn't have access to the host file system unless the directory is mounted into the container at runtime with the -v flag.
For the sake of completeness, this is the full working Docker run command:
docker run -it -p 8888:8888 \
-e CONFIG_HOST="$(echo $DOCKER_HOST | grep -Eo '([0-9]{1,3}\.){3}[0-9]{1,3}')" \
-e HTTP_PORT=8888 \
-e CONFIG_LOCATION="file:///Users/<some path>/config/" \
-e REGISTER_WITH_DISCOVERY="false" \
-e SPRING_PROFILES_ACTIVE=native \
-v "/Users/<some path>/config/:/Users/<some path>/config/" \
xxx
I used the configuration from: enabling oauth2 with pgadmin and gitlab
The main difference is that I have a local GitLab setup at https://gitlab_company_org and a local (dockered) pgAdmin instance at http://pgadmin_projectx_company_org:8000.
But when I try to log in, I get the error: {"success":0,"errormsg":"Missing \"jwks_uri\" in metadata","info":"","result":null,"data":null}
So my configs are:
config_local.py:
AUTHENTICATION_SOURCES = ['oauth2', 'internal']
MASTER_PASSWORD = True
OAUTH2_CONFIG = [
    {
        'OAUTH2_NAME': 'gitlab',
        'OAUTH2_DISPLAY_NAME': 'Gitlab',
        'OAUTH2_CLIENT_ID': 'gitlab_client_id',
        'OAUTH2_CLIENT_SECRET': 'gitlab_client_secret',
        'OAUTH2_TOKEN_URL': 'https://gitlab_company_org/oauth/token',
        'OAUTH2_AUTHORIZATION_URL': 'https://gitlab_company_org/oauth/authorize',
        'OAUTH2_API_BASE_URL': 'https://gitlab_company_org/oauth/',
        'OAUTH2_USERINFO_ENDPOINT': 'userinfo',
        'OAUTH2_SCOPE': 'openid email profile',
        'OAUTH2_ICON': 'fa-gitlab',
        'OAUTH2_BUTTON_COLOR': '#E24329',
    }
]
OAUTH2_AUTO_CREATE_USER = True
run_pgadmin.sh
mkdir -p ./pgadmin
mkdir -p ./pgadmin/data
touch ./pgadmin/config_local.py
chown -R 5050:5050 ./pgadmin
docker stop pgadmin
docker rm pgadmin
docker pull dpage/pgadmin4
docker run -p 8000:80 \
--name pgadmin \
-e 'PGADMIN_DEFAULT_EMAIL=pgadmin@company.org' \
-e 'PGADMIN_DEFAULT_PASSWORD=somesupersecretsecret' \
-e 'PGADMIN_CONFIG_LOGIN_BANNER="Authorised users only!"' \
-v /opt/container/pgadmin/data:/var/lib/pgadmin \
-v /opt/container/pgadmin/config_local.py:/pgadmin4/config_local.py:ro \
-d dpage/pgadmin4
When I try to log in via the GitLab button, I get the GitLab login and authorize the app, but afterwards I get the error: {"success":0,"errormsg":"Missing \"jwks_uri\" in metadata","info":"","result":null,"data":null} — which seems to be a JSON response to: http://pgadmin.projectx.company.org:8000/oauth2/authorize?code=VERYLONGCODE&state=SOMEOTHERKINDOFCODE
Solution:
Thanks to Aditya Toshniwal: I tried the new dpage/pgadmin4:snapshot (2023-01-09-2) tag on Docker Hub and had to add the OAUTH2_SERVER_METADATA_URL parameter (value: https://gitlab_company_org/oauth/.well-known/openid-configuration), which I found in the issue he mentioned. Now it works with on-prem GitLab. Awesome!
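For completeness, the working config entry then looks roughly like this (a sketch; only the OAUTH2_SERVER_METADATA_URL line is new relative to the config above):

```python
OAUTH2_CONFIG = [
    {
        'OAUTH2_NAME': 'gitlab',
        'OAUTH2_DISPLAY_NAME': 'Gitlab',
        # ... same client id/secret, token/authorization/API URLs as above ...
        'OAUTH2_SERVER_METADATA_URL': 'https://gitlab_company_org/oauth/.well-known/openid-configuration',
    }
]
```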
The issue is fixed (https://github.com/pgadmin-org/pgadmin4/issues/5666) and will be available in the pgAdmin release coming this week. You can also try the candidate build here: https://developer.pgadmin.org/builds/2023-01-09-2/
Hi, I use the Kafka Connect Docker image confluentinc/cp-kafka-connect 5.5.3, and everything ran fine with the following parameters:
...
-e "CONNECT_KEY_CONVERTER=org.apache.kafka.connect.json.JsonConverter" \
-e "CONNECT_VALUE_CONVERTER=org.apache.kafka.connect.json.JsonConverter" \
-e "CONNECT_INTERNAL_KEY_CONVERTER=org.apache.kafka.connect.json.JsonConverter" \
-e "CONNECT_INTERNAL_VALUE_CONVERTER=org.apache.kafka.connect.json.JsonConverter" \
...
Now we have introduced Schema Registry and decided to go with JsonSchemaConverter for now, not Avro. I changed the following (INTERNAL stays as it is for now):
...
-e "CONNECT_KEY_CONVERTER=io.confluent.connect.json.JsonSchemaConverter" \
-e "CONNECT_VALUE_CONVERTER=io.confluent.connect.json.JsonSchemaConverter" \
-e "CONNECT_KEY_CONVERTER_SCHEMA_REGISTRY_URL=http://<schemaregistry_url>:8081" \
-e "CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL=http://<schemaregistry_url>:8081" \
...
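For context, the Confluent images derive worker config keys from these CONNECT_* variables by stripping the prefix, lowercasing, and turning underscores into dots. A small sketch of that naming convention (not the image's actual launcher code):

```shell
# Sketch of the CONNECT_* -> worker property naming convention
to_prop() {
  echo "${1#CONNECT_}" | tr '[:upper:]' '[:lower:]' | tr '_' '.'
}
to_prop CONNECT_KEY_CONVERTER                        # key.converter
to_prop CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL  # value.converter.schema.registry.url
```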
The following error appeared:
[2021-02-04 09:24:14,637] ERROR Stopping due to error (org.apache.kafka.connect.cli.ConnectDistributed)
org.apache.kafka.common.config.ConfigException: Invalid value io.confluent.connect.json.JsonSchemaConverter for configuration key.converter: Class io.confluent.connect.json.JsonSchemaConverter could not be found.
at org.apache.kafka.common.config.ConfigDef.parseType(ConfigDef.java:727)
at org.apache.kafka.common.config.ConfigDef.parseValue(ConfigDef.java:473)
at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:466)
at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:108)
at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:129)
at org.apache.kafka.connect.runtime.WorkerConfig.<init>(WorkerConfig.java:374)
at org.apache.kafka.connect.runtime.distributed.DistributedConfig.<init>(DistributedConfig.java:316)
at org.apache.kafka.connect.cli.ConnectDistributed.startConnect(ConnectDistributed.java:93)
at org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:78)
It seems the converter is not available here by default. Do I have to install JsonSchemaConverter myself? I thought it came by default?
I am using Couchbase Java client SDK 2.7.9 and am running into a problem while trying to run automated integration tests. In such tests we usually use random ports so we can run the same thing on the same Jenkins slave (using Docker, for example).
But with the client we can specify many custom ports, just not 8092, 8093, 8094 and 8095.
The popular TestContainers modules also mention that those ports have to remain static in their Couchbase module: https://www.testcontainers.org/modules/databases/couchbase/
Apparently it is also possible to change those ports at the server level.
Example:
docker-compose.yml:
version: '3.0'
services:
  rapid_test_cb:
    build:
      context: ""
      dockerfile: cb.docker
    ports:
      - "8091"
      - "8092"
      - "8093"
      - "11210"
The docker image is 'couchbase:community-5.1.1'.
Internally the ports are the ones written above, but externally they are random. At the client level you can set bootstrapHttpDirectPort and bootstrapCarrierDirectPort, but apparently the 8092 and 8093 ports are advertised by the server side (which does not know which external ports were assigned to it).
I would like to ask you whether it is possible to change those ports at the client level and, if not, to seriously consider adding that feature.
As discussed with the Couchbase team here, it is not really possible. So we found a way to make it work using Gradle's docker-compose plugin, but I imagine it would work in other setups (TestContainers could use a similar system).
docker-compose.yml:
version: '3.0'
services:
  rapid_test_cb:
    build:
      context: ""
      dockerfile: cb.docker
    ports:
      - "${COUCHBASE_RANDOM_PORT_8091}:${COUCHBASE_RANDOM_PORT_8091}"
      - "${COUCHBASE_RANDOM_PORT_8092}:${COUCHBASE_RANDOM_PORT_8092}"
      - "${COUCHBASE_RANDOM_PORT_8093}:${COUCHBASE_RANDOM_PORT_8093}"
      - "${COUCHBASE_RANDOM_PORT_11210}:${COUCHBASE_RANDOM_PORT_11210}"
    environment:
      COUCHBASE_RANDOM_PORT_8091: ${COUCHBASE_RANDOM_PORT_8091}
      COUCHBASE_RANDOM_PORT_8092: ${COUCHBASE_RANDOM_PORT_8092}
      COUCHBASE_RANDOM_PORT_8093: ${COUCHBASE_RANDOM_PORT_8093}
      COUCHBASE_RANDOM_PORT_11210: ${COUCHBASE_RANDOM_PORT_11210}
cb.docker:
FROM couchbase:community-5.1.1
COPY configure-node.sh /opt/couchbase
#HEALTHCHECK --interval=5s --timeout=3s CMD curl --fail http://localhost:8091/pools || exit 1
RUN chmod u+x /opt/couchbase/configure-node.sh
RUN echo "{rest_port, 8091}.\n{query_port, 8093}.\n{memcached_port, 11210}." >> /opt/couchbase/etc/couchbase/static_config
CMD ["/opt/couchbase/configure-node.sh"]
configure-node.sh:
#!/bin/bash
poll() {
  # The command supplied to the function is invoked via "$@"; we check its return value with $?
  "$@"
  while [ $? -ne 0 ]
  do
    echo 'waiting for couchbase to start'
    sleep 1
    "$@"
  done
}
set -x
set -m
if [[ -n "${COUCHBASE_RANDOM_PORT_8092}" ]]; then
  sed -i "s|8092|${COUCHBASE_RANDOM_PORT_8092}|g" /opt/couchbase/etc/couchdb/default.d/capi.ini
fi
if [[ -n "${COUCHBASE_RANDOM_PORT_8091}" ]]; then
  sed -i "s|8091|${COUCHBASE_RANDOM_PORT_8091}|g" /opt/couchbase/etc/couchbase/static_config
fi
if [[ -n "${COUCHBASE_RANDOM_PORT_8093}" ]]; then
  sed -i "s|8093|${COUCHBASE_RANDOM_PORT_8093}|g" /opt/couchbase/etc/couchbase/static_config
fi
if [[ -n "${COUCHBASE_RANDOM_PORT_11210}" ]]; then
  sed -i "s|11210|${COUCHBASE_RANDOM_PORT_11210}|g" /opt/couchbase/etc/couchbase/static_config
fi
/entrypoint.sh couchbase-server &
poll curl -s localhost:${COUCHBASE_RANDOM_PORT_8091:-8091}
# Setup index and memory quota
curl -v -X POST http://127.0.0.1:${COUCHBASE_RANDOM_PORT_8091:-8091}/pools/default --noproxy '127.0.0.1' -d memoryQuota=300 -d indexMemoryQuota=300
# Setup services
curl -v http://127.0.0.1:${COUCHBASE_RANDOM_PORT_8091:-8091}/node/controller/setupServices --noproxy '127.0.0.1' -d services=kv%2Cn1ql%2Cindex
# Setup credentials
curl -v http://127.0.0.1:${COUCHBASE_RANDOM_PORT_8091:-8091}/settings/web --noproxy '127.0.0.1' -d port=${COUCHBASE_RANDOM_PORT_8091:-8091} -d username=Administrator -d password=password
# Load the rapid_test bucket
curl -X POST -u Administrator:password -d name=rapid_test -d ramQuotaMB=128 --noproxy '127.0.0.1' -d authType=sasl -d saslPassword=password -d replicaNumber=0 -d flushEnabled=1 -v http://127.0.0.1:${COUCHBASE_RANDOM_PORT_8091:-8091}/pools/default/buckets
fg 1
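The poll helper can be exercised on its own; a minimal sketch with a hypothetical flaky command that fails twice before succeeding (standing in for the curl health check):

```shell
#!/bin/bash
# Same retry loop as in configure-node.sh: re-run "$@" until it exits 0
poll() {
  "$@"
  while [ $? -ne 0 ]; do
    echo 'waiting for couchbase to start'
    sleep 1
    "$@"
  done
}

# Hypothetical stand-in for the health check: fails on the first two calls
tries_file=$(mktemp)
echo 0 > "$tries_file"
flaky() {
  n=$(cat "$tries_file")
  echo $((n + 1)) > "$tries_file"
  [ "$n" -ge 2 ]
}

poll flaky
echo "attempts: $(cat "$tries_file")"   # attempts: 3
```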
Gradle's docker compose configuration:
def findRandomOpenPortOnAllLocalInterfaces = {
    new ServerSocket(0).withCloseable { socket ->
        return socket.getLocalPort().intValue()
    }
}

dockerCompose {
    environment.put 'COUCHBASE_RANDOM_PORT_8091', findRandomOpenPortOnAllLocalInterfaces()
    environment.put 'COUCHBASE_RANDOM_PORT_8092', findRandomOpenPortOnAllLocalInterfaces()
    environment.put 'COUCHBASE_RANDOM_PORT_8093', findRandomOpenPortOnAllLocalInterfaces()
    environment.put 'COUCHBASE_RANDOM_PORT_11210', findRandomOpenPortOnAllLocalInterfaces()
}

integTest.doFirst {
    systemProperty 'com.couchbase.bootstrapHttpDirectPort', couchbase_random_port_8091
    systemProperty 'com.couchbase.bootstrapCarrierDirectPort', couchbase_random_port_11210
}
I am unable to run 2 or more models via TensorFlow Serving in Docker on a Windows 10 machine.
I have made a models.config file
model_config_list: {
  config: {
    name: "ukpred2",
    base_path: "/models/my_models/ukpred2",
    model_platform: "tensorflow"
  },
  config: {
    name: "model3",
    base_path: "/models/my_models/ukpred3",
    model_platform: "tensorflow"
  }
}
docker run -p 8501:8501 --mount type=bind,source=C:\Users\th3182\Documents\temp\models\,target=/models/my_models --mount type=bind,source=C:\Users\th3182\Documents\temp\models.config,target=/models/models.config -t tensorflow/serving --model_config_file=/models/models.config
In C:\Users\th3182\Documents\temp\models there are 2 folders, ukpred2 and ukpred3. These contain the exported folders from the trained models, e.g. 1536668276, which contains an assets folder, a variables folder and a saved_model.pb file.
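For reference, TensorFlow Serving expects one numeric version subdirectory per model, so based on the description above the mounted tree should look roughly like this (the version number is just the example from the question):

```
/models/my_models/
├── ukpred2/
│   └── 1536668276/
│       ├── saved_model.pb
│       ├── assets/
│       └── variables/
└── ukpred3/
    └── <version>/
        └── ...
```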
The error I get is
2018-09-13 15:24:50.567686: I tensorflow_serving/model_servers/main.cc:157] Building single TensorFlow model file config: model_name: model model_base_path: /models/model
2018-09-13 15:24:50.568209: I tensorflow_serving/model_servers/server_core.cc:462] Adding/updating models.
2018-09-13 15:24:50.568242: I tensorflow_serving/model_servers/server_core.cc:517] (Re-)adding model: model
2018-09-13 15:24:50.568640: E tensorflow_serving/sources/storage_path/file_system_storage_path_source.cc:369] FileSystemStoragePathSource encountered a file-system access error: Could not find base path /models/model for servable model
I can't seem to get this to work with alterations on the above. But I have managed to serve a single model with the following command:
docker run -p 8501:8501 --mount type=bind,source=C:\Users\th3182\Documents\projects\Better_Buyer2\model2\export\exporter,target=/models/model2 -e MODEL_NAME=model2 -t tensorflow/serving
You'll have to wait for the next release (1.11.0) for this to work. In the interim, you can use the image tensorflow/serving:nightly or tensorflow/serving:1.11.0-rc0
In TensorFlow Serving 2.6.0, the model server config for multiple models looks like this:
model_config_list {
  config {
    name: 'my_first_model'
    base_path: '/tmp/my_first_model/'
    model_platform: 'tensorflow'
  }
  config {
    name: 'my_second_model'
    base_path: '/tmp/my_second_model/'
    model_platform: 'tensorflow'
  }
}
Example: Run multiple models using tensorflow/serving
docker run -p 8500:8500 \
  -p 8501:8501 \
  --mount type=bind,source=/tmp/models,target=/models/my_first_model \
  --mount type=bind,source=/tmp/models,target=/models/my_second_model \
  --mount type=bind,source=/tmp/model_config,target=/models/model_config \
  -e MODEL_NAME=my_first_model \
  -t tensorflow/serving \
  --model_config_file=/models/model_config
For more information, please refer to Model Server Configuration.
The Readme on https://github.com/swagger-api/swagger-ui specifies that Swagger-UI can be run with your own file like this
docker run -p 80:8080 -e SWAGGER_JSON=/foo/swagger.json -v /bar:/foo swaggerapi/swagger-ui
which works if I translate it to
docker build . -t swagger-ui-local && \
docker run -p 80:8080 -e SWAGGER_JSON=/foo/my-file.json -v $PWD:/foo swagger-ui-local
This, however, ignores my local changes.
I can run my local changes with
npm run dev
but I can't figure out how to get this dev server to run anything else than the Petstore example.
Can anyone help me combine the two, so I can run swagger-ui with local code changes AND my own swagger.json?
Make sure you are volume mounting the correct local directory.
Locally, I had my swagger config in $PWD/src/app/swagger/swagger.yaml. Running the following worked fine:
docker run -p 80:8080 -e SWAGGER_JSON=/tmp/swagger.yaml -v `pwd`/src/app/swagger:/tmp swaggerapi/swagger-ui
Simply refreshing the Swagger-UI page or clicking the "Explore" button in the header triggered a refresh of the data from my YAML file.
You can also specify BASE_URL; excerpt from the swagger installation docs:
docker run -p 80:8080 -e BASE_URL=/swagger -e SWAGGER_JSON=/foo/swagger.json -v /bar:/foo swaggerapi/swagger-ui
I found this topic because I wanted to see a visual representation of my local swagger file, but could not seem to get swagger-ui (running in docker) to display anything other than the petstore.
Ultimately, my issue was with understanding the -e SWAGGER_JSON and -v flags, so I wanted to explain them here.
-v <path1>:<path2>
This option says: "Mount the path <path1> from my local file system at path <path2> inside the swagger-ui docker container."
-e SWAGGER_JSON=<filepath>
This option says: "By default, show the Swagger for the file at <filepath>, using the docker container's file system." The important part here is that this filepath should take into account how you set <path2> above.
Putting it all together, I ended up with the following:
docker run -p 8085:8080 -e SWAGGER_JSON=/foo/swagger.json -v `pwd`:/foo swaggerapi/swagger-ui
This says, in English: "Run my swagger-ui instance on port 8085. Mount my current working directory as '/foo' in the docker container. By default, show the swagger file at '/foo/swagger.json'."
The important thing to note is that I have a file called swagger.json in my current working directory. This command mounts my current working directory as /foo in the docker container. Then, swagger UI can pick up my swagger.json as /foo/swagger.json.
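To make the mapping concrete, here's a trivial sketch of how the two flags compose (the directory and file names are just illustrative):

```shell
host_dir="$PWD"                     # <path1> in -v <path1>:<path2>
mount_point="/foo"                  # <path2>: where host_dir appears inside the container
file="swagger.json"                 # a file that exists in host_dir
SWAGGER_JSON="$mount_point/$file"   # the container-side path SWAGGER_JSON must use
echo "$SWAGGER_JSON"                # /foo/swagger.json
```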
Here's how I ended up solving this, it also allows you to have multiple YML files:
docker run -p 80:8080 \
-e URLS_PRIMARY_NAME=FIRST \
-e URLS="[ \
{ url: 'docs/first.yml', name: 'FIRST' } \
, { url: 'docs/second.yml', name: 'SECOND' } \
]" \
-v `pwd`:/usr/share/nginx/html/docs/ \
swaggerapi/swagger-ui
I figured it out for npm run dev:
Place my-file.json in the dev-helpers folder. Then it's available from the search bar on http://localhost:3200/.
To load it automatically when opening the server, alter dev-helpers/index.html by changing
url: "http://petstore.swagger.io/v2/swagger.json"
to
url: "my-file.json"
In case you are running a Maven project with Play Framework, the following steps solved my issue:
1.) Alter the conf/routes file. Add the line below:
GET /swagger.json controllers.Assets.at(path="/public/swagger-ui",file="swagger.json")
2.) Add the swagger.json file to your Swagger-UI folder
So when you run the Maven project on a port, e.g. 7777: start the Play server using mvn play2:run, and then localhost:7777/docs will automatically pull the JSON file that was added locally.
Docker compose solution:
Create a .env file and add the following:
URLS_PRIMARY_NAME=FIRST
URLS=[ { url: 'docs/swagger.yaml', name: 'FIRST' } ]
And create a docker-compose file with contents below:
version: "3.3"
services:
  swagger-ui:
    image: swaggerapi/swagger-ui
    container_name: "swagger-ui"
    ports:
      - "80:8080"
    volumes:
      - /local/tmp:/usr/share/nginx/html/docs/
    environment:
      - URLS_PRIMARY_NAME=${URLS_PRIMARY_NAME}
      - URLS=${URLS}
The swagger.yaml file is at /local/tmp.
For people facing this issue on a Mac: it's a permissions problem. By default, since Catalina, Docker doesn't have permission to let its containers read local files on your system. Once the permission was granted, it worked for me and picked up my local swagger JSON file.
To grant privileges now, go to System preferences > Security & Privacy > Files and Folders, and add Docker for Mac and your shared directory.
Another solution, if you want to provide multiple URLs from a specific folder (not the default /usr/share/nginx/html/docs/):
docker run -p 80:8080 \
-e SWAGGER_JSON=/docs/api.yaml \
-e URLS="[ \
{ url: '/api1.yaml', name: 'API 1' }, \
{ url: '/api2.yaml', name: 'API 2' } \
]" \
-v `pwd`/docs:/docs \
swaggerapi/swagger-ui
Or for docker compose:
version: '3.8'
services:
  swagger-ui:
    image: swaggerapi/swagger-ui
    volumes:
      - ./docs:/docs
    environment:
      SWAGGER_JSON: /docs/api.yaml
      URLS: '[{ url: "/api1.yaml", name: "API 1" }, { url: "/api2.yaml", name: "API 2" }]'
Please note: SWAGGER_JSON requires an absolute path inside the container, while the URLs in URLS are relative paths served from the mounted volume.