I have this docker-compose.yml file:
otel-collector:
  image: otel/opentelemetry-collector
  command: ["--config=/etc/otel-collector-config.yaml"]
  volumes:
    - ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
  ports:
    - "1888:1888"   # pprof extension
    - "8888:8888"   # Prometheus metrics exposed by the collector
    - "8889:8889"   # Prometheus exporter metrics
    - "13133:13133" # health_check extension
    - "4317:4317"   # OTLP gRPC receiver
    - "4318:4318"   # OTLP http receiver
    - "55679:55679" # zpages extension
I see this error after running docker compose up:
otel-collector | Error: failed to get config: cannot resolve the configuration: cannot retrieve the configuration: unable to read the file file:/etc/otel-collector-config.yaml: open /etc/otel-collector-config.yaml: permission denied
otel-collector | 2022/01/09 11:15:47 collector server run finished with error: failed to get config: cannot resolve the configuration: cannot retrieve the configuration: unable to read the file file:/etc/otel-collector-config.yaml: open /etc/otel-collector-config.yaml: permission denied
How can I solve it?
We are using MinIO for local testing of S3, and we have created a docker-compose file with MinIO and our app dependencies as follows:
Docker-Compose File:
version: "2.1"
services:
  minio:
    image: minio/minio
    container_name: minio
    ports:
      - 9001:9001
    volumes:
      - minio_storage:/data
    environment:
      MINIO_ACCESS_KEY: minio
      MINIO_SECRET_KEY: minio123
      MINIO_REGION: us-east-1
    command: server /data --console-address ":9001"
    mem_limit: 512m
  populate-minio-data:
    container_name: "minio-data"
    image: minio/mc
    volumes:
      - ./hello.txt:/tmp/hello.txt
    entrypoint: >
      /bin/sh -c "
      /usr/bin/mc config host rm local;
      /usr/bin/mc config host add --quiet --api s3v4 local http://minio:9001 minio minio123;
      /usr/bin/mc mb --quiet local/somebucketname1/;
      /usr/bin/mc policy set public local/somebucketname1;
      /usr/bin/mc cp /tmp/hello.txt local/somebucketname1/hello.txt;
      "
    depends_on:
      - minio
  archive-api-app:
    image: openjdk:11
    container_name: "archive-api-app"
    ports:
      - 8091:6001
    volumes:
      - /home/apcuser/dev/projects/ea-archive-service-v2/projects/application/archive-api:/app
    command: [ 'java', '-jar', '/app/build/libs/archive-api-1.0.0.jar' ]
    env_file:
      - ./vars/default.env
volumes:
  minio_storage:
And in the Java code, I have configured the MinIO URL as the S3 endpoint as follows:
@Bean
public AmazonS3 getS3Client() {
    return AmazonS3ClientBuilder.standard()
            .withClientConfiguration(new ClientConfiguration().withMaxConnections(maxConnections)
                    .withConnectionTimeout(connectionTimeout).withMaxErrorRetry(maxRetry))
            .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration("http://minio:9001", "us-east-1"))
            .build();
}
Once I run the docker-compose file, I am able to see the MinIO UI on my local Linux machine. But I am not seeing any data in MinIO; instead, I see the error below while uploading data to MinIO:
Attaching to minio-data
minio-data | Removed `local` successfully.
minio-data | Added `local` successfully.
minio-data | mc: <ERROR> Unable to make bucket `local/somebucketname1/`. S3 API Requests must be made to API port.
minio-data | mc: <ERROR> Unable to set policy `public` for `local/somebucketname1`. S3 API Requests must be made to API port.
minio-data | `/tmp/hello.txt` -> `local/somebucketname1/hello.txt`
minio-data | mc: <ERROR> Failed to copy `/tmp/hello.txt`. S3 API Requests must be made to API port.
minio-data | Total: 0 B, Transferred: 0 B, Speed: 0 B/s
I see the same error when I try to list MinIO data from my local Linux host machine:
export AWS_ACCESS_KEY_ID=minio
export AWS_SECRET_ACCESS_KEY=minio123
export AWS_REGION=us-east-1
aws --endpoint-url http://127.0.0.1:9001 s3 ls
An error occurred (InvalidArgument) when calling the ListBuckets operation: S3 API Requests must be made to API port.
Can anyone help here, please?
The error message indicates that you need to use the API port instead of the console port when talking to MinIO with mc. Your entrypoint currently has:
/usr/bin/mc config host add --quiet --api s3v4 local http://minio:9001 minio minio123;
You need to use port 9000 (MinIO's S3 API port) instead of 9001 (the console port).
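The same change applies to the Java client endpoint and to the aws CLI call from the host. As a rough sketch, assuming MinIO's default API port 9000 (the 9000:9000 mapping is only needed if you also want to reach the API from the host):

  minio:
    image: minio/minio
    ports:
      - 9000:9000   # S3 API
      - 9001:9001   # web console
    command: server /data --console-address ":9001"
  populate-minio-data:
    image: minio/mc
    entrypoint: >
      /bin/sh -c "
      /usr/bin/mc config host add --quiet --api s3v4 local http://minio:9000 minio minio123;
      /usr/bin/mc mb --quiet local/somebucketname1/;
      "

With that, the Java EndpointConfiguration would point at http://minio:9000, and the host-side listing would use aws --endpoint-url http://127.0.0.1:9000 s3 ls.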
I'm starting on a fresh system to deploy a simple docker-compose setup with swag and authelia. Previously I've just included my "secrets" in the .env file or directly in the authelia configuration file, but I'm trying to employ some best practices here and properly hide the secrets using docker secrets. However, when starting up my containers, authelia complains about permission denied when trying to access them.
In the different guides I've looked at, none of them mentions permissions on anything other than the secrets directory/files being root-owned with 600 permissions.
My docker directory is in ~/docker, with the secrets in ~/docker/secrets. The secrets directory is root-owned with 600 permissions. My docker directories are owned by uid/gid 1100:1100, and I have the following docker-compose (slightly edited for the public):
version: "3.9"
secrets:
  authelia_duo_api_secret_key:
    file: $DOCKERSECRETS/authelia_duo_api_secret_key
  authelia_jwt_secret:
    file: $DOCKERSECRETS/authelia_jwt_secret
  authelia_notifier_smtp_password:
    file: $DOCKERSECRETS/authelia_notifier_smtp_password
  authelia_session_secret:
    file: $DOCKERSECRETS/authelia_session_secret
  authelia_storage_encryption_key:
    file: $DOCKERSECRETS/authelia_storage_encryption_key
x-environment: &default-env
  TZ: $TZ
  PUID: $PUID
  PGID: $PGID
services:
  swag:
    image: ghcr.io/linuxserver/swag
    container_name: swag
    cap_add:
      - NET_ADMIN
    environment:
      <<: *default-env
      URL: $DOMAINNAME
      SUBDOMAINS: wildcard
      VALIDATION: dns
      CERTPROVIDER: zerossl #optional
      DNSPLUGIN: cloudflare #optional
      EMAIL: <edit>
      DOCKER_MODS: linuxserver/mods:swag-dashboard
    volumes:
      - $DOCKERDIR/appdata/swag:/config
    ports:
      - 443:443
    restart: unless-stopped
  authelia:
    image: ghcr.io/authelia/authelia:latest
    container_name: authelia
    restart: unless-stopped
    volumes:
      - $DOCKERDIR/appdata/authelia:/config
    user: "1100:1100"
    secrets:
      - authelia_jwt_secret
      - authelia_session_secret
      - authelia_notifier_smtp_password
      - authelia_duo_api_secret_key
      - authelia_storage_encryption_key
    environment:
      AUTHELIA_JWT_SECRET_FILE: /run/secrets/authelia_jwt_secret
      AUTHELIA_SESSION_SECRET_FILE: /run/secrets/authelia_session_secret
      AUTHELIA_NOTIFIER_SMTP_PASSWORD_FILE: /run/secrets/authelia_notifier_smtp_password
      AUTHELIA_DUO_API_SECRET_KEY_FILE: /run/secrets/authelia_duo_api_secret_key
      AUTHELIA_STORAGE_ENCRYPTION_KEY_FILE: /run/secrets/authelia_storage_encryption_key
And the errors I'm getting in my log are:
authelia | 2022-07-28T23:45:05.872818847Z time="2022-07-28T21:15:05-02:30" level=error msg="Configuration: secrets: error loading secret path /run/secrets/authelia_session_secret into key 'session.secret': open /run/secrets/authelia_session_secret: permission denied"
authelia | 2022-07-28T23:45:05.872844527Z time="2022-07-28T21:15:05-02:30" level=error msg="Configuration: secrets: error loading secret path /run/secrets/authelia_jwt_secret into key 'jwt_secret': open /run/secrets/authelia_jwt_secret: permission denied"
authelia | 2022-07-28T23:45:05.872847757Z time="2022-07-28T21:15:05-02:30" level=error msg="Configuration: secrets: error loading secret path /run/secrets/authelia_duo_api_secret_key into key 'duo_api.secret_key': open /run/secrets/authelia_duo_api_secret_key: permission denied"
authelia | 2022-07-28T23:45:05.872850957Z time="2022-07-28T21:15:05-02:30" level=error msg="Configuration: secrets: error loading secret path /run/secrets/authelia_storage_encryption_key into key 'storage.encryption_key': open /run/secrets/authelia_storage_encryption_key: permission denied"
authelia | 2022-07-28T23:45:05.872853157Z time="2022-07-28T21:15:05-02:30" level=error msg="Configuration: secrets: error loading secret path /run/secrets/authelia_notifier_smtp_password into key 'notifier.smtp.password': open /run/secrets/authelia_notifier_smtp_password: permission denied"
authelia | 2022-07-28T23:45:05.872855307Z time="2022-07-28T21:15:05-02:30" level=error msg="Configuration: option 'jwt_secret' is required"
authelia | 2022-07-28T23:45:05.872857277Z time="2022-07-28T21:15:05-02:30" level=error msg="Configuration: duo_api: option 'secret_key' is required when duo is enabled but it is missing"
authelia | 2022-07-28T23:45:05.872859417Z time="2022-07-28T21:15:05-02:30" level=error msg="Configuration: storage: option 'encryption_key' is required"
authelia | 2022-07-28T23:45:05.872861397Z time="2022-07-28T21:15:05-02:30" level=fatal msg="Can't continue due to the errors loading the configuration"
I'm sure I'm missing something simple here. Does everything have to run as root in order to access the secrets? Does that mean changing my whole docker directory in my home folder to root ownership, just to hide credentials? I'm a little confused by this; any help would be greatly appreciated.
I had similar permission errors, which I was able to get rid of by using docker volumes instead. I oriented myself on this example.
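For context, with plain docker-compose (no swarm) secrets defined this way are essentially bind mounts that keep the host file's owner and mode, so root-owned files with 600 permissions cannot be read by a container running as 1100:1100. A minimal sketch of the volume-based variant, assuming the secret files live in ./secrets on the host and are readable by uid/gid 1100:1100:

  authelia:
    image: ghcr.io/authelia/authelia:latest
    user: "1100:1100"
    volumes:
      - $DOCKERDIR/appdata/authelia:/config
      - ./secrets:/run/secrets:ro   # plain read-only bind mount instead of the top-level secrets: block
    environment:
      AUTHELIA_JWT_SECRET_FILE: /run/secrets/authelia_jwt_secret
      AUTHELIA_SESSION_SECRET_FILE: /run/secrets/authelia_session_secret

Whichever way the files are mounted, they have to be readable by the user the container runs as (for example owned by 1100:1100 with mode 400, or group-readable).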
I am using Keycloak in a docker-compose file and try to import a realm.json file as shown below, but importing the realm fails with this error:
15:07:38,919 WARN [org.keycloak.services] (ServerService Thread Pool -- 60) KC-SERVICES0005: Unable to import realm boost from file /opt/jboss/keycloak/realm-config/keycloak-realm.json.: java.lang.IllegalArgumentException: No such provider 'declarative-user-profile'
Code from the docker-compose file:
keycloak:
  image: 'wizzn/keycloak:14'
  environment:
    KEYCLOAK_IMPORT: /opt/jboss/keycloak/realm-config/keycloak-realm.json -Dkeycloak.profile.feature.upload_scripts=enabled
  volumes:
    - ./keycloak-init:/opt/jboss/keycloak/realm-config
I am trying to build a webscraper using the Serverless Framework that users can easily run locally without having to install any dependencies on their machine. I am using serverless-offline-sqs with a local ElasticMQ server hosted in a Docker container.
Currently, I have a docker-compose file that I run, and then I run serverless offline in another terminal, which works well. That docker-compose.yml file looks like this:
# docker-compose.yml
version: '3'
services:
  database:
    image: 'mongo'
    container_name: 'database'
    environment:
      - MONGO_INITDB_DATABASE=scraper_database
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=admin
    volumes:
      - ./init-mongo.js:/docker-entrypoint-initdb.d/init-mongo.js:ro
      - ./mongo-volume:/data/db
    ports:
      - '27017-27019:27017-27019'
    command: mongod --quiet --logpath /dev/null
  sqs:
    image: softwaremill/elasticmq:latest
    container_name: 'sqs'
    ports:
      - '9324:9324'
  sqs-create:
    image: infrastructureascode/aws-cli:latest
    container_name: 'sqs-create'
    links:
      - sqs
    entrypoint: sh
    command: ./create-queues.sh
    volumes:
      - ./scripts/create-queues.sh:/project/create-queues.sh:ro
    environment:
      - AWS_ACCESS_KEY_ID=local
      - AWS_SECRET_ACCESS_KEY=local
      - AWS_DEFAULT_REGION=eu-east-1
      - AWS_ENDPOINT_URL=http://sqs:9324
This works well with no issues, and after ensuring that all of my containers are up, I can run serverless offline and my app works. I am now also trying to run Serverless in its own Docker container. I have created the following Dockerfile:
# Dockerfile
FROM node:12
RUN npm --loglevel=error install -g serverless && npm --loglevel=error install -g serverless-offline
WORKDIR /usr/src/app
COPY package*.json ./
COPY ./scripts/wait-for-it.sh ./
RUN ["chmod", "+x", "/usr/src/app/wait-for-it.sh"]
RUN npm install
COPY . .
EXPOSE 3000
I am trying to follow the Docker documentation on controlling start-up order (found here) to ensure that my queue service is up before running this. That has led me to this docker-compose.yml:
version: '3'
services:
  serverless:
    container_name: 'serverless'
    build:
      context: .
      dockerfile: Dockerfile
    env_file:
      - .env.development
    ports:
      - '3000:3000'
    depends_on:
      - sqs
    command: ["./wait-for-it.sh", "sqs:9324", "--", "serverless", "offline"]
  database:
    image: 'mongo'
    container_name: 'database'
    environment:
      - MONGO_INITDB_DATABASE=scraper_database
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=admin
    volumes:
      - ./init-mongo.js:/docker-entrypoint-initdb.d/init-mongo.js:ro
      - ./mongo-volume:/data/db
    ports:
      - '27017-27019:27017-27019'
    command: mongod --quiet --logpath /dev/null
  sqs:
    image: softwaremill/elasticmq:latest
    container_name: 'sqs'
    ports:
      - '9324:9324'
  sqs-create:
    image: infrastructureascode/aws-cli:latest
    container_name: 'sqs-create'
    links:
      - sqs
    entrypoint: sh
    command: ./create-queues.sh
    volumes:
      - ./scripts/create-queues.sh:/project/create-queues.sh:ro
    environment:
      - AWS_ACCESS_KEY_ID=local
      - AWS_SECRET_ACCESS_KEY=local
      - AWS_DEFAULT_REGION=eu-east-1
      - AWS_ENDPOINT_URL=http://sqs:9324
I am using the wait-for-it.sh script which the Docker documentation suggests, but I am getting the following error:
Successfully built 38df0769a202
Successfully tagged assessorscraper_serverless:latest
Starting sqs ... done
Starting database ... done
Recreating serverless ... done
Starting sqs-create ... done
Attaching to sqs, database, sqs-create, serverless
serverless | wait-for-it.sh: waiting 15 seconds for sqs:9324
sqs | 07:54:45.046 [main] INFO org.elasticmq.server.Main$ - Starting ElasticMQ server (1.0.0) ...
sqs | 07:54:48.133 [elasticmq-akka.actor.default-dispatcher-6] INFO akka.event.slf4j.Slf4jLogger - Slf4jLogger started
sqs | 07:54:51.385 [elasticmq-akka.actor.default-dispatcher-7] INFO o.e.rest.sqs.TheSQSRestServerBuilder - Started SQS rest server, bind address 0.0.0.0:9324, visible server address http://localhost:9324
sqs | 07:54:51.643 [elasticmq-akka.actor.default-dispatcher-7] INFO o.e.r.s.TheStatisticsRestServerBuilder - Started statistics rest server, bind address 0.0.0.0:9325
sqs | 07:54:51.649 [main] INFO org.elasticmq.server.Main$ - === ElasticMQ server (1.0.0) started in 8819 ms ===
serverless | wait-for-it.sh: sqs:9324 is available after 9 seconds
sqs-create | Creating queue TownQueue
sqs | 07:54:53.808 [elasticmq-akka.actor.default-dispatcher-6] INFO o.elasticmq.actor.QueueManagerActor - Creating queue QueueData(TownQueue,MillisVisibilityTimeout(30000),PT0S,PT0S,2021-01-07T07:54:53.494Z,2021-01-07T07:54:53.494Z,None,false,false,None,None,Map())
sqs-create exited with code 0
serverless | Serverless: Running "serverless" installed locally (in service node_modules)
serverless | Serverless: DOTENV: Loading environment variables from .env.development:
serverless | Serverless: - DATABASE_URL
serverless | Serverless: - ACCOUNT_ID
serverless | Serverless: - QUEUE_URL
serverless | Serverless: Deprecation warning: Starting with next major version, default value of provider.lambdaHashingVersion will be equal to "20201221"
serverless | More Info: https://www.serverless.com/framework/docs/deprecations/#LAMBDA_HASHING_VERSION_V2
serverless | Serverless: Deprecation warning: Starting with next major version, API Gateway naming will be changed from "{stage}-{service}" to "{service}-{stage}".
serverless | Set "provider.apiGateway.shouldStartNameWithService" to "true" to adapt to the new behavior now.
serverless | More Info: https://www.serverless.com/framework/docs/deprecations/#AWS_API_GATEWAY_NAME_STARTING_WITH_SERVICE
serverless | offline: Error: connect ECONNREFUSED 0.0.0.0:9324
serverless | at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1144:16)
serverless |
serverless | Networking Error ---------------------------------------
serverless |
serverless | Error: connect ECONNREFUSED 0.0.0.0:9324
serverless | at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1144:16)
serverless |
serverless | For debugging logs, run again after setting the "SLS_DEBUG=*" environment variable.
serverless |
serverless | Get Support --------------------------------------------
serverless | Docs: docs.serverless.com
serverless | Bugs: github.com/serverless/serverless/issues
serverless | Issues: forum.serverless.com
serverless |
serverless | Your Environment Information ---------------------------
serverless | Operating System: linux
serverless | Node Version: 12.20.1
serverless | Framework Version: 2.17.0 (local)
serverless | Plugin Version: 4.4.1
serverless | SDK Version: 2.3.2
serverless | Components Version: 3.4.4
serverless |
Am I still getting some race condition? Any suggestions here would be much appreciated!
The problem is likely in ECONNREFUSED 0.0.0.0:9324. Judging by the port number, it is an attempt to reach the sqs service, but the IP address is wrong: it should connect to sqs:9324 or to the IP address of that container. 0.0.0.0 means 'any IP address' and is normally used to bind a port, not to connect to one. Check your serverless configuration.
Also, you can easily check whether you are hitting a race condition or not. For that, simply start your services one by one in separate terminals:
docker-compose up database
docker-compose up sqs
docker-compose up sqs-create
docker-compose up serverless
If everything works when you bring the services up one by one, then it likely is a race condition. In that case you can add a restart: on-failure property to a service; this way, if a container exits with a code other than 0, Docker restarts it.
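For example, a minimal sketch of that property on the serverless service from the compose file above:

  serverless:
    build:
      context: .
      dockerfile: Dockerfile
    depends_on:
      - sqs
    restart: on-failure   # if the container exits with a non-zero code (e.g. sqs not reachable yet), Docker starts it again
    command: ["./wait-for-it.sh", "sqs:9324", "--", "serverless", "offline"]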
It turns out my issue was actually in my serverless.yml configuration. I had the following custom configuration in my serverless.yml:
custom:
  serverless-offline-sqs:
    autoCreate: true                # create queue if not exists
    apiVersion: '2012-11-05'
    endpoint: http://0.0.0.0:9324
    region: us-east-1
    accessKeyId: root
    secretAccessKey: root
    skipCacheInvalidation: false
The correct endpoint was actually http://sqs:9324. Everything else was correct!
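So the working custom block differs only in that one line; the compose service name resolves to the sqs container on the shared Docker network:

custom:
  serverless-offline-sqs:
    autoCreate: true
    apiVersion: '2012-11-05'
    endpoint: http://sqs:9324   # service name from docker-compose.yml instead of 0.0.0.0
    region: us-east-1
    accessKeyId: root
    secretAccessKey: root
    skipCacheInvalidation: false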
In CentOS 7, I'm trying to start two containers with docker-compose, but I get this error:
error: container_linux.go:235: starting container process caused keycloak/keycloak-gatekeeper
# ls
docker-compose.yml Dockerfile gatekeeper-be.conf gatekeeper-fe.conf nginx-conf.d README.MD
=================
# cat docker-compose
version: '3.2'
networks:
  network-bo-network:
    driver: "bridge"
    ipam:
      config:
        - subnet: "173.200.1.0/24"
gatekeeper-fe:
  image: keycloak/keycloak-gatekeeper:latest
  command: /keycloak-proxy --config /opt/keycloak-gatekeeper/gatekeeper.conf
  volumes:
    - ./gatekeeper-fe.conf:/opt/keycloak-gatekeeper/gatekeeper.conf
  networks:
    network-bo-network:
      ipv4_address: "173.200.1.3"
network-bo-nginx:
  image: nginx:1.17
  ports:
    - "83:80"
  volumes:
    - ./nginx-conf.d:/etc/nginx/conf.d
  networks:
    network-bo-network:
      ipv4_address: "173.200.1.5"
===========================================
cat gatekeeper-fe.conf
ClientID is the client id
client-id: client-bo-app
## ClientSecret is the secret for AS
client-secret: xxxxxxxxxxxxxxxxxxx
## DiscoveryURL is the url for the keycloak server
discovery-url: https://xxxxxxxxxxxxxxxxxxxx
## SkipOpenIDProviderTLSVerify skips the tls verification for openid provider communication
skip-openid-provider-tls-verify: true
## EnableDefaultDeny indicates we should deny by default all requests
enable-default-deny: true
## EnableRefreshTokens indicate's you wish to ignore using refresh tokens and re-auth on expiration of access token
enable-refresh-tokens: true
## EncryptionKey is the encryption key used to encrypt the refresh token
encryption-key: xxxxxxxxxxxxxxxxxxxxxxxxxxxx
## Listen is the binding interface
listen: :8081
## Upstream is the upstream endpoint i.e whom were proxying to
upstream-url: http://173.200.1.1:8082
## EnableLogging indicates if we should log all the requests
enable-logging: true
## EnableJSONLogging is the logging format
enable-json-logging: true
## PreserveHost preserves the host header of the proxied request in the upstream request
preserve-host: true
## NoRedirects informs we should hand back a 401 not a redirect
no-redirects: true
## AddClaims is a series of claims that should be added to the auth headers
add-claims:
- email
- given_name
- family_name
- name
## Resources configuration
resources:
  - uri: /api/v1/metadata
    methods:
      - GET
    white-listed: true
==================================================
# docker-compose up
WARNING: Found orphan containers (network-bo-dev_network-bo-postgres_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
network-bo-dev_network-bo-nginx_1 is up-to-date
Creating network-bo-dev_gatekeeper-fe_1 ... error
ERROR: for network-bo-dev_gatekeeper-fe_1 Cannot start service gatekeeper-fe: oci runtime error: container_linux.go:235: starting container process caused "container init exited prematurely"
ERROR: for gatekeeper-fe Cannot start service gatekeeper-fe: oci runtime error: container_linux.go:235: starting container process caused "container init exited prematurely"
ERROR: Encountered errors while bringing up the project.
You should provide a minimal reproducible example (https://stackoverflow.com/help/minimal-reproducible-example): the provided docker-compose doesn't have correct syntax.
A few obvious errors:
- the gatekeeper binary in the image lives at /opt/keycloak-gatekeeper, not /keycloak-proxy, but see the next point
- the used image sets entrypoint=/opt/keycloak-gatekeeper, so command only needs the part after the binary, e.g.: --config /opt/keycloak-gatekeeper/gatekeeper.conf (see the sketch after this list)
- the first line in gatekeeper-fe.conf should be a comment
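As an illustrative sketch only (the /etc/gatekeeper mount path is an arbitrary choice for this example, to avoid clashing with the binary's location), the service could then look like:

services:
  gatekeeper-fe:
    image: keycloak/keycloak-gatekeeper:latest
    # the image's entrypoint already runs the gatekeeper binary,
    # so command only passes its arguments
    command: ["--config", "/etc/gatekeeper/gatekeeper.conf"]
    volumes:
      - ./gatekeeper-fe.conf:/etc/gatekeeper/gatekeeper.conf
    networks:
      network-bo-network:
        ipv4_address: "173.200.1.3"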