Running Percona Server in Docker fails with socket error

I've been trying, and failing, to get Percona Server (version 8 on CentOS) running as a lone service inside a docker-compose.yml file. The error that keeps coming up is:
mysql | 2020-03-16T23:04:25.189164Z 0 [ERROR] [MY-010270] [Server] Can't start server : Bind on unix socket: File name too long
mysql | 2020-03-16T23:04:25.189373Z 0 [ERROR] [MY-010258] [Server] Do you already have another mysqld server running on socket: /var/lib/mysql/mysql.sock ?
mysql | 2020-03-16T23:04:25.190581Z 0 [ERROR] [MY-010119] [Server] Aborting
mysql | 2020-03-16T23:04:26.438533Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.18-9) Percona Server (GPL), Release 9, Revision 53e606f.
My docker-compose.yml file is as follows:
version: '3.7'
services:
  mysql:
    container_name: mysql
    image: percona:8-centos
    volumes:
      - ./docker/mysql/setup:/docker-entrypoint-initdb.d
      - ./docker/mysql/data:/var/lib/mysql
      - ./docker/mysql/conf:/etc/mysql/conf.d:ro
    environment:
      - MYSQL_ROOT_PASSWORD=mypassword
      - MYSQL_DATABASE=<redacted>
      - MYSQL_USER=<redacted>
      - MYSQL_PASSWORD=<redacted>
    stop_grace_period: 20s
    restart: always
A few things to note:
My my.cnf file, which lives on the host under docker/mysql/conf/, declares the socket file location as /var/run/mysql.sock instead of /var/lib/mysql/mysql.sock (a trimmed-down sketch follows below). Why would mysqld still be trying to use a different socket path than the one declared in my own config file? (And yes, my config file IS being picked up: back when it contained deprecated options, mysqld complained and failed to start.)
At first I left the socket path setting alone and let it use the default location; that resulted in the exact same error.
The documentation on the Percona Docker Hub page contradicts itself. One important example: it names the in-container config directory as /etc/my.cnf.d, but the accompanying example uses /etc/mysql/conf.d instead. The discrepancy makes me lose confidence in the rest of the documentation, and that doubt now seems well-placed, since the official image fails to run properly out of the box.
So, does anyone know how to use the official Percona images? (Or am I going to be forced to roll my own service using my own Dockerfile?)
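For reference, a trimmed-down sketch of the my.cnf described above (only the socket setting comes from my actual file; the [client] section is just to keep the command-line client in sync):
[mysqld]
socket=/var/run/mysql.sock

[client]
socket=/var/run/mysql.sock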

I was getting the same error on macOS.
Taking a hint from the error ("File name too long"), I moved my entire project into my home directory, so that my compose file sat at ~/myproject/docker-compose.yml. (Maybe you can try moving it to the root directory, just to avoid any confusion about what ~/ expands to.)
That did the trick, and the mysql image was up again without any error.
PS: I am not saying that you need to place your project in your home directory, but you do need to find a short enough folder path that works for your project.
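For what it's worth, unix domain socket paths are limited to roughly 104 bytes on macOS (108 on Linux), which is presumably why a deeply nested project directory triggers "File name too long" when the data directory (and thus the socket) is bind-mounted from the host. A quick way to check the length of the would-be socket path (the path below is illustrative; adjust it to your layout):
echo -n "$HOME/myproject/docker/mysql/data/mysql.sock" | wc -c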

Keycloak 17.0.1 Import Realm on Docker / Docker-Compose Startup

I am trying to find a way to import a realm in Keycloak version 17.0.1 at docker container startup (with docker-compose). I want to do this in "start" mode, not "start-dev" mode, because in my experience so far "start-dev" in 17 forces an H2 in-memory database and won't let me point to an external DB, which I'd like to do to more closely resemble dev/prod environments when running locally.
Things I've tried:
1) It appears, according to recent conversations on GitHub (issues 10216 and 10754, to name a couple), that the environment variable that used to trigger this (KEYCLOAK_IMPORT, or KC_IMPORT_REALM in some versions) no longer does. In my attempts it did not work for version 17.0.1 either.
2) I've also tried appending the following command in my docker-compose setup for Keycloak, with no luck (also tried with just "start"); it appears to simply ignore the argument (no error or anything):
command: ["start-dev", "-Dkeycloak.import=/tmp/my-realm.json"]
3) I tried running the kc.sh "import" command in the Dockerfile (both before and after the ENTRYPOINT/start) but got this error: Unmatched arguments from index 1: '/opt/keycloak/bin/kc.sh', 'import', '--file', '/tmp/my-realm.json'
4) I've shifted gears and have tried to see if it's possible to just do it after the container starts (even with manual intervention), just to get some sanity restored. I attempted to use the admin-cli, but after quite a few attempts at different points/endpoints etc. I just get that localhost refuses to connect (see the update after this list).
bin/kcadm.sh config credentials --server http://localhost:8080/auth --realm master --user admin --password adminpassword
It responds as follows, depending on which port I hit:
8080: Failed to send request - Connect to localhost:8080 [localhost/127.0.0.1] failed: Connection refused (Connection refused)
8443: Failed to send request - localhost:8443 failed to respond
I am sure there are other ways that I've tried and am forgetting - I've kind of spun my wheels at this point.
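Update on attempt 4: two things may be at play here. The Quarkus-based Keycloak 17 dropped the /auth context path by default, so the --server URL probably should not contain /auth anymore; and in "start" (production) mode, HTTP is disabled unless explicitly enabled (KC_HTTP_ENABLED=true), which would explain 8080 refusing connections. A sketch of what I mean, run against the compose service (only if HTTP is enabled):
docker-compose exec keycloak /opt/keycloak/bin/kcadm.sh config credentials --server http://localhost:8080 --realm master --user admin --password adminpassword
If only HTTPS is enabled, point --server at https://localhost:8443 instead and configure a truststore first (kcadm.sh config truststore), since the keystore generated in the Dockerfile is self-signed.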
My code (largely the same as the latest docs on the Keycloak website):
Dockerfile:
FROM quay.io/keycloak/keycloak:17.0.1 as builder
ENV KC_METRICS_ENABLED=true
ENV KC_FEATURES=token-exchange
ENV KC_DB=postgres
RUN /opt/keycloak/bin/kc.sh build
FROM quay.io/keycloak/keycloak:17.0.1
COPY --from=builder /opt/keycloak/lib/quarkus/ /opt/keycloak/lib/quarkus/
WORKDIR /opt/keycloak
# for demonstration purposes only, please make sure to use proper certificates in production instead
ENV KC_HOSTNAME=localhost
RUN keytool -genkeypair -storepass password -storetype PKCS12 -keyalg RSA -keysize 2048 -dname "CN=server" -alias server -ext "SAN:c=DNS:localhost,IP:127.0.0.1" -keystore conf/server.keystore
ENTRYPOINT ["/opt/keycloak/bin/kc.sh", "start" ]
docker-compose.yml:
version: "3"
services:
  keycloak:
    build:
      context: .
    volumes:
      - ./my-realm.json:/tmp/my-realm.json:ro
    env_file:
      - .env
    environment:
      KC_DB_URL: ${POSTGRESQL_URL}
      KC_DB_USERNAME: ${POSTGRESQL_USER}
      KC_DB_PASSWORD: ${POSTGRESQL_PASS}
      KEYCLOAK_ADMIN: admin
      KEYCLOAK_ADMIN_PASSWORD: adminpassword
    ports:
      - 8080:8080
      - 8443:8443 # <-- I've tried with only 8080 and with only 8443 as well; 8443 appears to be the only one where I can even get the admin console UI to work, though.
    networks:
      - my_net
networks:
  my_net:
    name: my_net
Any suggestion on how to do this in a programmatic, "dev-opsy" way would be greatly appreciated. I'd really like to get this to work but am confused about how to get past this.
Importing a realm during docker initialization through configuration is not supported yet; see https://github.com/keycloak/keycloak/issues/10216. They might release this feature in the next release, v18.
The workaround people have shared in the GitHub thread is to create your own Docker image and import the realm from the JSON file while building it:
FROM quay.io/keycloak/keycloak:17.0.1
# Make the realm configuration available for import
COPY realm-and-users.json /opt/keycloak_import/
# Import the realm and user
RUN /opt/keycloak/bin/kc.sh import --file /opt/keycloak_import/realm-and-users.json
# The Keycloak server is configured to listen on port 8080
EXPOSE 8080
EXPOSE 8443
# Import the realm on start-up
CMD ["start-dev"]
As @tboom said, this was not yet supported by Keycloak 17.x. It is now supported by Keycloak 18.x using the --import-realm option:
bin/kc.[sh|bat] [start|start-dev] --import-realm
This feature does not work the way it used to: the JSON file path must no longer be specified. Instead, the JSON file just has to be copied into the <KEYCLOAK_DIR>/data/import directory (multiple JSON files are supported). Note that the import operation is skipped if the realm already exists, so incremental updates are not possible anymore (at least for the time being).
This feature is documented at https://www.keycloak.org/server/importExport#_importing_a_realm_during_startup.
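For example, a minimal compose fragment for Keycloak 18 (the image tag and file name are illustrative):
services:
  keycloak:
    image: quay.io/keycloak/keycloak:18.0.0
    command: ["start-dev", "--import-realm"]
    volumes:
      - ./my-realm.json:/opt/keycloak/data/import/my-realm.json:ro
    environment:
      KEYCLOAK_ADMIN: admin
      KEYCLOAK_ADMIN_PASSWORD: adminpassword
    ports:
      - 8080:8080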

How does one connect two services in the local docker-compose network?

I have followed the instructions, I think, and have come up with the following configuration:
version: '3.9'
services:
  flask:
    image: ops:imgA
    ports:
      - 5000:5000
    volumes:
      - /opt/models:/opt/models
    entrypoint: demo flask
  streamlit:
    image: ops:imgB
    ports:
      - 8501:8501
    entrypoint: streamlit run --server.port 8501 demo -- stream --flask-hostname flask
The --flask-hostname flask argument sets the host name used in the HTTP connection, i.e. http://flask:5000. I can set it to anything.
The basic problem here is that I can spin up one of these images, install tmux, and run everything within a single container.
But when I split it across multiple images and use docker-compose up (which seems better than tmux), the containers can't seem to connect to each other.
I have rattled around the documentation on Docker's website and have moved on to the troubleshooting stage. This seems like something that should "just work" (since there are few questions along these lines). I have total control of the box I'm using and can open or close whatever ports are needed.
Mainly, I am trying to figure out how to allow, with 100% default settings nothing complicated, these two services (flask and streamlit) to speak to each other.
There must be 1 or 2 settings that I need to change, and that is it.
Any ideas?
Update
I can access all of the services externally, so I am going to open up external connections between the services (using the external IP) as a "just work" quick fix, but obviously getting the composition to work internally would be the best option.
I have also confirmed that the docker-compose and docker versions are up to date.
Update-2: changed the Flask bind address from 127.0.0.1 to 0.0.0.0
Flask output:
flask_1 | * Serving Flask app "flask" (lazy loading)
flask_1 | * Environment: production
flask_1 | WARNING: This is a development server. Do not use it in a production deployment.
flask_1 | Use a production WSGI server instead.
flask_1 | * Debug mode: on
flask_1 | INFO:werkzeug: * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
flask_1 | 2020-12-19 02:22:16.449 INFO werkzeug: * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
flask_1 | INFO:werkzeug: * Restarting with inotify reloader
flask_1 | 2020-12-19 02:22:16.465 INFO werkzeug: * Restarting with inotify reloader
flask_1 | WARNING:werkzeug: * Debugger is active!
flask_1 | 2020-12-19 02:22:22.003 WARNING werkzeug: * Debugger is active!
Streamlit:
streamlit_1 |
streamlit_1 | You can now view your Streamlit app in your browser.
streamlit_1 |
streamlit_1 | Network URL: http://172.18.0.3:8501
streamlit_1 | External URL: http://71.199.156.142:8501
streamlit_1 |
streamlit_1 | 2020-12-19 02:22:11.389 Generating new fontManager, this may take some time...
And the streamlit error message:
ConnectionError:
HTTPConnectionPool(host='flask', port=5000):
Max retries exceeded with url: /foo/bar
(Caused by NewConnectionError(
'<urllib3.connection.HTTPConnection object at 0x7fb860501d90>:
Failed to establish a new connection:
[Errno 111] Connection refused'
)
)
Update-3: Hitting refresh fixed it.
The server process must be listening on the special "all interfaces" address 0.0.0.0. Many development-type servers by default listen on "localhost only" 127.0.0.1, but in Docker each container has its own private notion of localhost. If you use tmux or docker exec to run multiple processes inside a container, they have the same localhost and can connect to each other, but if the client and server are running in different containers, the request doesn't arrive on the server's localhost interface, and if the server is listening on "localhost only" it won't receive it.
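As a quick illustration: if the server were started with Flask's own CLI (the question uses a custom demo entrypoint, so the exact flag may differ), binding to all interfaces looks like this:
flask run --host=0.0.0.0 --port=5000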
Your setup is otherwise correct, with only the docker-compose.yml you include in the question. Some other common problems:
You must connect to the port the server process is listening on inside the container. If you remap it externally with ports:, that remapping is ignored for container-to-container traffic; you'd connect to the second ports: number (the container port). Correspondingly, ports: aren't required. (expose: also isn't required and doesn't do anything at all.)
The client may need to wait for the server to start up. If the client depends_on: [flask] the host name will usually resolve (unless the server dies immediately) but if it takes a while to start up you will still get "connection refused" errors. See Docker Compose wait for container X before starting Y.
Neither container may use network_mode: host. This disables Docker's networking features entirely.
If you manually declare networks:, both containers need to be on the same network. You do not need to explicitly create a network for inter-container communication to work: Compose provides a default network for you, which is used if nothing else is declared.
Use the Compose service names as host names. You don't need to explicitly specify container_name: or links:.
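A quick way to verify name resolution and connectivity from the client side, assuming curl is available in the image:
docker-compose exec streamlit curl -v http://flask:5000/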

sbt immediately stops when running in docker container

I'm trying to set up our project to use Docker. I've created an image based on this image: https://github.com/mozilla/docker-sbt.
The command in my docker-compose.yml file is:
sbt -J-XX:MaxMetaspaceSize=500m -Dlogger.file=conf/dev-logback.xml -Dconfig.file=$dev -Dhttp.port=$srfPort -Dhttps.port=9443 -Djdbcdslog.showTime=true -J-Dakka.http.parsing.max-uri-length=16k run
(This command works in the non-Docker environment.)
Below is the output of docker-compose up. Notice that right after the server starts it says "Stopping server...". When I watch the console, there is no pause between the "Listening" line and the "Stopping" line.
[info] Loading settings for project my-project from build.sbt ...
[info] Set current project to my-project (in build file:/home/my-name/my-project/app/)
--- (Running the application, auto-reloading is enabled) ---
[info] p.c.s.AkkaHttpServer - Listening for HTTP on /0.0.0.0:9000
[info] p.c.s.AkkaHttpServer - Listening for HTTPS on /0.0.0.0:9443
(Server started, use Enter to stop and go back to the console...)
[info] p.c.s.AkkaHttpServer - Stopping server...
[warn] StaticRoutesGenerator is deprecated. Please use InjectedRoutesGenerator or a custom router instead.
[warn] For more info see https://www.playframework.com/documentation/2.6.x/JavaRouting#Dependency-Injection
[warn] StaticRoutesGenerator is deprecated. Please use InjectedRoutesGenerator or a custom router instead.
[warn] For more info see https://www.playframework.com/documentation/2.6.x/JavaRouting#Dependency-Injection
2020-04-07 21:32:10,984~[WARN]~Logger configuration in conf files is deprecated and has no effect. Use a logback configuration file instead.~
2020-04-07 21:32:14,313~[INFO]~Slf4jLogger started~
2020-04-07 21:32:15,413~[INFO]~Database [default] initialized at jdbc:mysql://srf_db:3306/srf2?socketTimeout=10000&verifyServerCertificate=false&useSSL=false&requireSSL=false~
2020-04-07 21:32:15,481~[INFO]~Creating Pool for datasource 'default'~
2020-04-07 21:32:15,514~[INFO]~HikariPool-1 - Starting...~
2020-04-07 21:32:16,066~[INFO]~HikariPool-1 - Start completed.~
2020-04-07 21:32:40,261~[INFO]~Application started (Dev)~
2020-04-07 21:32:40,319~[INFO]~Shutting down connection pool.~
2020-04-07 21:32:40,334~[INFO]~HikariPool-1 - Shutdown initiated...~
2020-04-07 21:32:40,366~[INFO]~HikariPool-1 - Shutdown completed.~
[success] Total time: 140 s (02:20), completed Apr 8, 2020 1:32:40 AM
I have no idea where to look for the error or what the nature of the error could be: a missing file? a missing folder? a missing dependency? a missing setup command? permission error?
I am at a loss for how to debug further. If you have any guesses about where to look, it would be greatly appreciated.
[edit]
Thanks to a hint from cbley below, I figured out what my docker-compose.yml file needs to look like:
sbt:
  image: my-image
  stdin_open: true
  tty: true
  depends_on:
    - db
  environment:
    - MYSQL_HOST=db
    - USER
  volumes:
    - ./:/home/docker1/
  command: bash -c "sbt etc..."
  ports:
    - "9000:9000"
    - "9443:9443"
Notice the added stdin_open and tty lines.
When running a Play service interactively (in development mode), it waits for the user to press Enter.
Inside the container, standard input is not connected to a TTY, so the read from standard input fails immediately. That stops the server, which in turn exits sbt, since the run task was the only one running.
Alternatively, you can build a Docker image from your Play service by running sbt docker:publishLocal (then there's no need to have sbt inside the container at all).
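For example (Play ships with sbt-native-packager, so this should work out of the box; the image name and tag come from the project's name and version settings, so yours will differ):
sbt docker:publishLocal
docker run -p 9000:9000 my-project:1.0-SNAPSHOT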

Filebeat not running using docker-compose: setting 'filebeat.prospectors' has been removed

I'm trying to launch filebeat using docker-compose (I intend to add other services later on) but every time I execute the docker-compose.yml file, the filebeat service always ends up with the following error:
filebeat_1 | 2019-08-01T14:01:02.750Z ERROR instance/beat.go:877 Exiting: 1 error: setting 'filebeat.prospectors' has been removed
filebeat_1 | Exiting: 1 error: setting 'filebeat.prospectors' has been removed
I discovered the error by accessing the docker-compose logs.
My docker-compose file is as simple as it can be at the moment. It simply calls a filebeat Dockerfile and launches the service immediately after.
Next to my Dockerfile for filebeat I have a simple config file (filebeat.yml), which is copied to the container, replacing the default filebeat.yml.
If I execute the Dockerfile using the docker command, the filebeat instance works just fine: it uses my config file and identifies the "output.json" file as well.
I'm currently using filebeat 7.2, and I know that "filebeat.prospectors" is no longer a valid setting. I also know for sure that this particular setting isn't coming from my filebeat.yml file (you'll find it below).
It seems that, when using docker-compose, the container is reading a different configuration file instead of the one the Dockerfile copies in, but so far I haven't been able to figure out how, why, or how to fix it...
Here's my docker-compose.yml file:
version: "3.7"
services:
  filebeat:
    build: "./filebeat"
    command: filebeat -e -strict.perms=false
The filebeat.yml file:
filebeat.inputs:
  - paths:
      - '/usr/share/filebeat/*.json'
    fields_under_root: true
    fields:
      tags: ['json']
output:
  logstash:
    hosts: ['localhost:5044']
The Dockerfile file:
FROM docker.elastic.co/beats/filebeat:7.2.0
COPY filebeat.yml /usr/share/filebeat/filebeat.yml
COPY output.json /usr/share/filebeat/output.json
USER root
RUN chown root:filebeat /usr/share/filebeat/filebeat.yml
RUN mkdir /usr/share/filebeat/dockerlogs
USER filebeat
The output I'm expecting should be similar to the following, which comes from the successful runs I get when executing it as a single container.
The ERROR is expected because I don't have logstash configured at the moment.
INFO crawler/crawler.go:72 Loading Inputs: 1
INFO log/input.go:148 Configured paths: [/usr/share/filebeat/*.json]
INFO input/input.go:114 Starting input of type: log; ID: 2772412032856660548
INFO crawler/crawler.go:106 Loading and starting Inputs completed. Enabled inputs: 1
INFO log/harvester.go:253 Harvester started for file: /usr/share/filebeat/output.json
INFO pipeline/output.go:95 Connecting to backoff(async(tcp://localhost:5044))
ERROR pipeline/output.go:100 Failed to connect to backoff(async(tcp://localhost:5044)): dial tcp [::1]:5044: connect: cannot assign requested address
INFO pipeline/output.go:93 Attempting to reconnect to backoff(async(tcp://localhost:5044)) with 1 reconnect attempt(s)
ERROR pipeline/output.go:100 Failed to connect to backoff(async(tcp://localhost:5044)): dial tcp [::1]:5044: connect: cannot assign requested address
INFO pipeline/output.go:93 Attempting to reconnect to backoff(async(tcp://localhost:5044)) with 2 reconnect attempt(s)
I managed to figure out what the problem was.
I needed to map the location of the config file and the logs directory in the docker-compose file, using the volumes key:
version: "3.7"
services:
  filebeat:
    build: "./filebeat"
    command: filebeat -e -strict.perms=false
    volumes:
      - ./filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml
      - ./filebeat/logs:/usr/share/filebeat/dockerlogs
Finally I just had to execute the docker-compose command, and everything started working properly:
docker-compose -f docker-compose.yml up -d
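As a sanity check, filebeat has a built-in config test; running it through compose confirms which file is actually being read (paths as in the Dockerfile above):
docker-compose exec filebeat filebeat test config -c /usr/share/filebeat/filebeat.yml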

Neo4j + Docker - unable to create JVM

I'm trying to build a Docker container for a Neo4j DB. While running the DB locally isn't an issue, the container is having trouble starting the JVM. Looking through the neo4j:3.2.2 image I'm building my own Dockerfile from, I can't see us using different versions of the JRE. The issue seems to stem from the neo4j.conf: it crashes on unrecognized VM option flags such as UseG1GC and OmitStackTraceInFastThrow.
The Dockerfile is fairly short
FROM neo4j:3.2.2
ADD ./neo4j.conf /var/lib/neo4j/conf/.
ADD ./data/. /var/lib/neo4j/import
ADD ./scripts/. .
I've also got a docker-compose.yml
version: '2'
services:
  neo4j:
    image: eu.gcr.io/tine-matsans-v2/neo4j:develop
    container_name: neo4j
    build:
      context: ./neo4j/.
    ports:
      - "7474:7474"
      - "7473:7473"
      - "7687:7687"
    environment:
      - NEO4J_USERNAME=neo4j
      - NEO4J_PASSWORD=litago
I'm on a Windows 10 machine, but the image builds a Linux container. My colleague has no issues whatsoever running the container with the same configs, though he's using a Mac. That shouldn't be relevant, as the issue is within the container.
neo4j | Active database: graph.db
neo4j | Directories in use:
neo4j | home: /var/lib/neo4j
neo4j | config: /var/lib/neo4j/conf
neo4j | logs: /var/lib/neo4j/logs
neo4j | plugins: /var/lib/neo4j/plugins
neo4j | import: /var/lib/neo4j/import
neo4j | data: /var/lib/neo4j/data
neo4j | certificates: /var/lib/neo4j/certificates
neo4j | run: /var/lib/neo4j/run
neo4j | Starting Neo4j.
neo4j | Unrecognized VM option 'UseG1GC
neo4j | Did you mean '(+/-)UseG1GC'?
neo4j | Error: Could not create the Java Virtual Machine.
neo4j | Error: A fatal exception has occurred. Program will exit.
Has anyone run into similar issues? I've searched through several Stack Overflow posts and tried to read up on how the JVM and containers work, but I can't find any solid information to help me sort this out.
I ran into this same issue. It turned out to be the line endings on the neo4j.conf file. I used VS Code to switch the line endings to 'LF', ran docker-compose up, and everything worked. Hope this helps.
Visual Studio Code: How to show line endings
I had to stop the docker-machine, open the conf file in Notepad++, convert it to UTF-8 (even though it was already UTF-8), change the line endings to Unix, save, then run docker-machine start and docker-compose up. It works.
I solved this easily with Sublime Text. You can check the current line endings under menu -> View -> Line Endings; just set it to Unix and save.
I hope this helps others.
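If you prefer checking and fixing the line endings from a shell, something like this works too (dos2unix may need to be installed; the sed variant is equivalent):
file neo4j.conf        # reports "with CRLF line terminators" when Windows endings are present
dos2unix neo4j.conf    # or: sed -i 's/\r$//' neo4j.conf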
