I'm trying to set up our project to use Docker. I've created an image based on the one from https://github.com/mozilla/docker-sbt.
The command in my docker-compose.yml file is:
sbt -J-XX:MaxMetaspaceSize=500m -Dlogger.file=conf/dev-logback.xml -Dconfig.file=$dev -Dhttp.port=$srfPort -Dhttps.port=9443 -Djdbcdslog.showTime=true -J-Dakka.http.parsing.max-uri-length=16k run
(This command works in the non-Docker environment.)
Below is the output of docker-compose up. Notice that right after the server starts, it says "Stopping server...". Watching the console, there is no pause between the "Listening" lines and the "Stopping" line.
[info] Loading settings for project my-project from build.sbt ...
[info] Set current project to my-project (in build file:/home/my-name/my-project/app/)
--- (Running the application, auto-reloading is enabled) ---
[info] p.c.s.AkkaHttpServer - Listening for HTTP on /0.0.0.0:9000
[info] p.c.s.AkkaHttpServer - Listening for HTTPS on /0.0.0.0:9443
(Server started, use Enter to stop and go back to the console...)
[info] p.c.s.AkkaHttpServer - Stopping server...
[warn] StaticRoutesGenerator is deprecated. Please use InjectedRoutesGenerator or a custom router instead.
[warn] For more info see https://www.playframework.com/documentation/2.6.x/JavaRouting#Dependency-Injection
[warn] StaticRoutesGenerator is deprecated. Please use InjectedRoutesGenerator or a custom router instead.
[warn] For more info see https://www.playframework.com/documentation/2.6.x/JavaRouting#Dependency-Injection
2020-04-07 21:32:10,984~[WARN]~Logger configuration in conf files is deprecated and has no effect. Use a logback configuration file instead.~
2020-04-07 21:32:14,313~[INFO]~Slf4jLogger started~
2020-04-07 21:32:15,413~[INFO]~Database [default] initialized at jdbc:mysql://srf_db:3306/srf2?socketTimeout=10000&verifyServerCertificate=false&useSSL=false&requireSSL=false~
2020-04-07 21:32:15,481~[INFO]~Creating Pool for datasource 'default'~
2020-04-07 21:32:15,514~[INFO]~HikariPool-1 - Starting...~
2020-04-07 21:32:16,066~[INFO]~HikariPool-1 - Start completed.~
2020-04-07 21:32:40,261~[INFO]~Application started (Dev)~
2020-04-07 21:32:40,319~[INFO]~Shutting down connection pool.~
2020-04-07 21:32:40,334~[INFO]~HikariPool-1 - Shutdown initiated...~
2020-04-07 21:32:40,366~[INFO]~HikariPool-1 - Shutdown completed.~
[success] Total time: 140 s (02:20), completed Apr 8, 2020 1:32:40 AM
I have no idea where to look for the error or what its nature could be: a missing file? A missing folder? A missing dependency? A missing setup command? A permissions error?
I am at a loss for how to debug further. If you have any guesses about where to look, they would be greatly appreciated.
[edit]
Thanks to a hint from cbley below, I figured out what my docker-compose.yml file needs to look like:
sbt:
  image: my-image
  stdin_open: true
  tty: true
  depends_on:
    - db
  environment:
    - MYSQL_HOST=db
    - USER
  volumes:
    - ./:/home/docker1/
  command: bash -c "sbt etc..."
  ports:
    - "9000:9000"
    - "9443:9443"
Notice the added stdin_open and tty lines.
When running a Play service interactively (in development mode), it waits for the user to press Enter.
Inside the container, standard input is not connected to a TTY, so the read from standard input fails immediately. That stops the server, which in turn exits sbt, since run was the only task.
You can build a Docker image from your Play service by running sbt docker:publishLocal (there's no need to have sbt inside the container at all).
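As a sketch of that approach (the image name and tag below are assumptions; sbt-native-packager, which Play bundles, derives them from the project's name and version):

sbt docker:publishLocal
docker run --rm -p 9000:9000 my-project:1.0-SNAPSHOT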
I have seen many issues similar to this one, but none seem to resolve or describe my exact problem.
I have configured an Azure DevOps pipeline to use a container like below:
container:
  image: ptrthomas/karate-chrome
  options: --cap-add=SYS_ADMIN
I have uploaded the contents of the example from the jobserver demo to a repository and then run the following:
steps:
  - script: mvn clean test -DargLine='-Dkarate.env=docker' -Dtest=WebRunner
It is my understanding (and I can see from the logs) that the files are loaded into the container and the script command is executed inside it. That script command is therefore the equivalent of docker exec -it -w /src karate mvn clean test -DargLine='-Dkarate.env=docker' -Dtest=WebRunner, just without having to exec into the container manually.
When I run the example locally, it executes the tests with no issues, but in Azure DevOps it fails at the point where the tests actually start running, throwing this error:
14:16:37.388 [main] ERROR com.intuit.karate - karate.org.apache.http.conn.HttpHostConnectException: Connect to localhost:9222 [localhost/127.0.0.1] failed: Connection refused (Connection refused), http call failed after 2 milliseconds for url: http://localhost:9222/json
14:16:39.388 [main] DEBUG com.intuit.karate.shell.Command - attempt #4 waiting for http to be ready at: http://localhost:9222/json
14:16:39.391 [main] DEBUG com.intuit.karate - request: 5 > GET http://localhost:9222/json
5 > Host: localhost:9222
5 > Connection: Keep-Alive
5 > User-Agent: Apache-HttpClient/4.5.13 (Java/1.8.0_275)
5 > Accept-Encoding: gzip,deflate
Looking at other issues, there have been suggestions to specify the driver in the feature files with this line:
* configure driver = { type: 'chrome', executable: 'chrome' }
but a) that hasn't worked for me, and b) shouldn't the karate-chrome Docker image render this configuration unnecessary, since it should be no different from the container I run locally?
Any help appreciated!
Thanks
The only thing I can think of is that the Azure config does not call the ENTRYPOINT of the image.
Maybe you should try to create a container from scratch (one that does extensive logging) and see what happens, using the Karate one as a reference.
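If the ENTRYPOINT is indeed being skipped, a workaround sketch is to start it by hand before running the tests. The /entrypoint.sh path is an assumption about the karate-chrome image (verify it with docker inspect ptrthomas/karate-chrome), and the sleep is a crude wait:

steps:
  - script: |
      nohup /entrypoint.sh &   # start the image's init (normally run by Docker) so Chrome listens on :9222
      sleep 5                  # crude wait for Chrome to come up
      mvn clean test -DargLine='-Dkarate.env=docker' -Dtest=WebRunner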
I am trying to run a GitLab pipeline that runs my test cases using a Maven image. My test cases use Testcontainers, but somehow when I try to run Testcontainers inside the Maven image it doesn't work. I tried a couple of solutions from different online posts, but nothing worked.
.gitlab-ci.yml
services:
  - docker:dind

variables:
  # Instruct Testcontainers to use the daemon of DinD.
  DOCKER_HOST: "tcp://docker:2375"
  # Instruct Docker not to start over TLS.
  DOCKER_TLS_CERTDIR: ""
  # Improve performance with overlayfs.
  DOCKER_DRIVER: overlay2

test:
  image: maven:3.8.2-jdk-11
  stage: test
  script:
    - chmod -R 777 /var/
    - mvn -e test -f project_folder/pom.xml
The error I am getting is:
10:58:18.653 [main] DEBUG org.testcontainers.dockerclient.DockerClientProviderStrategy - EnvironmentAndSystemPropertyClientProviderStrategy: failed with exception TimeoutException (Timeout waiting for result with exception). Root cause ConnectException (Connection refused (Connection refused))
10:58:18.654 [main] DEBUG org.testcontainers.dockerclient.DockerClientProviderStrategy - UnixSocketClientProviderStrategy: failed with exception InvalidConfigurationException (Could not find unix domain socket). Root cause NoSuchFileException (/var/run/docker.sock)
10:58:18.655 [main] ERROR org.testcontainers.dockerclient.DockerClientProviderStrategy - Could not find a valid Docker environment. Please check configuration. Attempted configurations were:
10:58:18.655 [main] ERROR org.testcontainers.dockerclient.DockerClientProviderStrategy - EnvironmentAndSystemPropertyClientProviderStrategy: failed with exception TimeoutException (Timeout waiting for result with exception). Root cause ConnectException (Connection refused (Connection refused))
10:58:18.655 [main] ERROR org.testcontainers.dockerclient.DockerClientProviderStrategy - UnixSocketClientProviderStrategy: failed with exception InvalidConfigurationException (Could not find unix domain socket). Root cause NoSuchFileException (/var/run/docker.sock)
10:58:18.655 [main] ERROR org.testcontainers.dockerclient.DockerClientProviderStrategy - As no valid configuration was found, execution cannot continue
10:58:18.658 [main] ERROR ***********.***Service - Error in starting Container : java.lang.IllegalStateException: Could not find a valid Docker environment.
It seems like your service is not exposed correctly. I believe this is a TLS issue with your Docker socket. Have you tried starting your DinD service with TLS explicitly disabled?
variables:
  DOCKER_HOST: tcp://docker:2375
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ""

services:
  - name: docker:20-dind
    alias: docker
    command: ["--tls=false"] # this right here
I have followed the instructions, I think, and have come up with the following configuration:
version: '3.9'

services:
  flask:
    image: ops:imgA
    ports:
      - 5000:5000
    volumes:
      - /opt/models:/opt/models
    entrypoint: demo flask

  streamlit:
    image: ops:imgB
    ports:
      - 8501:8501
    entrypoint: streamlit run --server.port 8501 demo -- stream --flask-hostname flask
The --flask-hostname flask argument sets the host name used in the HTTP connection, i.e. http://flask:5000; I can set it to anything.
The basic problem here is that I can spin up one of these images, install tmux, and run everything within a single container.
But when I split it across multiple containers and use docker-compose up (which seems better than tmux), the containers can't seem to connect to each other.
I have rattled around the documentation on Docker's website, but I've moved on to the troubleshooting stage. This seems to be something that should "just work" (since there are few questions along these lines). I have total control of the box I am using, and can open or close whatever ports are needed.
Mainly, I am trying to figure out how to allow, with 100% default settings and nothing complicated, these two services (flask and streamlit) to speak to each other.
There must be one or two settings that I need to change, and that is it.
Any ideas?
Update
I can access all of the services externally, so I am going to open up external connections between the services (using the external IP) as a "just work" quick fix, but obviously getting the composition to work internally would be the best option.
I have also confirmed that the docker-compose and docker versions are up to date.
Update-2: changed Flask's bind address from 127.0.0.1 to 0.0.0.0
Flask output:
flask_1 | * Serving Flask app "flask" (lazy loading)
flask_1 | * Environment: production
flask_1 | WARNING: This is a development server. Do not use it in a production deployment.
flask_1 | Use a production WSGI server instead.
flask_1 | * Debug mode: on
flask_1 | INFO:werkzeug: * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
flask_1 | 2020-12-19 02:22:16.449 INFO werkzeug: * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
flask_1 | INFO:werkzeug: * Restarting with inotify reloader
flask_1 | 2020-12-19 02:22:16.465 INFO werkzeug: * Restarting with inotify reloader
flask_1 | WARNING:werkzeug: * Debugger is active!
flask_1 | 2020-12-19 02:22:22.003 WARNING werkzeug: * Debugger is active!
Streamlit:
streamlit_1 |
streamlit_1 | You can now view your Streamlit app in your browser.
streamlit_1 |
streamlit_1 | Network URL: http://172.18.0.3:8501
streamlit_1 | External URL: http://71.199.156.142:8501
streamlit_1 |
streamlit_1 | 2020-12-19 02:22:11.389 Generating new fontManager, this may take some time...
And the streamlit error message:
ConnectionError:
HTTPConnectionPool(host='flask', port=5000):
Max retries exceeded with url: /foo/bar
(Caused by NewConnectionError(
'<urllib3.connection.HTTPConnection object at 0x7fb860501d90>:
Failed to establish a new connection:
[Errno 111] Connection refused'
)
)
Update-3: Hitting refresh fixed it.
The server process must be listening on the special "all interfaces" address 0.0.0.0. Many development-type servers by default listen on "localhost only" 127.0.0.1, but in Docker each container has its own private notion of localhost. If you use tmux or docker exec to run multiple processes inside a container, they have the same localhost and can connect to each other, but if the client and server are running in different containers, the request doesn't arrive on the server's localhost interface, and if the server is listening on "localhost only" it won't receive it.
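As a concrete illustration with Flask's development server (a sketch; the demo entrypoint in the question may wire this up differently):

# Listens on localhost only: unreachable from other containers
flask run --host=127.0.0.1 --port=5000

# Listens on all interfaces: reachable as http://flask:5000 from other containers
flask run --host=0.0.0.0 --port=5000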
Your setup is otherwise correct, with only the docker-compose.yml you include in the question. Some other common problems:
You must connect to the port the server process is listening on inside the container. If you remap it externally with ports:, that remapping is ignored for container-to-container calls; you'd connect to the second (container-side) ports: number. Correspondingly, ports: aren't required for this to work. (expose: also isn't required and doesn't do anything at all.)
The client may need to wait for the server to start up. If the client depends_on: [flask], the host name will usually resolve (unless the server dies immediately), but if the server takes a while to start up you will still get "connection refused" errors; see the healthcheck sketch after this list, and Docker Compose wait for container X before starting Y.
Neither container may use network_mode: host. This disables Docker's networking features entirely.
If you manually declare networks:, both containers need to be on the same network. You do not need to explicitly create a network for inter-container communication to work: Compose provides a default network for you, which is used if nothing else is declared.
Use the Compose service names as host names. You don't need to explicitly specify container_name: or links:.
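As a sketch of the wait-for-startup point above, using a Compose healthcheck (the curl readiness URL is an assumption, curl must exist in the image, and depends_on conditions require a reasonably recent Compose version):

services:
  flask:
    image: ops:imgA
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:5000/"]
      interval: 5s
      timeout: 3s
      retries: 10
  streamlit:
    image: ops:imgB
    depends_on:
      flask:
        condition: service_healthy   # wait until the healthcheck passes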
I've been trying, and failing, to get Percona Server (version 8 on CentOS) running as a lone service inside a docker-compose.yml file. The error that keeps coming up is:
mysql | 2020-03-16T23:04:25.189164Z 0 [ERROR] [MY-010270] [Server] Can't start server : Bind on unix socket: File name too long
mysql | 2020-03-16T23:04:25.189373Z 0 [ERROR] [MY-010258] [Server] Do you already have another mysqld server running on socket: /var/lib/mysql/mysql.sock ?
mysql | 2020-03-16T23:04:25.190581Z 0 [ERROR] [MY-010119] [Server] Aborting
mysql | 2020-03-16T23:04:26.438533Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.18-9) Percona Server (GPL), Release 9, Revision 53e606f.
My docker-compose.yml file is as follows:
version: '3.7'

services:
  mysql:
    container_name: mysql
    image: percona:8-centos
    volumes:
      - ./docker/mysql/setup:/docker-entrypoint-initdb.d
      - ./docker/mysql/data:/var/lib/mysql
      - ./docker/mysql/conf:/etc/mysql/conf.d:ro
    environment:
      - MYSQL_ROOT_PASSWORD=mypassword
      - MYSQL_DATABASE=<redacted>
      - MYSQL_USER=<redacted>
      - MYSQL_PASSWORD=<redacted>
    stop_grace_period: 20s
    restart: always
A few things to note:
My my.cnf file, which lives on the host under docker/mysql/conf/, declares the location of the socket file as /var/run/mysql.sock instead of /var/lib/mysql/mysql.sock. Why would mysqld still be trying to use a different socket file path than the one I declared in my own config file? (And yes, my config file IS being picked up: when it used to have deprecated options declared inside it, mysqld complained and failed to start.)
In the beginning, I left the socket file path setting alone and allowed it to use the default location; however, it resulted in the same exact error.
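For reference, a minimal sketch of the my.cnf setting described above (the [mysqld] section header is an assumption about the asker's file):

[mysqld]
socket=/var/run/mysql.sock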
The documentation at the Percona Docker Hub page contradicts itself: in one important instance it mentions the config directory /etc/my.cnf.d inside the container, but the example it gives uses /etc/mysql/conf.d instead. The discrepancy makes me lose confidence in the rest of the documentation, and that lack of confidence now seems well-placed, since the official image fails to run properly out of the box.
So, does anyone know how to use the official Percona images? (Or am I going to be forced to roll my own service using my own Dockerfile?)
I was also getting the same error on macOS.
Taking a hint from the "File name too long" error, I moved my entire project into my home directory, so that my compose file was at ~/myproject/docker-compose.yml. (Maybe you can try moving it to the root dir, just to avoid any confusion about what ~/ expands to.)
That did the trick, and the mysql image was up again without any error.
PS: I am not saying that you need to place your project in your home directory, but you do need to find the shortest folder path that works for your project.
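A plausible explanation (my assumption, not something established above): Unix domain socket paths are limited to roughly 104 bytes on macOS and 108 on Linux, and with the data directory bind-mounted the socket's effective path can involve the host project path, so a deeply nested project can push it over the limit. A quick length check (hypothetical path):

echo -n "$PWD/docker/mysql/data/mysql.sock" | wc -c   # should stay well under ~104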
I'm trying to launch Filebeat using docker-compose (I intend to add other services later on), but every time I bring up the docker-compose.yml file, the filebeat service ends up with the following error:
filebeat_1 | 2019-08-01T14:01:02.750Z ERROR instance/beat.go:877 Exiting: 1 error: setting 'filebeat.prospectors' has been removed
filebeat_1 | Exiting: 1 error: setting 'filebeat.prospectors' has been removed
I discovered the error by accessing the docker-compose logs.
My docker-compose file is as simple as it can be at the moment. It simply calls a filebeat Dockerfile and launches the service immediately after.
Next to my Dockerfile for filebeat I have a simple config file (filebeat.yml), which is copied to the container, replacing the default filebeat.yml.
If I build and run the Dockerfile using plain docker commands, the filebeat instance works just fine: it uses my config file and picks up the "output.json" file as well.
I'm currently using version 7.2 of Filebeat, and I know that the "filebeat.prospectors" setting isn't being used. I also know for sure that this specific configuration isn't coming from my filebeat.yml file (you'll find it below).
It seems that, when using docker-compose, the container is accessing another configuration file instead of the one copied in by the Dockerfile, but so far I haven't been able to figure out how, why, or how to fix it...
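One way to verify which configuration the container actually sees (a debugging sketch; --entrypoint overrides the image default so cat runs directly):

docker-compose run --rm --entrypoint cat filebeat /usr/share/filebeat/filebeat.yml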
Here's my docker-compose.yml file:
version: "3.7"
services:
filebeat:
build: "./filebeat"
command: filebeat -e -strict.perms=false
The filebeat.yml file:
filebeat.inputs:
  - paths:
      - '/usr/share/filebeat/*.json'
    fields_under_root: true
    fields:
      tags: ['json']

output:
  logstash:
    hosts: ['localhost:5044']
The Dockerfile:
FROM docker.elastic.co/beats/filebeat:7.2.0
COPY filebeat.yml /usr/share/filebeat/filebeat.yml
COPY output.json /usr/share/filebeat/output.json
USER root
RUN chown root:filebeat /usr/share/filebeat/filebeat.yml
RUN mkdir /usr/share/filebeat/dockerlogs
USER filebeat
The output I'm expecting should be similar to the following, which comes from the successful executions I get when running it as a single container.
The ERROR lines are expected because I don't have Logstash configured at the moment.
INFO crawler/crawler.go:72 Loading Inputs: 1
INFO log/input.go:148 Configured paths: [/usr/share/filebeat/*.json]
INFO input/input.go:114 Starting input of type: log; ID: 2772412032856660548
INFO crawler/crawler.go:106 Loading and starting Inputs completed. Enabled inputs: 1
INFO log/harvester.go:253 Harvester started for file: /usr/share/filebeat/output.json
INFO pipeline/output.go:95 Connecting to backoff(async(tcp://localhost:5044))
ERROR pipeline/output.go:100 Failed to connect to backoff(async(tcp://localhost:5044)): dial tcp [::1]:5044: connect: cannot assign requested address
INFO pipeline/output.go:93 Attempting to reconnect to backoff(async(tcp://localhost:5044)) with 1 reconnect attempt(s)
ERROR pipeline/output.go:100 Failed to connect to backoff(async(tcp://localhost:5044)): dial tcp [::1]:5044: connect: cannot assign requested address
INFO pipeline/output.go:93 Attempting to reconnect to backoff(async(tcp://localhost:5044)) with 2 reconnect attempt(s)
I managed to figure out what the problem was.
I needed to map the location of the config file and the logs directory in the docker-compose file, using volumes:
version: "3.7"
services:
filebeat:
build: "./filebeat"
command: filebeat -e -strict.perms=false
volumes:
- ./filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml
- ./filebeat/logs:/usr/share/filebeat/dockerlogs
Finally, I just had to execute the docker-compose command and everything started working properly:
docker-compose -f docker-compose.yml up -d