How to reuse service image when opening a project with devcontainer-cli? - ruby-on-rails

I'm looking for a way to make the devcontainer open . command finish faster. I was looking at the log produced by that command and noticed something:
[1895 ms] Start: Run: docker-compose --project-name hobby-on-rails -f /home/pedro/tempo-livre/hobby-on-rails/docker-compose.yml build
That got me wondering: I expected the build to run only once, like when you run docker-compose up, not every time I start the project. I think I'm missing some configuration to tell the devcontainer that it's not necessary to build my web service again. Let's take a look at my devcontainer.json:
// For format details, see https://aka.ms/devcontainer.json. For config options, see the README at:
// https://github.com/microsoft/vscode-dev-containers/tree/v0.234.0/containers/docker-existing-docker-compose
// If you want to run as a non-root user in the container, see .devcontainer/docker-compose.yml.
{
  "name": "Hobby On Rails",
  // Update the 'dockerComposeFile' list if you have more compose files or use different names.
  // The .devcontainer/docker-compose.yml file contains any overrides you need/want to make.
  "dockerComposeFile": [
    "../docker-compose.yml"
  ],
  // The 'service' property is the name of the service for the container that VS Code should
  // use. Update this value and .devcontainer/docker-compose.yml to the real service name.
  "service": "web",
  // The optional 'workspaceFolder' property is the path VS Code should open by default when
  // connected. This is typically a file mount in .devcontainer/docker-compose.yml
  "workspaceFolder": "/opt/hobbyonrails",
  // Set *default* container specific settings.json values on container create.
  "settings": {},
  // Add the IDs of extensions you want installed when the container is created.
  "extensions": [
    "castwide.solargraph",
    "github.copilot",
    "misogi.ruby-rubocop"
  ]
  // Use 'forwardPorts' to make a list of ports inside the container available locally.
  // "forwardPorts": [],
  // Uncomment the next line if you want start specific services in your Docker Compose config.
  // "runServices": [],
  // Uncomment the next line if you want to keep your containers running after VS Code shuts down.
  // "shutdownAction": "none",
  // Uncomment the next line to run commands after the container is created - for example installing curl.
  // "postCreateCommand": "apt-get update && apt-get install -y curl",
  // Uncomment to connect as a non-root user if you've added one. See https://aka.ms/vscode-remote/containers/non-root.
  // "remoteUser": "vscode"
}
The web service is being built every time. Is it possible to prevent this from happening?
In case you would like to take a look at my docker-compose.yml:
version: "3.7"
services:
  db:
    image: postgres:14.4
    container_name: hobbyonrails_db
    ports:
      - 5432:5432
    env_file:
      - ./.docker/env_files/.env
    healthcheck:
      test: ["CMD-SHELL", "pg_isready"]
      interval: 10s
      timeout: 5s
      retries: 5
    volumes:
      - ./.docker/volumes/postgres_data:/var/lib/postgresql/data
  web:
    image: hobbyonrails
    container_name: hobbyonrails_web
    env_file:
      - ./.docker/env_files/.env
    build:
      context: .
    depends_on:
      db:
        condition: service_healthy
    links:
      - db
    ports:
      - 3000:3000
    volumes:
      - .:/opt/hobbyonrails:cached
    stdin_open: true
    tty: true
  livereload:
    image: hobbyonrails
    container_name: hobbyonrails_livereload
    depends_on:
      - web
    ports:
      - 35729:35729
    command: ["bundle", "exec", "guard", "-i"]
    env_file:
      - ./.docker/env_files/.env
    volumes:
      - .:/opt/hobbyonrails:cached
  tailwindcsswatcher:
    image: hobbyonrails
    container_name: hobbyonrails_tailwindcsswatcher
    depends_on:
      - web
    ports:
      - 3035:3035
    command: ["bin/rails", "tailwindcss:watch"]
    env_file:
      - ./.docker/env_files/.env
    volumes:
      - .:/opt/hobbyonrails:cached
    tty: true
  selenium:
    container_name: hobbyonrails_selenium
    image: selenium/standalone-chrome:3.141.59
    ports:
      - 4444:4444
volumes:
  postgres_data:
What do you think?

Related

Docker Compose: depends_on with condition -> invalid type, should be an array

I have the following compose file:
version: "3"
services:
  zookeeper:
    image: docker-dev.art.intern/wurstmeister/zookeeper:latest
    ports:
      - 2181:2181
  kafka:
    image: docker-dev.art.intern/wurstmeister/kafka:latest
    ports:
      - 9092:9092
    environment:
      - KAFKA_LISTENERS=PLAINTEXT://:9092
      - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092
      - KAFKA_ADVERTISED_HOST_NAME=kafka
      - KAFKA_ADVERTISED_PORT=9092
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
    depends_on:
      - zookeeper
  app:
    build:
      context: ./
      dockerfile: app/Dockerfile
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:4020/actuator/health"]
      interval: 30s
      timeout: 10s
      retries: 5
    depends_on:
      - kafka
      - zookeeper
  app-test:
    build:
      context: ./
      dockerfile: test/Dockerfile
    depends_on:
      app:
        condition: service_healthy
As you can see, I'm implementing a healthcheck for the app and I use the service_healthy condition.
But that leads to the error:
The Compose file '.\docker-compose.yml' is invalid because:
services.app-test.depends_on contains an invalid type, it should be an array
Is there a way to fix that issue?
If I change to array syntax:
...
app-test:
build:
context: ./
dockerfile: test/Dockerfile
depends_on:
- app:
condition: service_healthy
The error changes to:
The Compose file '.\docker-compose.yml' is invalid because:
services.app-test.depends_on contains an invalid type, it should be a string
This appears to have been removed in version 3 of the docker compose specification, but then re-introduced in version 3.9.
See https://github.com/compose-spec/compose-spec/blob/master/spec.md#long-syntax-1
Note that this seems to require Compose V2, which is executed as docker compose on the latest docker binary.
See https://docs.docker.com/compose/#compose-v2-and-the-new-docker-compose-command
You can do that with compose file version 2.1, but it was removed in compose file version 3.
There are several things to be aware of when using depends_on:
depends_on does not wait for db and redis to be “ready” before starting web - only until they have been started. If you need to wait for a service to be ready, see Controlling startup order for more on this problem and strategies for solving it.
Still, I'd advise you not to downgrade your compose file, but rather to handle this appropriately with a wrapper script; you might find wait-for-it, dockerize, and/or wait-for handy.
In compose version 3 you can use the depends_on long syntax to specify a condition:
condition: condition under which dependency is considered satisfied
service_started: is an equivalent of the short syntax described above
service_healthy: specifies that a dependency is expected to be “healthy” (as indicated by healthcheck) before starting a dependent service.
service_completed_successfully: specifies that a dependency is expected to run to successful completion before starting a dependent service.
e.g.
services:
  web:
    build: .
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_started
  redis:
    image: redis
  db:
    image: postgres
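The wrapper-script approach mentioned above (wait-for-it and friends) boils down to a retry loop. A minimal sketch, assuming a generic readiness check; the check command and the final command are placeholders, not the actual wait-for-it implementation:

```shell
#!/bin/sh
# Minimal "wait-for" sketch: retry a readiness check until it succeeds,
# then hand off to the real command.
# Usage: wait_for "<check command>" <command> [args...]
wait_for() {
  check=$1
  shift
  until $check; do
    echo "dependency not ready yet, retrying..." >&2
    sleep 1
  done
  exec "$@"
}
```

With a real database, the check might be something like nc -z db 5432 and the command your app's entrypoint.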
You're missing a - to make depends_on an array:
app-test:
  build:
    context: ./
    dockerfile: test/Dockerfile
  depends_on:
    - app:
        condition: service_healthy
Notice the - before app. Without it, services.app-test.depends_on will be an object, not an array. Also pay attention to the number of spaces: if condition is in the same column as app, you will also get an undesired result.
When decoding your YAML, this is what you get (in JSON):
{
  "app-test": {
    "build": {
      "context": "./",
      "dockerfile": "test/Dockerfile"
    },
    "depends_on": {
      "app": {
        "condition": "service_healthy"
      }
    }
  }
}
With the added -:
{
  "app-test": {
    "build": {
      "context": "./",
      "dockerfile": "test/Dockerfile"
    },
    "depends_on": [
      {
        "app": {
          "condition": "service_healthy"
        }
      }
    ]
  }
}

How to deal with more than one `network_mode` in a VSCode Remote dev container?

I would like to have an application, a database, and a Redis service running in a dev container, where I'd be able to access my database and Redis from inside the container, from the application, and on Windows. This is what currently works just as I wanted for my application and database:
.devcontainer.json:
{
  "name": "Node.js, TypeScript, PostgreSQL & Redis",
  "dockerComposeFile": "docker-compose.yml",
  "service": "akira",
  "workspaceFolder": "/workspace",
  "settings": {
    "typescript.tsdk": "node_modules/typescript/lib",
    "sqltools.connections": [
      {
        "name": "Container database",
        "driver": "PostgreSQL",
        "previewLimit": 50,
        "server": "database",
        "port": 5432,
        "database": "akira",
        "username": "ailuropoda",
        "password": "melanoleuca"
      }
    ],
    "editor.formatOnSave": true,
    "editor.codeActionsOnSave": {
      "source.fixAll": true
    }
  },
  "extensions": [
    "aaron-bond.better-comments",
    "dbaeumer.vscode-eslint",
    "esbenp.prettier-vscode",
    "mtxr.sqltools",
    "mtxr.sqltools-driver-pg",
    "redhat.vscode-yaml"
  ],
  "forwardPorts": [5432],
  "postCreateCommand": "npm install",
  "remoteUser": "node"
}
docker-compose.yml:
version: "3.8"
services:
  akira:
    build:
      context: .
      dockerfile: Dockerfile
    command: sleep infinity
    env_file: .env
    volumes:
      - ..:/workspace:cached
  database:
    image: postgres:latest
    restart: unless-stopped
    environment:
      POSTGRES_USER: ailuropoda
      POSTGRES_DB: akira
      POSTGRES_PASSWORD: melanoleuca
    ports:
      - 5432:5432
    volumes:
      - pgdata:/var/lib/postgresql/data
  redis:
    image: redis:alpine
    tty: true
    ports:
      - 6379:6379
volumes:
  pgdata:
Dockerfile:
ARG VARIANT="16-bullseye"
FROM mcr.microsoft.com/vscode/devcontainers/typescript-node:0-${VARIANT}
As you can see, I already tried to achieve what I wanted using networks, but without success. My question is: how can I add Redis to my services while still being able to connect to redis and database from inside the application and on Windows?
Switch all non-dev containers to network_mode: service:akira
version: '3.8'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - ../..:/workspace:cached
    command: sleep infinity
  postgresql:
    image: postgres:14.1
    network_mode: service:akira
    restart: unless-stopped
    volumes:
      - ../docker/volumes/postgresql:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: postgres
      POSTGRES_USER: postgres
      POSTGRES_DB: pornapp
  redis:
    image: redis
    network_mode: service:akira
    restart: unless-stopped
    volumes:
      - ../docker/volumes/redis:/data
It seems this was the original configuration:
https://github.com/microsoft/vscode-dev-containers/pull/523
But it was reverted because, if you rebuild the dev container while the other services are running, the port forwarding will break:
https://github.com/microsoft/vscode-dev-containers/issues/537
If you're using Docker on WSL, I found that I often cannot connect when the process is listening on ::1, but explicitly binding the port to 127.0.0.1 makes the service accessible from Windows. So something like
ports:
  - 127.0.0.1:5432:5432
might work.
Delete all of the network_mode: settings. Compose will use the default network_mode: bridge. You'll be able to communicate between containers using their Compose service names as host names, as described in Networking in Compose in the Docker documentation.
version: "3.8"
services:
  akira:
    build: .
    env_file: .env
    environment:
      PGHOST: database
  database:
    image: postgres:latest
    ...
In SO questions I frequently see people trying to use network_mode: to make other things appear as localhost. That host name is incredibly context-sensitive: if you asked my laptop, one of the Stack Overflow HTTP servers, your application container, or your database container who localhost is, they'd each independently say "well, I am, of course", each referring to a different network context. network_mode: service:... sounds like you're trying to make the other container be localhost; in practice it's extremely unusual to use this.
You may need to change your application code to make settings like the database location configurable, depending on where they're running, and environment variables are an easy way to set this in Docker. For this particular example I've used the $PGHOST variable the standard PostgreSQL client libraries use; in a Typescript/Node context you may need to change your code to refer to process.env.SOME_HOSTNAME instead of 'localhost'.
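As a sketch of that last point (the PGHOST name follows the PostgreSQL client convention mentioned above; the fallback default and connection string are assumptions, not the asker's code), the Node side might read the host from the environment instead of hard-coding 'localhost':

```javascript
// Hypothetical sketch: pick the database host from the environment.
// In compose, set PGHOST: database on the app service; outside Docker,
// the fallback keeps local (non-container) development working.
const pgHost = process.env.PGHOST || "localhost";
const connectionString = `postgres://ailuropoda@${pgHost}:5432/akira`;
console.log(connectionString);
```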

Dockerimage working on pull but not on pull image directive in yml file?

I have a Docker image on a GitLab registry.
When I run (after logging in on a target machine)
docker run -d -p 8081:8080/tcp gitlab.somedomain.com:5050/root/app
the Laravel app is available, running, and reachable. Things like php artisan config:clear work. When I enter the container, everything looks fine.
But I don't have any services running. So I had the idea to create a yml file for docker-compose to set things up, in docker-compose-gitlab.yml:
version: '3'
services:
  mysql:
    image: mysql:5.7
    container_name: my-mysql
    environment:
      - MYSQL_ROOT_PASSWORD=***
      - MYSQL_DATABASE=dbname
      - MYSQL_USER=username
      - MYSQL_PASSWORD=***
    volumes:
      - ./data/mysql:/var/lib/mysql
    ports:
      - "3307:3306"
  application:
    image: gitlab.somedomain.com:5050/root/app:latest
    build:
      context: .
      dockerfile: ./Dockerfile
    container_name: my-app
    ports:
      - "8081:8080"
    volumes:
      - .:/application
    env_file: .env.docker
    working_dir: /application
    depends_on:
      - mysql
    links:
      - mysql
Calling docker-compose --verbose -f docker-compose-gitlab.yml up shows me that the mysql service is created and working; the app also seems to be created, but then fails, exiting with code 0 and no further message.
If I add commands in my yml like php artisan config:clear, the error gets even less clear to me: it says it cannot find artisan, and it seems as if the command is executed outside the container, exiting with code 1. (artisan is a helper, executed via php.)
When I call the docker-compose with -d and then do docker ps I can only see mysql running but not the app.
When I use both strategies, the problem is that the two containers do not share a common network and so cannot work together.
What did I miss? Is this the wrong strategy?
The problem is that I left a volumes directive in place which overwrites my entire application with an empty directory.
You can just leave that out.
version: '3'
services:
  mysql:
    image: mysql:5.7
    container_name: my-mysql
    environment:
      - MYSQL_ROOT_PASSWORD=***
      - MYSQL_DATABASE=dbname
      - MYSQL_USER=username
      - MYSQL_PASSWORD=***
    volumes:
      - ./data/mysql:/var/lib/mysql
    ports:
      - "3307:3306"
  application:
    image: gitlab.somedomain.com:5050/root/app:latest
    build:
      context: .
      dockerfile: ./Dockerfile
    container_name: my-app
    ports:
      - "8081:8080"
    ## volumes:
    ##   - .:/application ## this would overwrite the app
    env_file: .env.docker
    working_dir: /application
    depends_on:
      - mysql
    links:
      - mysql
You can debug the networking by listing the networks with docker network ls, then inspecting the compose network with docker inspect <ComposeNetworkID>.
Once you are sure that your services are not in the same network, remove your containers and recreate them with docker-compose -f docker-compose-gitlab.yml up.
If you notice they are in the same network, try using the container name instead of localhost to reach the other container.

Docker Build Stuck on MariaDB Installation

I am trying to build a set of Docker images that includes an installation of Magento 2 and MariaDB. In rare cases it succeeds (although this could be due to small changes in the app), but in most cases it is stuck on the following:
magento2-db | Version: '10.3.11-MariaDB-1:10.3.11+maria~bionic' socket: '/var/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution
I see that someone else had this issue, but the cause was actual RUN commands for MariaDB installation, which I don't directly call. There doesn't seem to be anything in the log to indicate an error either.
The last lines in the log are:
[16:49:18.424][Moby ][Info ] [25693.252573] br-83922f7da47b: port 2(vethac51834) entered blocking state
[16:49:18.453][Moby ][Info ] [25693.290035] br-83922f7da47b: port 2(vethac51834) entered forwarding state
[16:49:18.637][ApiProxy ][Info ] time="2018-11-28T16:49:18+02:00" msg="proxy << POST /v1.25/containers/67175238f0e7a75ef527dbebbb1f5d992f1d01ee166643186dc5f727638aa66b/start (1.0560013s)\n"
[16:49:18.645][ApiProxy ][Info ] time="2018-11-28T16:49:18+02:00" msg="proxy >> GET /v1.25/events?filters=%7B%22label%22%3A+%5B%22com.docker.compose.project%3Dmagento2%22%2C+%22com.docker.compose.oneoff%3DFalse%22%5D%7D\n"
It seems to actually finish executing all steps in the Dockerfile, but I suspect there might be a problem in my docker-compose file, which looks like this:
version: '3.0'
services:
  app:
    build:
      context: .
      dockerfile: .docker/Dockerfile
    container_name: 'magento-2.2.6'
    ports:
      - "80:80"
    volumes:
      - magento2-test-env:/var/www/html/magento2 # will be mounted on /var/www/html
    links:
      - magento2-db
    env_file:
      - .docker/env
    depends_on:
      - magento2-db
  magento2-db:
    container_name: 'magento2-db'
    image: mariadb:latest
    ports:
      - "9809:3306"
    volumes:
      - magento2-db-data:/var/lib/mysql/data
    env_file:
      - .docker/env
volumes:
  magento2-db-data:
  magento2-test-env:
    external: true
Is there anything obviously wrong with my setup, and is there a good way to troubleshoot this, maybe look for something specific in the log?
Maybe the way you're building your compose file is the problem.
Try using this one:
version: '3.0'
services:
  app:
    build:
      context: .
      dockerfile: .docker/Dockerfile
    container_name: 'magento-2.2.6'
    ports:
      - "80:80"
    volumes:
      - magento2-test-env:/var/www/html/magento2 # will be mounted on /var/www/html
    links:
      - db
    env_file:
      - .docker/env
    depends_on:
      - db
  db:
    container_name: 'magento2-db'
    image: mariadb:latest
    ports:
      - "9809:3306"
    volumes:
      - /var/lib/mysql/data
    env_file:
      - .docker/env
volumes:
  magento2-db-data:
  magento2-test-env:
    external: true
Avoid using service names like 'blabla-something'; if you need a name, set it as container_name and that will be enough. And links should always point to the service name, not the container name.
I hope this helps.
Try setting -e MYSQL_INITDB_SKIP_TZINFO=1; refer to this issue.
e.g.
docker run -it --rm -e MYSQL_INITDB_SKIP_TZINFO=1 ... mariadb:10.4.8
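If you want the same workaround in the compose file rather than a one-off docker run, the variable can go under environment. A sketch against the asker's service definition (combining it with the existing env_file):

```yaml
magento2-db:
  container_name: 'magento2-db'
  image: mariadb:latest
  environment:
    - MYSQL_INITDB_SKIP_TZINFO=1
  env_file:
    - .docker/env
```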

parse-dashboard w/ docker-compose: unable to connect to server

I've configured a little cluster using docker-compose, consisting of parse-server, mongo and parse-dashboard:
version: "3"
services:
  myappdb:
    image: mongo
    ports:
      - 27017:27017
  myapp-parse-server:
    image: parseplatform/parse-server
    environment:
      - PARSE_SERVER_MASTER_KEY=xxxx
      - PARSE_SERVER_APPLICATION_ID=myapp
      - VERBOSE=0
      - PARSE_SERVER_DATABASE_URI=mongodb://myappdb:27017/dev
      - PARSE_SERVER_URL=http://myapp-parse-server:1337/parse
    depends_on:
      - myappdb
    ports:
      - 5000:1337
  parse-dashboard:
    image: parseplatform/parse-dashboard
    ports:
      - 5001:4040
    environment:
      - PARSE_DASHBOARD_ALLOW_INSECURE_HTTP=1
      - PARSE_DASHBOARD_SERVER_URL=http://myapp-parse-server:1337/parse
      - PARSE_DASHBOARD_APP_ID=myapp
      - PARSE_DASHBOARD_MASTER_KEY=xxxx
      - PARSE_DASHBOARD_USER_ID=admin
      - PARSE_DASHBOARD_USER_PASSWORD=xxxx
Try as I might, however, I cannot get the deployed parse-dashboard to connect to the myapp-parse-server. After I log in to the dashboard using my browser (at localhost:5001), the dashboard app informs me that it is 'unable to connect to server'.
I've tried pinging the host 'myapp-parse-server' from the parse-dashboard container, and it can see the container just fine. Similarly, it can see the endpoint http://myapp-parse-server:1337/parse; wget returns the expected 403.
If I use a copy of parse-dashboard running on my host machine, it works just fine against http://localhost:5000/parse. So the forwarded port from my host to the parse-server works.
I've also tried configuring dashboard using parse-dashboard-config.json mounted into the container. Yields exactly the same result.
I'm at a loss as to what I'm doing wrong here. Can anybody shed some light on this?
It looks like you have some issues with your docker-compose file:
PARSE_SERVER_URL points to myapp-parse-server; it should point to http://localhost:1337/parse instead (unless you somehow modified the hosts file in the container, but I don't see that).
Your myapp-parse-server should link to your database using links.
Here is an example of a docker-compose file from a blog I wrote on how to deploy parse-server to Google Container Engine:
version: "2"
services:
  # Node.js parse-server application image
  app:
    build: ./app
    command: npm start -- /parse-server/config/config.json
    container_name: my-parse-app
    volumes:
      - ./app:/parse-server/
      - /parse-server/node_modules
    ports:
      - "1337:1337"
    links:
      - mongo
  # MongoDB image
  mongo:
    image: mongo
    container_name: mongo-database
    ports:
      - "27017:27017"
    volumes_from:
      - mongodata
  # MongoDB image volume for persistence
  mongodata:
    image: mongo
    volumes:
      - ./data/db:/data/db
    command:
      - --break-mongo
You can see from the example above that I use links and also create and attach a volume for the database disk.
Also, I personally think it's better to run parse-server with a config file, in order to decouple all configuration. My configuration file looks like the following (in my docker-compose above you can see that I'm running parse-server with a config file and not with env variables):
{
  "databaseURI": "mongodb://localhost:27017/my-db",
  "appId": "myAppId",
  "masterKey": "myMasterKey",
  "serverURL": "http://localhost:1337/parse",
  "cloud": "./cloud/main.js",
  "mountPath": "/parse",
  "port": 1337
}
Finally, in my parse-dashboard image I also use a config file: I simply mount it as a volume, replacing the default config file with my own. Because this step was not covered in my blog, your final docker-compose file should look like the following:
version: "2"
services:
  # Node.js parse-server application image
  app:
    build: ./app
    command: npm start -- /parse-server/config/config.json
    container_name: my-parse-app
    volumes:
      - ./app:/parse-server/
      - /parse-server/node_modules
    ports:
      - "1337:1337"
    links:
      - mongo
  # MongoDB image
  mongo:
    image: mongo
    container_name: mongo-database
    ports:
      - "27017:27017"
    volumes_from:
      - mongodata
  # MongoDB image volume for persistence
  mongodata:
    image: mongo
    volumes:
      - ./data/db:/data/db
    command:
      - --break-mongo
  dashboard:
    image: parseplatform/parse-dashboard:1.1.0
    volumes:
      - ./dashboard/dashboard-config.json:/src/Parse-Dashboard/parse-dashboard-config.json
    environment:
      PORT: 4040
      PARSE_DASHBOARD_ALLOW_INSECURE_HTTP: 1
      ALLOW_INSECURE_HTTP: 1
      MOUNT_PATH: "/parse"
And the parse-dashboard.json (the config file) should be:
{
  "apps": [
    {
      "serverURL": "http://localhost:3000/parse",
      "appId": "myMasterKey",
      "masterKey": "myMasterKey",
      "appName": "My App"
    }
  ],
  "users": [
    {
      "user": "myuser",
      "pass": "mypassword"
    }
  ],
  "useEncryptedPasswords": false
}
I know that it's a little bit long so I really encourage you to read the series of blogs.
Hope it will help you.
