I am trying to run Concourse locally, following their documentation at:
https://concourse-ci.org/quick-start.html. But when I run docker compose up, it downloads all the resources and then, at the end, shows this line:
concourse The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
I did some research and several people suggested specifying the platform, but that didn't work. Does anybody have an idea how to run Concourse locally on a MacBook with the M1 chip?
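For reference, the "specify the platform" suggestion usually looks like this in the compose file (service name follows the quick-start guide); as noted, it did not help in my case:

```yaml
services:
  concourse:
    image: concourse/concourse
    # force the amd64 image to run under emulation on an arm64 host
    platform: linux/amd64
```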
Try the docker-compose.yml file below; it is running on my Mac M1.
version: '3'
services:
concourse-db:
image: postgres
environment:
POSTGRES_DB: concourse
POSTGRES_PASSWORD: concourse_pass
POSTGRES_USER: concourse_user
PGDATA: /database
concourse:
image: rdclda/concourse:7.5.0
command: quickstart
privileged: true
depends_on: [concourse-db]
ports: ["8080:8080"]
environment:
CONCOURSE_POSTGRES_HOST: concourse-db
CONCOURSE_POSTGRES_USER: concourse_user
CONCOURSE_POSTGRES_PASSWORD: concourse_pass
CONCOURSE_POSTGRES_DATABASE: concourse
# replace this with your external IP address
CONCOURSE_EXTERNAL_URL: http://localhost:8080
CONCOURSE_ADD_LOCAL_USER: test:test
CONCOURSE_MAIN_TEAM_LOCAL_USER: test
# instead of relying on the default "detect"
CONCOURSE_WORKER_BAGGAGECLAIM_DRIVER: overlay
CONCOURSE_CLIENT_SECRET: Y29uY291cnNlLXdlYgo=
CONCOURSE_TSA_CLIENT_SECRET: Y29uY291cnNlLXdvcmtlcgo=
CONCOURSE_X_FRAME_OPTIONS: allow
CONCOURSE_CONTENT_SECURITY_POLICY: "*"
CONCOURSE_CLUSTER_NAME: arm64
CONCOURSE_WORKER_CONTAINERD_DNS_SERVER: "8.8.8.8"
CONCOURSE_WORKER_RUNTIME: "houdini"
CONCOURSE_RUNTIME: "houdini"
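If you want to confirm that this image actually provides an arm64 variant before starting the stack, something like this should work; on an M1 it should report linux/arm64 rather than linux/amd64:

```shell
docker pull rdclda/concourse:7.5.0
docker image inspect rdclda/concourse:7.5.0 --format '{{.Os}}/{{.Architecture}}'
```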
I'm facing an issue with my new Mac with the M1 chip.
I use the same config as on my old Mac, where it worked:
version: '3'
services:
shop:
container_name: shop
image: dockware/dev:latest
ports:
- "22:22" # ssh
- "80:80" # apache2
- "443:443" # apache2 https
- "8888:8888" # watch admin
- "9998:9998" # watch storefront proxy
- "9999:9999" # watch storefront
- "3306:3306" # mysql port
volumes:
- "db_volume:/var/lib/mysql"
- "shop_volume:/var/www/html"
networks:
- web
environment:
- MYSQL_USER=shopware
- MYSQL_PWD=secret
- XDEBUG_ENABLED=0
rediscache:
image: redis:6.0
container_name: redis
networks:
- web
volumes:
db_volume:
driver: local
shop_volume:
driver: local
## ***********************************************************************
## NETWORKS
## ***********************************************************************
networks:
web:
external: false
The error I get is:
sudo: no tty present and no askpass program specified
I have already pruned images and containers but still get this error.
In my research I found solutions that involve editing the sudoers file or setting permissions, but since this is a Docker image, I can't use those solutions.
Does anyone have an idea why this happens and how to solve it?
Please open a shell in the container (docker exec -it <container name> /bin/bash) and execute the entrypoint script manually.
You should see the prompt when running it that way.
The setup is probably trying to ask for something interactively, which fails when there is no TTY.
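For example (the container name is taken from the compose file above; the entrypoint path is an assumption and varies by image, so look it up first):

```shell
# find out what the image's entrypoint actually is
docker inspect --format '{{.Config.Entrypoint}}' shop
# open an interactive shell with a TTY allocated
docker exec -it shop /bin/bash
# then run the entrypoint script by hand, e.g.:
/entrypoint.sh
```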
I'm currently using Docker to deploy a development version of a web application. This is the docker-compose.yml file I wrote
version: '3'
services:
nginx:
build:
context: ./docker/nginx
ports:
- "80:80"
volumes:
- .:/var/www/html/acme.com
- ./docker/nginx/acme.com.conf:/etc/nginx/conf.d/acme.com.conf
networks:
- my_network
php:
build:
context: ./docker/php
ports:
- "9000:9000"
volumes:
- .:/var/www/html/acme.com
networks:
- my_network
database:
build:
context: ./docker/database
volumes:
- ./docker/database/acme.sql:/docker-entrypoint-initdb.d/acme.sql
- ./docker/database/remote_access.sql:/docker-entrypoint-initdb.d/remote_access.sql
- ./docker/database/custom.cnf:/etc/mysql/conf.d/custom.cnf
ports:
- "3306:3306"
environment:
MYSQL_DATABASE: ${db_database}
MYSQL_ROOT_PASSWORD: ${db_password}
networks:
- my_network
networks:
my_network:
driver: bridge
Looking more closely at the build contexts, the Dockerfiles for nginx and the database are as follows:
./docker/nginx/Dockerfile
FROM nginx:alpine
./docker/database/Dockerfile
FROM mariadb:latest
So it is evident that the nginx image uses the Alpine base image.
But what image is mariadb using? I went through the Docker Hub website and followed the link to https://github.com/MariaDB/mariadb-docker/blob/db55d2702dfc0102364a29ab00334b6a02085ef9/10.7/Dockerfile
In this file, there is a reference to
FROM ubuntu:focal
Does this mean that my Docker setup is using both the Alpine Linux base image and the Ubuntu image? How does it work if I have multiple Linux distributions across my containers?
What should I do to fix this?
Should I instead install MariaDB with a FROM command on Alpine Linux and build my own Docker image?
Imagine your Docker Compose setup as a server farm. Each service (nginx, mariadb, ...) would be a physical server running an OS and its software. They are all connected via a LAN within the same subnet. Each machine has its own IP address, and there are DNS and DHCP services for assigning IPs and resolving names (service name = DNS alias). And there is a router blocking all connections from other subnets (= your host); exceptions are configured by port mapping (= forwarding), e.g. ports: - 8000:8000.
So you can mix servers with all kinds of OS variants of one type. This works because Docker is not a real virtual machine but more of a lightweight VM that uses the host OS kernel to run the containers. So you can mix all kinds of Linux distributions OR Windows versions. Each container uses the OS that suits its goals best, e.g. Alpine for a very small footprint and Ubuntu for much more comfort.
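A quick way to see this in action: all containers share the host's kernel but carry their own userland.

```shell
# same kernel version in both containers (it is the host's kernel)...
docker run --rm alpine uname -r
docker run --rm ubuntu uname -r
# ...but a different distribution in each userland
docker run --rm alpine cat /etc/os-release
docker run --rm ubuntu cat /etc/os-release
```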
This is basically the same question as this one, except that on a Mac, setting the network to host has no effect whatsoever.
I'm trying to give a Docker container, running on MacOS, access to its host ARP table. My docker-compose.yaml:
services:
homeassistant:
container_name: home-assistant
image: homeassistant/home-assistant
environment:
# This is the required way to set a timezone on macOS and differs from the Linux compose file
- TZ=XX/XXXX
volumes:
- ./config:/config
restart: unless-stopped
privileged: true
ports:
# Also required for macOS since the network directive in docker-compose does not work
- "8123:8123"
# Add this or docker-compose will complain that it did not find the key for locally mapped volume
volumes:
config:
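For comparison, on Linux the same service could use host networking directly instead of the port mapping; this is the directive that, as mentioned, has no effect on Docker Desktop for Mac:

```yaml
services:
  homeassistant:
    image: homeassistant/home-assistant
    # works on Linux only; on macOS the daemon runs inside a VM,
    # so host networking does not reach the Mac's network stack
    network_mode: host
```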
So I'm trying to set up some docker-compose scripts using some of the latest Compose 3.5 features, the big one being named networks. There are some containers that I end up reproducing over and over again, such as a Postgres database, across the various apps I work on. My goal is to have a docker-compose script for something like Postgres, with a named network, that other apps with their own docker-compose scripts can simply attach to. It feels more reusable that way.
Anyway, here is the docker-compose script I'm working with:
version: '3.5'
services:
postgres:
image: postgres:11
container_name: postgres
networks:
- pg_network
environment:
- POSTGRES_PASSWORD=password
- POSTGRES_USER=user
ports:
- 5432:5432
volumes:
- ~/.postgresql/data:/var/lib/postgresql/data
- ~/.postgresql/init:/docker-entrypoint-initdb.d
pgadmin:
image: dpage/pgadmin4
container_name: pgadmin
networks:
- pg_network
ports:
- 5433:80
environment:
- PGADMIN_DEFAULT_EMAIL=user@gmail.com
- PGADMIN_DEFAULT_PASSWORD=password
volumes:
- ~/.postgresql/pgadmin:/var/lib/pgadmin
networks:
pg_network:
name: pg_network
I am trying to use the "custom name" feature introduced in Compose file format 3.5. See the documentation: https://docs.docker.com/compose/networking/
However, when I try to run docker-compose up -d, I get the following error:
ERROR: The Compose file './docker-compose.yml' is invalid because:
networks.pg_network value Additional properties are not allowed ('name' was unexpected)
I'm not sure what I'm doing wrong. I'm on Ubuntu 18.04, and I'm running the docker commands with sudo (because that seems to be required on Ubuntu).
Here are my Docker and Docker Compose versions:
Docker version 18.09.2, build 6247962
docker-compose version 1.17.1, build unknown
I've tried upgrading them with APT, but it says they are up to date.
So either I'm making a mistake in my compose file, or something is off with my Docker Compose version. I would appreciate any insight. Thank you.
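For what it's worth, Compose file format 3.5 requires docker-compose 1.18.0 or newer, and the Ubuntu 18.04 apt package (1.17.1) predates that. A sketch of installing a newer release binary from GitHub (the version number is only an example; pick a current release):

```shell
sudo curl -L "https://github.com/docker/compose/releases/download/1.24.1/docker-compose-$(uname -s)-$(uname -m)" \
  -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
docker-compose --version
```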
I have the following docker-compose.yml file:
version: '2'
services:
postgis:
image: mdillon/postgis
environment:
POSTGRES_USER: ${POSTGIS_ENV_POSTGRES_USER}
POSTGRES_PASSWORD: ${POSTGIS_ENV_POSTGRES_PASSWORD}
POSTGRES_DB: ${POSTGIS_ENV_POSTGRES_DB}
volumes:
- /nexchange/database:/var/lib/postgresql/data
restart: always
app:
image: onitsoft/nexchange:${DOCKER_IMAGE_TAG}
volumes:
- /nexchange/mediafiles:/usr/share/nginx/html/media
- /nexchange/staticfiles:/usr/share/nginx/html/static
links:
- postgis
restart: always
web:
image: onitsoft/nginx
volumes:
- /nexchange/etc/letsencrypt:/etc/letsencrypt
- /nexchange/etc/nginx/ssl:/etc/nginx/ssl
- /nexchange/etc/nginx/nginx.conf:/etc/nginx/nginx.conf
- /nexchange/mediafiles:/usr/share/nginx/html/media
- /nexchange/staticfiles:/usr/share/nginx/html/static
ports:
- "80:80"
- "443:443"
links:
- app
restart: always
For some reason, some functionality that works in the local container does not work on staging.
I would like to configure a remote interpreter in PyCharm for staging, but it seems this setup is not currently supported.
I am using Wercker + Docker Compose, and my IDE is PyCharm.
EDIT:
The question is:
How to set up the PyCharm debugger to run against a remote host running Docker Compose
The solution, though not secure, is to open the Docker API on the remote target to public traffic via iptables (possibly restricted to a specific IP, if you have a static IP).
$ ssh $USER@staging.nexchnage.ru
oleg@nexchange-staging:~# sudo iptables -A INPUT -p tcp --dport 2376 -j ACCEPT
oleg@nexchange-staging:~# sudo /etc/init.d/iptables restart
And then simply use the Docker Compose support in JetBrains PyCharm / PhpStorm or your tool of choice:
Cheers
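With the port open, the Docker client (and PyCharm) can be pointed at the remote daemon via the DOCKER_HOST variable. The host name here just follows the example above, and note that an unauthenticated API port like this is reachable by anyone who can connect to it:

```shell
# talk to the remote daemon instead of the local one
export DOCKER_HOST=tcp://staging.nexchnage.ru:2376
docker ps
```

In PyCharm, the same tcp:// URL can be entered under Settings → Build, Execution, Deployment → Docker when configuring the remote interpreter.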