I created a custom configuration for the Docker daemon; here is the config:
{
  "bip": "192.168.1.32/24",
  "default-address-pools": [
    {
      "base": "172.16.200.0/24",
      "size": 24
    },
    {
      "base": "172.16.25.0/24",
      "size": 24
    }
  ],
  "debug": true,
  "hosts": ["tcp://127.0.0.69:4269", "unix:///var/run/docker.sock"],
  "default-gateway": "192.168.1.1",
  "dns": ["8.8.8.8", "8.8.4.4"],
  "experimental": true,
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "20m",
    "max-file": "3",
    "labels": "develope_status",
    "env": "developing"
  }
}
bip is my host IP address and default-gateway is my router's gateway. I created two address pools so Docker can assign IP addresses to containers.
But during the build process the image has no internet access, so it can't run apk update.
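For reference, a minimal sanity check that the daemon actually picked up the config, assuming it lives at the default /etc/docker/daemon.json:

# validate the config file syntax (the --validate flag exists in newer Docker releases, 23.0+)
sudo dockerd --validate --config-file /etc/docker/daemon.json

# restart the daemon so the new settings take effect
sudo systemctl restart docker

# confirm which subnet the default bridge actually received
docker network inspect bridge --format '{{(index .IPAM.Config 0).Subnet}}'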
Here is my docker-compose file
version: "3"
services:
blog:
build: ./
image: blog:1.0
ports:
- "456:80"
- "450:443"
volumes:
- .:/blog
- ./logs:/var/log/apache2
- ./httpd-ssl.conf:/usr/local/apache2/conf/extra/httpd-ssl.conf
container_name: blog
hostname: alpine
command: apk update
networks:
default:
external:
name: vpc
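Since the default network is declared external, the vpc network has to exist before compose runs; a sketch of creating and inspecting it:

# create the external network the compose file expects; Docker carves its
# subnet out of the daemon's default-address-pools
docker network create vpc

# check which subnet it was given
docker network inspect vpc --format '{{(index .IPAM.Config 0).Subnet}}'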
Here is the Dockerfile
FROM httpd:2.4-alpine

ENV REFRESHED_AT=2021-02-01 \
    APACHE_RUN_USER=www-data \
    APACHE_RUN_GROUP=www-data \
    APACHE_LOG_DIR=/var/log/apache2 \
    APACHE_PID_FILE=/var/run/apache2.pid \
    APACHE_RUN_DIR=/var/run/apache2 \
    APACHE_LOCK_DIR=/var/lock/apache2

RUN mkdir -p $APACHE_RUN_DIR $APACHE_LOCK_DIR $APACHE_LOG_DIR \
    && sed -i \
        -e 's/^#\(Include .*httpd-ssl.conf\)/\1/' \
        -e 's/^#\(LoadModule .*mod_ssl.so\)/\1/' \
        -e 's/^#\(LoadModule .*mod_socache_shmcb.so\)/\1/' \
        conf/httpd.conf \
    && echo "Include /blog/blog.conf" >> /usr/local/apache2/conf/httpd.conf

VOLUME ["/blog", "$APACHE_LOG_DIR"]

EXPOSE 80 443
The running container is able to ping Google and run apk update as normal, but if I put RUN apk update inside the Dockerfile, it can't update.
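For what it's worth, two quick checks that can separate a DNS problem from a routing problem during the build (a diagnostic sketch, not a fix; --progress=plain needs BuildKit):

# show the full output of each build step, bypassing the cache
docker build --no-cache --progress=plain .

# retry the build on the host's network stack; if this succeeds, the problem
# is in the daemon's bridge/DNS config rather than in the Dockerfile
docker build --network=host -t blog:1.0 .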
Any help would be great, thank you
I have a docker image in ECR, which is used for my ECS task. The task spins up and runs for a couple of minutes. Then it shuts down, after reporting the following error:
2021-11-07 00:00:58 npm ERR! A complete log of this run can be found in:
2021-11-07 00:00:58 npm ERR! /home/node/.npm/_logs/2021-11-07T00_00_58_665Z-debug.log
2021-11-07 00:00:58 npm ERR! signal SIGTERM
2021-11-07 00:00:58 npm ERR! command sh -c node bin/www
2021-11-07 00:00:58 npm ERR! command failed
2021-11-07 00:00:58 npm ERR! path /usr/src/app
2021-11-06 23:59:25 > my-app@0.0.0 start
2021-11-06 23:59:25 > node bin/www
My Dockerfile looks like:
LABEL maintainer="my-team"
LABEL description="App for AWS ECS"
EXPOSE 8080
WORKDIR /usr/src/app
RUN chown -R node:node /usr/src/app
RUN apk add bash
RUN apk add openssl
COPY --chown=node src/package*.json ./
USER node
ARG NODE_ENV=dev
ENV NODE_ENV ${NODE_ENV}
RUN npm ci
COPY --chown=node ./src/generate-cert.sh ./
RUN ./generate-cert.sh
COPY --chown=node src/ ./
ENTRYPOINT ["npm","start"]
My package.json contains:
{
  "name": "my-app",
  "version": "0.0.0",
  "private": true,
  "scripts": {
    "start": "node ./bin/www",
    "test": "jest --coverage"
  },
The app is provisioned using terraform, with the following task definition:
resource "aws_ecs_task_definition" "task_definition" {
family = "dataviz_task"
network_mode = "awsvpc"
requires_compatibilities = ["FARGATE"]
cpu = "256"
memory = "512"
task_role_arn = aws_iam_role.dataviz_ecs_role.arn
execution_role_arn = aws_iam_role.dataviz_ecs_task_execution_role.arn
container_definitions = jsonencode([{
entryPoint : [
"npm",
"start"
],
environment : [
{ "name" : "ENV", "value" : local.container_environment }
]
essential : true,
image : "${var.account_id}${var.ecr_image_address}:latest",
lifecycle : {
ignore_changes : "image"
}
logConfiguration : {
"logDriver" : "awslogs",
"options" : {
"awslogs-group" : var.log_stream_name,
"awslogs-region" : var.region,
"awslogs-stream-prefix" : "ecs"
}
},
name : local.container_name,
portMappings : [
{
"containerPort" : local.container_port,
"hostPort" : local.host_port,
"protocol" : "tcp"
}
]
}])
}
My application runs locally in Docker, but not when using the same image in AWS ECS.
To run locally, I use a Make command, make restart, which runs this from my Makefile:
build:
	@docker build \
		--build-arg NODE_ENV=local \
		--tag $(DEV_IMAGE_TAG) \
		. > /dev/null

.PHONY: package
package:
	@docker build \
		--tag $(PROD_IMAGE_TAG) \
		--build-arg NODE_ENV=production \
		. > /dev/null

.PHONY: start
start: build
	@docker run \
		--rm \
		--publish 8080:8080 \
		--name $(IMAGE_NAME) \
		--detach \
		--env ENV=local \
		$(DEV_IMAGE_TAG) > /dev/null

.PHONY: stop
stop:
	@docker stop $(IMAGE_NAME) > /dev/null

.PHONY: restart
restart:
ifeq ($(shell (docker ps | grep $(IMAGE_NAME))),)
	@make start > /dev/null
else
	@make stop > /dev/null
	@make start > /dev/null
endif
Why does my Docker image fail when running as a task in AWS ECS (Fargate)?
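One way to get more detail than the npm log alone is to ask ECS why it stopped the task; a diagnostic sketch, where the cluster name and task ARN are placeholders:

# list recently stopped tasks in the cluster (hypothetical cluster name)
aws ecs list-tasks --cluster my-cluster --desired-status STOPPED

# show why ECS stopped the task and each container's exit reason
aws ecs describe-tasks --cluster my-cluster --tasks <task-arn> \
  --query 'tasks[].{stopped:stoppedReason,containers:containers[].[name,reason,exitCode]}'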
I set up a Laravel dev environment using Docker - nginx:stable-alpine, php:8.0-fpm-alpine and mysql:5.7.32. I install Xdebug from my php.dockerfile:
RUN apk --no-cache add pcre-dev ${PHPIZE_DEPS} \
&& pecl install xdebug \
&& docker-php-ext-enable xdebug \
&& apk del pcre-dev ${PHPIZE_DEPS}
And I include two volumes in docker-compose to point PHP at xdebug.ini and error_reporting.ini:
volumes:
- .:/var/www/html
- ../docker/php/conf.d/xdebug.ini:/usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini
- ../docker/php/conf.d/error_reporting.ini:/usr/local/etc/php/conf.d/error_reporting.ini
My xdebug.ini looks like this:
zend_extension=xdebug
[xdebug]
xdebug.mode=develop,debug,trace,profile,coverage
xdebug.start_with_request = yes
xdebug.discover_client_host = 0
xdebug.remote_connect_back = 1
xdebug.client_port = 9003
xdebug.remote_host='host.docker.internal'
xdebug.idekey=VSCODE
When I return phpinfo() I can see that everything looks set up correctly (it shows Xdebug version 3.0.4 is installed), but when I set a breakpoint in VSCode and run the debugger, it's not hitting it.
My launch.json looks like this:
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "XDebug Docker",
            "type": "php",
            "request": "launch",
            "port": 9003,
            "pathMappings": {
                "/var/www/html": "${workspaceFolder}/src"
            }
        }
    ]
}
My folder structure looks like this:
Project
├── docker
│   ├── nginx.dockerfile
│   ├── php.dockerfile
│   ├── nginx
│   │   ├── certs
│   │   └── default.conf
│   └── php
│       └── conf.d
│           ├── error_reporting.ini
│           └── xdebug.ini
└── src (the laravel app)
Xdebug 3 has changed the names of settings. Instead of xdebug.remote_host, you need to use xdebug.client_host, as per the upgrade guide: https://xdebug.org/docs/upgrade_guide#changed-xdebug.remote_host
xdebug.remote_connect_back has also been renamed in favour of xdebug.discover_client_host, but when using Xdebug with Docker, you should leave that set to 0.
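For reference, a sketch of the question's xdebug.ini with the Xdebug 3 setting names applied (paths assume the project layout above, running from the project root; the php service name is an assumption):

# rewrite the ini with the Xdebug 3 setting names
cat > docker/php/conf.d/xdebug.ini <<'EOF'
zend_extension=xdebug

[xdebug]
xdebug.mode=develop,debug,trace,profile,coverage
xdebug.start_with_request=yes
xdebug.discover_client_host=0
xdebug.client_host=host.docker.internal
xdebug.client_port=9003
xdebug.idekey=VSCODE
EOF

# recreate the PHP container so the new ini is picked up
docker-compose up -d --force-recreate php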
I have a problem writing a Dockerfile for my application. My code is below:
# define the base image
FROM ubuntu:latest

# define the image owner
LABEL maintainer="MyCompany"

# update the image's packages
RUN apt-get update && apt-get upgrade -y

# expose port 8089
EXPOSE 8089

# Command to start my docker compose file
CMD ["docker-compose -f compose.yaml up -d"]
# Command to link KafkaConnect with MySql (images in docker compose file)
CMD ["curl -i -X POST -H "Accept:application/json" -H "Content-Type:application/json"
localhost:8083/connectors/ -d "
{ \"name\": \"inventory-connector\",
\"config\": {
\"connector.class\": \"io.debezium.connector.mysql.MySqlConnector\",
\"tasks.max\": \"1\",
\"database.hostname\": \"mysql\",
\"database.port\": \"3306\",
\"database.user\": \"debezium\",
\"database.password\": \"dbz\",
\"database.server.id\": \"184054\",
\"database.server.name\": \"dbserver1\",
\"database.include.list\": \"inventory\",
\"database.history.kafka.bootstrap.servers\": \"kafka:9092\",
\"database.history.kafka.topic\": \"dbhistory.inventory\"
}
}"]
I know there can only be one CMD instruction in a Dockerfile.
How do I run my compose file and then make a cURL call?
You need to use the RUN instruction for this. Check this answer for the difference between RUN and CMD.
If your second CMD is the final command inside the Dockerfile, then just change the following line:
# Command to start my docker compose file
RUN docker-compose -f compose.yaml up -d
If you have more commands to run after the CMDs you have now, try the below:
# Command to start my docker compose file
RUN docker-compose -f compose.yaml up -d
# Command to link KafkaConnect with MySql (images in docker compose file)
RUN curl -i -X POST -H "Accept:application/json" -H "Content-Type:application/json" \
    localhost:8083/connectors/ -d "{ \
    \"name\": \"inventory-connector\", \
    \"config\": { \
    \"connector.class\": \"io.debezium.connector.mysql.MySqlConnector\", \
    \"tasks.max\": \"1\", \
    \"database.hostname\": \"mysql\", \
    \"database.port\": \"3306\", \
    \"database.user\": \"debezium\", \
    \"database.password\": \"dbz\", \
    \"database.server.id\": \"184054\", \
    \"database.server.name\": \"dbserver1\", \
    \"database.include.list\": \"inventory\", \
    \"database.history.kafka.bootstrap.servers\": \"kafka:9092\", \
    \"database.history.kafka.topic\": \"dbhistory.inventory\" \
    } \
    }"
# To set your ENTRYPOINT at the end of the file, uncomment the following line
# ENTRYPOINT ["some-other-command-you-need", "arg1", "arg2"]
Here's another option: run the curl from within the Kafka Connect container that you're creating. It looks something like this:
kafka-connect:
  image: confluentinc/cp-kafka-connect-base:6.2.0
  container_name: kafka-connect
  depends_on:
    - broker
  ports:
    - 8083:8083
  environment:
    CONNECT_BOOTSTRAP_SERVERS: "kafka:9092"
    CONNECT_REST_ADVERTISED_HOST_NAME: "kafka-connect"
    CONNECT_REST_PORT: 8083
    CONNECT_GROUP_ID: kafka-connect
    CONNECT_CONFIG_STORAGE_TOPIC: _kafka-connect-configs
    CONNECT_OFFSET_STORAGE_TOPIC: _kafka-connect-offsets
    CONNECT_STATUS_STORAGE_TOPIC: _kafka-connect-status
    CONNECT_KEY_CONVERTER: io.confluent.connect.avro.AvroConverter
    CONNECT_KEY_CONVERTER_SCHEMA_REGISTRY_URL: 'http://schema-registry:8081'
    CONNECT_VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter
    CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: 'http://schema-registry:8081'
    CONNECT_LOG4J_ROOT_LOGLEVEL: "INFO"
    CONNECT_LOG4J_LOGGERS: "org.apache.kafka.connect.runtime.rest=WARN,org.reflections=ERROR"
    CONNECT_LOG4J_APPENDER_STDOUT_LAYOUT_CONVERSIONPATTERN: "[%d] %p %X{connector.context}%m (%c:%L)%n"
    CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: "1"
    CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: "1"
    CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: "1"
  command:
    - bash
    - -c
    - |
      #
      echo "Installing connector plugins"
      confluent-hub install --no-prompt debezium/debezium-connector-mysql:1.5.0
      #
      echo "Launching Kafka Connect worker"
      /etc/confluent/docker/run &
      #
      echo "Waiting for Kafka Connect to start listening on localhost ⏳"
      while : ; do
        curl_status=$$(curl -s -o /dev/null -w %{http_code} http://localhost:8083/connectors)
        echo -e $$(date) " Kafka Connect listener HTTP state: " $$curl_status " (waiting for 200)"
        if [ $$curl_status -eq 200 ] ; then
          break
        fi
        sleep 5
      done
      echo -e "\n--\n+> Creating Data Generator source"
      curl -s -X PUT -H "Content-Type:application/json" http://localhost:8083/connectors/inventory-connector/config \
        -d '{
          "connector.class": "io.debezium.connector.mysql.MySqlConnector",
          "tasks.max": "1",
          "database.hostname": "mysql",
          "database.port": "3306",
          "database.user": "debezium",
          "database.password": "dbz",
          "database.server.id": "184054",
          "database.server.name": "dbserver1",
          "database.include.list": "inventory",
          "database.history.kafka.bootstrap.servers": "kafka:9092",
          "database.history.kafka.topic": "dbhistory.inventory"
        }'
      sleep infinity
You can see the full Docker Compose here
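Once the stack is up, you can confirm the connector was registered from the host (a sketch, assuming port 8083 is published as above):

# check the connector's state via the Connect REST API
curl -s http://localhost:8083/connectors/inventory-connector/status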
Introduction
I am setting up a project where we try to use docker for everything.
It's a PHP (Symfony) + npm project. We have a working and battle-tested docker-compose.yaml (we have been using this setup for more than a year on several projects).
But to make it friendlier for developers, I came up with a bin-docker folder that, using direnv, is placed first in the user's PATH.
.envrc:
export PATH="$(pwd)/bin-docker:$PATH"
The folder contains files that are supposed to replace the bin files with the in-docker ones:
❯ tree bin-docker
bin-docker
├── _tty.sh
├── composer
├── npm
├── php
└── php-xdebug
E.g. the php file contains:
#!/usr/bin/env bash
set -euo pipefail
IFS=$'\n\t'
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
PROJECT_ROOT="$(dirname "$DIR")"
source ${DIR}/_tty.sh
if [ $(docker-compose ps php | grep Up | wc -l) -gt 0 ]; then
    docker_compose_exec \
        --workdir=/src${PWD:${#PROJECT_ROOT}} \
        php php "$@"
else
    docker_compose_run \
        --entrypoint=/usr/local/bin/php \
        --workdir=/src${PWD:${#PROJECT_ROOT}} \
        php "$@"
fi
npm:
#!/usr/bin/env bash
set -euo pipefail
IFS=$'\n\t'
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
PROJECT_ROOT="$(dirname "$DIR")"
source ${DIR}/_tty.sh
docker_run --init \
    --entrypoint=npm \
    -v "$PROJECT_ROOT":"$PROJECT_ROOT" \
    -w "$(pwd)" \
    -u "$(id -u "$USER"):$(id -g "$USER")" \
    mangoweb/mango-cli:v2.3.2 "$@"
It works great: you can simply use Symfony's bin/console and it will "magically" run in the Docker container.
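In day-to-day use the flow looks like this (a sketch; it assumes direnv is hooked into the shell):

# trust the .envrc once, so direnv prepends bin-docker to PATH
direnv allow

# from then on, plain commands resolve to the wrappers
which php   # -> ./bin-docker/php
php -v      # transparently runs inside the php service container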
The problem
The only problem, and my question, is how to properly map the host user to the container's user. Properly for all major OSes (macOS, Windows (WSL), Linux), because our developers use all of them. I will talk about npm, because it uses a public image anyone can download.
When I do not map a user at all, on Linux the files created in the mounted volume are owned by root, and users have to chmod the files afterwards. Not ideal at all.
When I use -u "$(id -u "$USER"):$(id -g "$USER")" it breaks, because the in-container user no longer has the rights to create the cache folder in the container; also, on macOS the standard UID is 501, which breaks everything.
What is the proper way to map the user, or is there another, better way to do any part of this setup?
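(For context, a sketch of one common workaround, not a definitive fix: keep the -u mapping but point HOME and npm's cache at a path the arbitrary UID can write to; npm reads npm_config_cache from the environment, and /tmp is assumed writable in the image.)

# keep the host-UID mapping, but give that UID a writable HOME and npm cache
docker run --rm --init \
  --entrypoint=npm \
  -v "$PROJECT_ROOT":"$PROJECT_ROOT" \
  -w "$(pwd)" \
  -u "$(id -u):$(id -g)" \
  -e HOME=/tmp \
  -e npm_config_cache=/tmp/.npm \
  mangoweb/mango-cli:v2.3.2 "$@"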
Attachments:
docker-compose.yaml (shortened to remove sensitive or unimportant info):
version: '2.4'

x-php-service-base: &php-service-base
  restart: on-failure
  depends_on:
    - redis
    - elasticsearch
  working_dir: /src
  volumes:
    - .:/src:cached
  environment:
    APP_ENV: dev
    SESSION_STORE_URI: tcp://redis:6379

services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.2.3
    environment:
      discovery.type: single-node
      xpack.security.enabled: "false"
  kibana:
    image: docker.elastic.co/kibana/kibana:6.2.3
    environment:
      SERVER_NAME: localhost
      ELASTICSEARCH_URL: http://elasticsearch:9200
    depends_on:
      - elasticsearch
  redis:
    image: redis:4.0.8-alpine
  php:
    <<: *php-service-base
    image: custom-php-image:7.2
  php-xdebug:
    <<: *php-service-base
    image: custom-php-image-with-xdebug:7.2
  nginx:
    image: custom-nginx-image
    restart: on-failure
    depends_on:
      - php
      - php-xdebug
_tty.sh (only to properly pass TTY status to docker run):
if [ -t 1 ]; then
    DC_INTERACTIVITY=""
else
    DC_INTERACTIVITY="-T"
fi

function docker_run {
    if [ -t 1 ]; then
        docker run --rm --interactive --tty=true "$@"
    else
        docker run --rm --interactive --tty=false "$@"
    fi
}

function docker_compose_run {
    docker-compose run --rm $DC_INTERACTIVITY "$@"
}

function docker_compose_exec {
    docker-compose exec $DC_INTERACTIVITY "$@"
}
This may answer your problem.
I came across a tutorial on how to set up user namespaces in Ubuntu. Note that the use case in the tutorial is using nvidia-docker and restricting permissions. In particular, Dr. Kinghorn states in his post:
The main idea of a user-namespace is that a processes UID (user ID) and GID (group ID) can be different inside and outside of a containers namespace. The significant consequence of this is that a container can have it's root process mapped to a non-privileged user ID on the host.
Which sounds like what you're looking for. Hope this helps.
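A minimal sketch of what enabling it looks like on the daemon side (caution: under remapped storage, existing images and containers are no longer visible, so try this on a non-critical host first; the tee call overwrites any existing daemon.json, merge by hand if you already have one):

# enable user-namespace remapping with the automatically created dockremap user
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "userns-remap": "default"
}
EOF
sudo systemctl restart docker

# root inside a container is now a subordinate, non-privileged UID on the host
docker run --rm alpine id -u   # still prints 0 inside the container's namespace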
I have a Docker container running Bind9.
Inside the container, named is running as the bind user:
bind 1 0 0 19:23 ? 00:00:00 /usr/sbin/named -u bind -g
In my named.conf.local I have:
channel queries_log {
    file "/var/log/bind/queries.log";
    print-time yes;
    print-category yes;
    print-severity yes;
    severity info;
};

category queries { queries_log; };
After starting the container, the log file is created:
-rw-r--r-- 1 bind bind 0 Nov 14 19:23 queries.log
but it always remains empty.
On the other hand, the 'queries' logs are still visible using docker logs ...:
14-Nov-2018 19:26:10.463 client @0x7f179c10ece0 ...
Using the same config without Docker works fine.
My docker-compose.yml
version: '3.6'

services:
  bind9:
    build: .
    image: bind9:1.9.11.3
    container_name: bind9
    ports:
      - "53:53/udp"
      - "53:53/tcp"
    volumes:
      - ./config/named.conf.options:/etc/bind/named.conf.options
      - ./config/named.conf.local:/etc/bind/named.conf.local
My Dockerfile
FROM ubuntu:18.04

ENV BIND_USER=bind \
    BIND_VERSION=1:9.11.3

RUN apt-get update -qq \
    && DEBIAN_FRONTEND=noninteractive apt-get --no-install-recommends install -y \
        bind9=${BIND_VERSION}* \
        bind9-host=${BIND_VERSION}* \
        dnsutils \
    && rm -rf /var/lib/apt/lists/*

COPY entrypoint.sh /sbin/entrypoint.sh
RUN chmod 755 /sbin/entrypoint.sh

ENTRYPOINT ["/sbin/entrypoint.sh"]
CMD ["/usr/sbin/named"]
From the named manual:

-f
    Run the server in the foreground (i.e. do not daemonize).

-g
    Run the server in the foreground and force all logging to stderr.

Try to use -f instead of -g.
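A sketch of applying that in this setup, assuming the entrypoint passes the command through unchanged:

# in docker-compose.yml, override the command so named stays in the foreground
# without forcing all logging to stderr:
#   command: ["/usr/sbin/named", "-u", "bind", "-f"]

# recreate the container, then confirm queries reach the channel's file
docker-compose up -d --force-recreate bind9
docker exec bind9 tail -f /var/log/bind/queries.log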