I am using a GitLab server and have two gitlab-runners (one on my local machine and one on a VServer). Both work perfectly with echo and simple stuff like building an Ubuntu server with MySQL and PHP:
stages:
  - dbserver
  - deploy

build:
  stage: dbserver
  image: ubuntu:16.04
  services:
    - mysql:5.7
    - php:7.0
  variables:
    MYSQL_DATABASE: test
    MYSQL_ROOT_PASSWORD: test2
  script:
    - apt-get update -q && apt-get install -qqy --no-install-recommends mysql-client
    - mysql --user=root --password="$MYSQL_ROOT_PASSWORD" --host=mysql < test.sql
I now want to import a database, but I don't get the idea or the technique behind it. How do I import a .sql file that lies on my local PC or server? Do I need to create a Dockerfile myself, or can I do it just with the .gitlab-ci.yml file?
You can use scp to copy the .sql file to the runner.
You may need to add commands to install the OpenSSH client first, e.g.:
script:
  - apt-get update -y && apt-get install openssh-client -y
and then add the scp line before invoking mysql, e.g.:
  - scp user@server:/path/to/file.sql /tmp/temp.sql
  - mysql --user=root --password="$MYSQL_ROOT_PASSWORD" --host=mysql < /tmp/temp.sql
I found a solution that binds a directory from the gitlab-runner machine into the actual container I am using:
sudo nano /etc/gitlab-runner/config.toml
There you change the volumes entry to something like this:
volumes = ["/home/ubuntu/test:/cache"]
/home/ubuntu/test is the directory on the machine and /cache is the one in the container.
Before you do so, I recommend stopping the runner and starting it again afterwards.
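With that bind mount in place, a job can read the file through the container-side path. A minimal sketch based on the example above (assuming test.sql was copied into /home/ubuntu/test on the runner machine):
script:
  - mysql --user=root --password="$MYSQL_ROOT_PASSWORD" --host=mysql < /cache/test.sql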
So I have a simple hello world .NET Core application set up on my local machine, running in a Docker container using docker-compose.
The problem is that when I try to attach the debugger from VS2019 using Debug -> Attach to Process -> Connection Type: Docker (Linux Container) -> select the process and hit Attach,
I get an error stating:
Failed to launch debug adapter 'coreclr'.
Failed to copy files.
Initialization log:
Determining user folder on remote system...
Checking for existing installation of debugging tools...
Downloading debugger launcher...
Creating debugger installation folder: /root/.vs-debugger
Copying debugger launcher to /root/.vs-debugger/GetVsDbg.sh
Failed: Failed to copy files.
The program '[360] bash' has exited with code -1 (0xffffffff).
It seems that for some reason Visual Studio tried to copy the debugger into the running container but failed.
Here are the simple Dockerfile and docker-compose files:
Dockerfile
FROM microsoft/aspnetcore-build:1.1.2
RUN apt-get update && apt-get install -y unzip
RUN curl -sSL \
https://aka.ms/getvsdbgsh | bash /dev/stdin -v vs2019 -l /root/.vs-debugger
COPY node_modules/wait-for-it.sh/bin/wait-for-it /tools/wait-for-it.sh
RUN chmod +x /tools/wait-for-it.sh
ENV DBHOST=dev_mysql WAITHOST=dev_mysql WAITPORT=3306
ENV DOTNET_USE_POLLING_FILE_WATCHER=true
EXPOSE 80/tcp
VOLUME /app
WORKDIR /app
ENTRYPOINT dotnet restore \
&& /tools/wait-for-it.sh $WAITHOST:$WAITPORT --timeout=0 \
&& dotnet watch run --environment=Development
docker-compose.yml
version: "3"
volumes:
  productdata:
networks:
  backend:
services:
  mysql:
    image: "mysql:8.0.0"
    volumes:
      - productdata:/var/lib/mysql
    networks:
      - backend
    environment:
      - MYSQL_ROOT_PASSWORD=mysecret
      - bind-address=0.0.0.0
  mvc:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - .:/app
      - /app/obj
      - /app/bin
      - ~/.nuget:/root/.nuget
      - /root/.nuget/packages/.tools
    ports:
      - 3000:80
    networks:
      - backend
    environment:
      - DBHOST=mysql
      - WAITHOST=mysql
    depends_on:
      - mysql
Note:
- I have already ticked Shared Drives on the Docker host
Any clue about this?
I have a workaround:
Note that you can only do this once per running container, and you have to do it again if the container is re-deployed with new code.
1. Find the container in Visual Studio.
2. Right-click the container and select "Open terminal".
3. Inside the terminal that has opened, run the following commands in this order:
4. apt-get update
5. apt-get install procps -y
6. apt-get install wget -y
7. mkdir /root/.vs-debugger
8. curl -sSL https://aka.ms/getvsdbgsh -o '/root/.vs-debugger/GetVsDbg.sh'
9. bash /root/.vs-debugger/GetVsDbg.sh -v latest -l /vsdbg
10. Change the build mode in VS to Release.
11. Attach the debugger.
You can also run commands 4-9 in one go using the following:
apt-get update && apt-get install procps -y && apt-get install wget -y && mkdir /root/.vs-debugger && curl -sSL https://aka.ms/getvsdbgsh -o '/root/.vs-debugger/GetVsDbg.sh' && bash /root/.vs-debugger/GetVsDbg.sh -v latest -l /vsdbg
It seems that for some reason Visual Studio didn't have access to the docker-users group, even though I had already added my current user account to that group.
The workaround is to create a new Windows user and add it to the docker-users group.
It works like a charm
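For reference, a minimal sketch of that workaround from an elevated command prompt (the username and password here are placeholders):
:: create a fresh Windows user and grant it access to Docker
net user dockeruser SomeP@ssw0rd /add
net localgroup docker-users dockeruser /add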
My Docker image builds fine on OSX:
Docker version 17.12.0-ce, build c97c6d6
docker-compose version 1.18.0, build 8dd22a9
But it doesn't build on Amazon Linux:
Docker version 17.12.0-ce, build 3dfb8343b139d6342acfd9975d7f1068b5b1c3d3
docker-compose version 1.20.1, build 5d8c71b
Full Dockerfile:
# Specify base image
FROM andreptb/oracle-java:8-alpine
# Specify author / maintainer
MAINTAINER Douglas Duhaime <douglas.duhaime@gmail.com>
# Add source to a directory and use that directory
# NB: /app is a reserved directory in tomcat container
ENV APP_PATH="/lts-app"
RUN mkdir "$APP_PATH"
ADD . "$APP_PATH"
WORKDIR "$APP_PATH"
##
# Build BlackLab
##
RUN apk add --update --no-cache \
wget \
tar \
git
# Store the path to the maven home
ENV MAVEN_HOME="/usr/lib/maven"
# Add maven and java to the path
ENV PATH="$MAVEN_HOME/bin:$JAVA_HOME/bin:$PATH"
# Install Maven
RUN MAVEN_VERSION="3.3.9" && \
cd "/tmp" && \
wget "http://archive.apache.org/dist/maven/maven-3/$MAVEN_VERSION/binaries/apache-maven-$MAVEN_VERSION-bin.tar.gz" -O - | tar xzf - && \
mv "/tmp/apache-maven-$MAVEN_VERSION" "$MAVEN_HOME" && \
ln -s "$MAVEN_HOME/bin/mvn" "/usr/bin/mvn" && \
rm -rf "/tmp/*"
# Get the BlackLab source
RUN git clone "git://github.com/INL/BlackLab.git"
# Build BlackLab with Maven
RUN cd "BlackLab" && \
mvn clean install
##
# Build Python + Node dependencies
##
# Install system deps with Alpine Linux package manager
RUN apk add --update --no-cache \
g++ \
gcc \
make \
openssl-dev \
python3-dev \
python \
py-pip \
nodejs
# Install Python dependencies
RUN pip install -r "requirements.txt" && \
npm install --no-optional && \
npm run build
# Store Mongo service name as mongo host
ENV MONGO_HOST=mongo_service
ENV TOMCAT_HOST=tomcat_service
ENV TOMCAT_WEBAPPS=/tomcat_webapps/
# Make ports available
EXPOSE 7082
# Seed the db
CMD npm run seed && \
gunicorn -b 0.0.0.0:7082 --access-logfile - --reload server.app:app
Full docker-compose.yml:
version: '2'
services:
  tomcat_service:
    image: 'bitnami/tomcat:latest'
    ports:
      - '8080:8080'
    volumes:
      - docker-data-tomcat:/bitnami/tomcat/data/
      - docker-data-blacklab:/lts-app/lts/
  mongo_service:
    image: 'mongo'
    command: mongod
    ports:
      - '27017:27017'
  web:
    # gain access to linked containers
    links:
      - mongo_service
      - tomcat_service
    # explicitly declare service dependencies
    depends_on:
      - mongo_service
      - tomcat_service
    # set environment variables
    environment:
      PYTHONUNBUFFERED: 'true'
    # use the image from the Dockerfile in the cwd
    build: .
    ports:
      - '7082:7082'
    volumes:
      - docker-data-tomcat:/tomcat_webapps
      - docker-data-blacklab:/lts-app/lts/
volumes:
  docker-data-tomcat:
  docker-data-blacklab:
The command I'm running is: docker-compose up --build
The result on Amazon Linux is:
Running setup.py install for pymongo: started
Running setup.py install for pymongo: finished with status 'done'
Running setup.py install for pluggy: started
Running setup.py install for pluggy: finished with status 'done'
Running setup.py install for coverage: started
Running setup.py install for coverage: finished with status 'done'
Successfully installed Faker-0.8.12 Flask-0.12.2 Flask-Cors-3.0.3 Jinja2-2.10 MarkupSafe-1.0 Werkzeug-0.14.1 astroid-1.6.2 attrs-17.4.0 backports.functools-lru-cache-1.5 beautifulsoup4-4.5.1 click-6.7 configparser-3.5.0 coverage-4.5.1 enum34-1.1.6 funcsigs-1.0.2 futures-3.2.0 gunicorn-19.7.1 ipaddress-1.0.19 isort-4.3.4 itsdangerous-0.24 lazy-object-proxy-1.3.1 mccabe-0.6.1 more-itertools-4.1.0 pluggy-0.6.0 py-1.5.3 py4j-0.10.6 pylint-1.8.3 pymongo-3.6.1 pytest-3.5.0 pytest-cov-2.5.1 python-dateutil-2.7.2 singledispatch-3.4.0.3 six-1.11.0 text-unidecode-1.2 wrapt-1.10.11
You are using pip version 8.1.2, however version 9.0.3 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
npm WARN deprecated redux-mock-store@1.5.1: breaking changes in minor version
> base62@1.2.7 postinstall /lts-app/node_modules/base62
> node scripts/install-stats.js || exit 0
ERROR: Service 'web' failed to build: The command '/bin/sh -c pip install -r "requirements.txt" && npm install --no-optional && npm run build' returned a non-zero code: 1
Does anyone know what might be causing this discrepancy? The error message from Docker doesn't give many clues. I'd be very grateful for any ideas others can offer!
To solve this problem, I followed @MazelTov's advice: I built the containers on my local OSX development machine, published the images to Docker Cloud, then pulled those images down onto my production server (AWS EC2) and ran them there.
Install Dependencies
I'll try to outline the steps I followed below in case they help others. Please note these steps require you to have docker and docker-compose installed on both your development and production machines. I used the GUI installer to install Docker for Mac.
Build Images
After writing a Dockerfile and docker-compose.yml file, you can build your images with docker-compose up --build.
Upload Images to Docker Cloud
Once the images are built, you can upload them to Docker Cloud with the following steps. First, create an account on Docker Cloud.
Then store your Docker Cloud username in an environment variable, so your ~/.bash_profile contains export DOCKER_ID_USER='yaledhlab' (use your own username though).
Next, log in to your account from your development machine:
docker login
Once you're logged in, list your running containers:
docker ps
This will display something like:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
89478c386661 yaledhlab/let-them-speak-web "/bin/sh -c 'npm run…" About an hour ago Up About an hour 0.0.0.0:7082->7082/tcp letthemspeak_web_1
5e9c75d29051 training/webapp:latest "python app.py" 4 hours ago Up 4 hours 0.0.0.0:5000->5000/tcp heuristic_mirzakhani
890f7f1dc777 bitnami/tomcat:latest "/app-entrypoint.sh …" 4 hours ago Up About an hour 0.0.0.0:8080->8080/tcp letthemspeak_tomcat_service_1
09d74e36584d mongo "docker-entrypoint.s…" 4 hours ago Up About an hour 0.0.0.0:27017->27017/tcp letthemspeak_mongo_service_1
For each of the images you want to publish to Docker Cloud, run:
docker tag image_name $DOCKER_ID_USER/my-uploaded-image-name
docker push $DOCKER_ID_USER/my-uploaded-image-name
For example, to upload mywebapp_web to your user's account on Docker Cloud, you can run:
docker tag mywebapp_web $DOCKER_ID_USER/web
docker push $DOCKER_ID_USER/web
You can then run open https://cloud.docker.com/swarm/$DOCKER_ID_USER/repository/list to see your uploaded images.
Deploy Images
Finally, you can deploy your images on EC2 with the following steps. First, install Docker and Docker-Compose on the Amazon-flavored EC2 instance:
# install docker
sudo yum install docker -y
# start docker
sudo service docker start
# allow ec2-user to run docker
sudo usermod -a -G docker ec2-user
# get the docker-compose binaries
sudo curl -L https://github.com/docker/compose/releases/download/1.20.1/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
# change the permissions on the source
sudo chmod +x /usr/local/bin/docker-compose
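To sanity-check the installation before continuing, you can ask both tools for their versions:
# confirm both binaries are installed and on the PATH
docker --version
docker-compose --version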
Log out, then log back in to update your user's groups, then start a screen session: screen. Once the screen starts, you should be able to add a new docker-compose config file that specifies the path to your deployed images. For example, I needed to fetch the let-them-speak-web image housed within yaledhlab's Docker Cloud account, so I changed the docker-compose.yml file above to the file below, which I named production.yml:
version: '2'
services:
  tomcat_service:
    image: 'bitnami/tomcat:latest'
    ports:
      - '8080:8080'
    volumes:
      - docker-data-tomcat:/bitnami/tomcat/data/
      - docker-data-blacklab:/lts-app/lts/
  mongo_service:
    image: 'mongo'
    command: mongod
    ports:
      - '27017:27017'
  web:
    image: 'yaledhlab/let-them-speak-web'
    # gain access to linked containers
    links:
      - mongo_service
      - tomcat_service
    # explicitly declare service dependencies
    depends_on:
      - mongo_service
      - tomcat_service
    # set environment variables
    environment:
      PYTHONUNBUFFERED: 'true'
    ports:
      - '7082:7082'
    volumes:
      - docker-data-tomcat:/tomcat_webapps
      - docker-data-blacklab:/lts-app/lts/
volumes:
  docker-data-tomcat:
  docker-data-blacklab:
The production compose file can then be run with docker-compose -f production.yml up. Finally, ssh in from another terminal and detach the screen with screen -D.
Hi there, I am new to Docker. I have a docker-compose.yml which looks like this:
version: "3"
services:
  lmm-website:
    image: lmm/lamp:php${PHP_VERSION:-71}
    container_name: ${CONTAINER_NAME:-lmm-website}
    environment:
      HOME: /home/user
    command: supervisord -n
    volumes:
      - ..:/builds/lmm/website
      - db_website:/var/lib/mysql
    ports:
      - 8765:80
      - 12121:443
      - 3309:3306
    networks:
      - ntw
volumes:
  db_website:
networks:
  ntw:
I want to install the Yarn package manager from within the docker-compose file:
sudo apt-get update && sudo apt-get install yarn
I could not figure out how to declare this; I have tried
command: supervisord -n && sudo apt-get update && sudo apt-get install yarn
which fails silently. How do I declare this correctly? Or is docker-compose.yml the wrong place for this?
Why not use a Dockerfile, which is specifically designed for this task?
Change your image property to a build property that links to a Dockerfile.
Your docker-compose.yml would then look like this:
version: "3"
services:
  lmm-website:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: ${CONTAINER_NAME:-lmm-website}
    environment:
      HOME: /home/user
    command: supervisord -n
    volumes:
      - ..:/builds/lmm/website
      - db_website:/var/lib/mysql
    ports:
      - 8765:80
      - 12121:443
      - 3309:3306
    networks:
      - ntw
volumes:
  db_website:
networks:
  ntw:
Then create a text file named Dockerfile in the same path as docker-compose.yml with the following content:
FROM lmm/lamp:php${PHP_VERSION:-71}
RUN apt-get update && apt-get install -y bash
You can add as many OS commands as you want using Dockerfile's RUN (cp, mv, ls, bash...), apart from using other Dockerfile capabilities like ADD, COPY, etc.
+info:
https://docs.docker.com/engine/reference/builder/
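For the Yarn case from the question specifically, the Dockerfile could be extended along these lines; this is a sketch that assumes the lmm/lamp base image is Debian-based, and it follows Yarn's documented apt installation steps:
FROM lmm/lamp:php71
# Add Yarn's apt repository and install it (per the Yarn install docs)
RUN apt-get update && apt-get install -y curl gnupg && \
    curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add - && \
    echo "deb https://dl.yarnpkg.com/debian/ stable main" > /etc/apt/sources.list.d/yarn.list && \
    apt-get update && apt-get install -y yarn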
+live-example:
I made a GitHub project called hello-docker-react. As its name says, it is a docker-react box, and it can serve as an example, since I am installing yarn plus other tools using the procedure I explained above.
In addition, I also start yarn using an entrypoint bash script linked to the docker-compose.yml file via docker-compose's entrypoint property.
https://github.com/lopezator/hello-docker-react
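A rough illustration of that entrypoint pattern (the file name and contents here are hypothetical, not copied from the repo): point the service's entrypoint property at a script, e.g. entrypoint: /entrypoint.sh in docker-compose.yml, and let the script prepare dependencies before handing off to the long-running process:
#!/bin/bash
# entrypoint.sh -- install JS dependencies, then hand control to the main process
yarn install
exec supervisord -n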
You can only do it with a Dockerfile, because the command option in docker-compose.yml only keeps the container alive for as long as the command runs; after that, the container stops.
Try this
command: supervisord -n && apt-get update && apt-get install yarn
Note that sudo is dropped, because sudo doesn't work in Docker containers.
This is my first time trying to help out. I would like you to give this a try (I found it on the internet):
FROM lmm/lamp:php${PHP_VERSION:-71}
USER root
RUN apt-get update && apt-get install -y bash
I am pretty new to Docker, and I am trying to make a container with multiple apps.
Let's say my docker-compose file is like this:
version: '2'
services:
  myapp:
    build: ./dockerfiles/myapp
    volumes:
      - ./www:/var/www
      - ./logs:/var/log
      - ./mysql-data:/var/lib/mysql
      - ./php:/etc/php5
      - ./nginx:/etc/nginx
    ports:
      - "8082:8000"
      - "6606:3306"
    links:
      - mysql:mysql
      - php:php
      - nginx:nginx
  mysql:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: M#yW3Bw35t
      MYSQL_USER: replymwp
      MYSQL_PASSWORD: ZSzLPoOi9wlhFaiJ
  php:
    image: php:5.6-fpm
    links:
      - mysql:db
  nginx:
    image: nginx
    links:
      - php:php
Now, in the myapp Dockerfile, I want to install a package that needs MySQL.
FROM debian:jessie
RUN apt-get update
RUN apt-get install -y apt-show-versions
RUN apt-get install -y wget
RUN wget http://repo.ajenti.org/debian/key -O- | apt-key add -
RUN echo "deb http://repo.ajenti.org/ng/debian main main ubuntu" >> /etc/apt/sources.list
RUN apt-get update
RUN apt-get install -y ajenti
RUN apt-get install -y ajenti-v ajenti-v-ftp-vsftpd ajenti-v-php-fpm ajenti-v-mysql
EXPOSE 8000
ENTRYPOINT ["ajenti-panel"]
Now the problem is, when Docker tries to build my image, it installs php, mysql etc. even though I link them in my docker-compose file. And secondly, when it tries to install mysql, it prompts for a master password and stays blocked at that step, even if I fill something in...
Maybe I am totally wrong in my way of using it?
Any help would be appreciated.
I suppose your ajenti has a dependency on mysql, so if you do apt-get install ajenti, it tries to satisfy that dependency. Specifically, you are installing ajenti-v-mysql, which does seem to have a mysql dependency.
Because you want to run mysql separately, you might need --no-install-recommends. This is a flag for apt-get, so you'd get something like:
apt-get install <packagename> --no-install-recommends
This means you get no recommended packages pulled in, so you might need to figure out which other dependencies you do need.
The php-fpm case has the same issue; I suppose the whole line that includes ajenti-v-php-fpm is a bit too much?
If you're planning on using separate mysql and php containers, then why are you still including the installation in the myapp Dockerfile on this line:
RUN apt-get install -y ajenti-v ajenti-v-ftp-vsftpd ajenti-v-php-fpm ajenti-v-mysql
If you're going to use mysql and php containers, then you don't need them in your app image. This should also take care of your second problem of being prompted for the mysql password.
Keep in mind that you will need to point the mysql hostname and your php configuration at the other services from your myapp configuration, as sketched below. I think you might be better off looking for a tutorial on setting up Docker Compose; you'll have to look yourself to find the most suitable one, but something like this would give you a good start.
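As a rough sketch of that idea (the variable name is illustrative; use whatever your app reads for its database host), the app container receives the database host as configuration instead of bundling MySQL:
myapp:
  build: ./dockerfiles/myapp
  environment:
    DB_HOST: mysql   # resolves to the mysql service on the compose network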
I develop a PHP Symfony project and I use GitLab.
I want to run the unit tests on GitLab CI; for that I use a .gitlab-ci.yml file:
image: php:5.6

# Select what we should cache
cache:
  paths:
    - vendor/

before_script:
  # Install git, the php image doesn't have it installed
  - apt-get update -yqq
  - apt-get install git -yqq
  # Install mysql driver
  - docker-php-ext-install pdo_mysql
  # Install composer
  - curl -sS https://getcomposer.org/installer | php
  # Install all project dependencies
  - php composer.phar install

services:
  - mysql

variables:
  # Configure mysql service (https://hub.docker.com/_/mysql/)
  MYSQL_DATABASE: hello_world_test
  MYSQL_ROOT_PASSWORD: mysql

# We test PHP 5.6 with MySQL
test:mysql:
  script:
    - phpunit --configuration phpunit_mysql.xml --coverage-text -c app/
It currently doesn't work because my hostname isn't resolved inside the Docker container.
I found the solution here: How can you make the docker container use the host machine's /etc/hosts file?
My question is: where do I write the --net=host option?
Thanks.
You need to set the network_mode parameter in the runner's Docker configuration, by editing the config.toml file as (somewhat poorly) described in the gitlab-runner advanced configuration docs.
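For reference, the setting lives in the [runners.docker] section of /etc/gitlab-runner/config.toml; a minimal sketch:
[[runners]]
  # ...
  [runners.docker]
    network_mode = "host"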
You can also do it when you register the runner:
gitlab-runner register --docker-network-mode 'host'
I don't believe you can set it directly from the .gitlab-ci.yml file.