I have a simple hello-world .NET Core application set up on my local machine, running in a Docker container via docker-compose.
The problem is that when I try to attach the debugger from VS2019 (Debug -> Attach to Process -> Connection type: Docker (Linux Container) -> select the process and hit Attach),
I get this error:
Failed to launch debug adapter 'coreclr'.
Failed to copy files.
Initialization log:
Determining user folder on remote system...
Checking for existing installation of debugging tools...
Downloading debugger launcher...
Creating debugger installation folder: /root/.vs-debugger
Copying debugger launcher to /root/.vs-debugger/GetVsDbg.sh
Failed: Failed to copy files.
The program '[360] bash' has exited with code -1 (0xffffffff).
It seems Visual Studio tried to copy the debugger into the running container but failed for some reason.
Here are the simple Dockerfile and docker-compose files:
Dockerfile
FROM microsoft/aspnetcore-build:1.1.2
RUN apt-get update && apt-get install -y unzip
RUN curl -sSL \
https://aka.ms/getvsdbgsh | bash /dev/stdin -v vs2019 -l /root/.vs-debugger
COPY node_modules/wait-for-it.sh/bin/wait-for-it /tools/wait-for-it.sh
RUN chmod +x /tools/wait-for-it.sh
ENV DBHOST=dev_mysql WAITHOST=dev_mysql WAITPORT=3306
ENV DOTNET_USE_POLLING_FILE_WATCHER=true
EXPOSE 80/tcp
VOLUME /app
WORKDIR /app
ENTRYPOINT dotnet restore \
&& /tools/wait-for-it.sh $WAITHOST:$WAITPORT --timeout=0 \
&& dotnet watch run --environment=Development
docker-compose.yml
version: "3"
volumes:
productdata:
networks:
backend:
services:
mysql:
image: "mysql:8.0.0"
volumes:
- productdata:/var/lib/mysql
networks:
- backend
environment:
- MYSQL_ROOT_PASSWORD=mysecret
- bind-address=0.0.0.0
mvc:
build:
context: .
dockerfile: Dockerfile
volumes:
- .:/app
- /app/obj
- /app/bin
- ~/.nuget:/root/.nuget
- /root/.nuget/packages/.tools
ports:
- 3000:80
networks:
- backend
environment:
- DBHOST=mysql
- WAITHOST=mysql
depends_on:
- mysql
Note:
- I have already ticked the Shared Drives option on the Docker host.
Any clue about this?
I have a workaround.
Note that you only need to do this once per running container, but you have to do it again if the container is re-deployed with new code.
Find the container in Visual Studio
Right-click on the container and select “Open terminal”
Inside the terminal that has opened, run the following commands in this order:
apt-get update
apt-get install procps -y
apt-get install wget -y
mkdir /root/.vs-debugger
curl -sSL https://aka.ms/getvsdbgsh -o '/root/.vs-debugger/GetVsDbg.sh'
bash /root/.vs-debugger/GetVsDbg.sh -v latest -l /vsdbg
Change the build mode in VS to Release
Attach the debugger
You can also run all of the shell commands above in one go with the following command:
apt-get update && apt-get install procps -y && apt-get install wget -y && mkdir /root/.vs-debugger && curl -sSL https://aka.ms/getvsdbgsh -o '/root/.vs-debugger/GetVsDbg.sh' && bash /root/.vs-debugger/GetVsDbg.sh -v latest -l /vsdbg
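If the container gets re-deployed often, the same steps can be baked into the image instead, so the debugger survives re-creation. A sketch (assuming a Debian-based image like the one in the question; it simply mirrors the commands above as a single Dockerfile layer):

# install the VS debugger at build time instead of per-container
RUN apt-get update && apt-get install -y procps wget curl \
 && mkdir -p /root/.vs-debugger \
 && curl -sSL https://aka.ms/getvsdbgsh -o /root/.vs-debugger/GetVsDbg.sh \
 && bash /root/.vs-debugger/GetVsDbg.sh -v latest -l /vsdbg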
It seems that, for some reason, Visual Studio didn't have access to the docker-users group, even though I had already added my current user account to it.
The workaround is to create a new Windows user and add it to the docker-users group.
It works like a charm.
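If you'd rather not create a separate account, the usual way to add an existing user to that group is from an elevated command prompt (a sketch; replace the account name, then sign out and back in so the membership takes effect):

net localgroup docker-users "YOURDOMAIN\YourUser" /add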
Related
I am developing with Next.js on my macOS host, using Docker running Ubuntu 18 and bind-mounting my development directory into the container. With SWC minify in Next.js 12+, is there a way to avoid copying the whole host build context into the image and running a yarn install build step (plus a persistent volume) just to get a working development environment, given the missing @next/swc-linux-x64-gnu binary?
Relevant part of docker-compose.yaml
# Next.js Client and Server
my-nextjs:
  container_name: my-nextjs
  build:
    context: ./www
    dockerfile: Dockerfile
  volumes:
    # bind mounting my development directory to /app in Docker
    - ./www:/app/
  ports:
    - "3911:3000"
I know that because it will be running on Linux and not macOS, Next.js SWC minify will need a different binary, so I explicitly add that binary to my dependencies on my host machine with Yarn 3 (set to nodeLinker: node-modules in .yarnrc).
But that does not install the package into my host node_modules folder, even though it is zipped in the .yarn/cache folder and appears in my package.json dependencies. I'm not sure why, and I think it's the cause of this problem.
Dockerfile:
FROM ubuntu:18.04
# Install Node.js
RUN apt-get update
RUN apt-get -y install curl
RUN apt-get -y install sudo
RUN useradd -m docker && echo "docker:docker" | chpasswd && adduser docker sudo
RUN curl -fsSL https://deb.nodesource.com/setup_14.x | sudo -E bash -
RUN apt-get install -y nodejs
RUN apt-get install -y build-essential
# Install Yarn
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | sudo apt-key add -
RUN echo "deb https://dl.yarnpkg.com/debian/ stable main" | sudo tee /etc/apt/sources.list.d/yarn.list
RUN sudo apt-get update
RUN sudo apt-get install yarn
WORKDIR /app
EXPOSE 3000
ENV PORT 3000
ENV NEXT_TELEMETRY_DISABLED 1
CMD ["yarn", "dev"]
When I try to run this container I get:
❯ docker compose up my-nextjs
[+] Running 1/1
⠿ Container my-nextjs Recreated 0.1s
Attaching to my-nextjs
my-nextjs | ready - started server on 0.0.0.0:3000, url: http://localhost:3000
my-nextjs | info - Loaded env from /app/.env
my-nextjs | warn - SWC minify release candidate enabled. https://nextjs.org/docs/messages/swc-minify-enabled
my-nextjs | info - Attempted to load @next/swc-linux-x64-gnu, but it was not installed
my-nextjs | info - Attempted to load @next/swc-linux-x64-gnux32, but it was not installed
my-nextjs | info - Attempted to load @next/swc-linux-x64-musl, but it was not installed
my-nextjs | error - Failed to load SWC binary for linux/x64, see more info here: https://nextjs.org/docs/messages/failed-loading-swc
my-nextjs exited with code 1
I have figured out a workaround for this: adding - /app/node_modules to the docker-compose volumes so the container persists that folder, and adding this to the Dockerfile:
COPY . /app/
RUN yarn install
This gets the correct binary and persists it in the container's own node_modules, layered over the bind-mounted one. But I'd like to know whether there's a way to do it without copying the whole build context. I'm sure that if Yarn 3 actually installed @next/swc-linux-x64-gnu into node_modules on my host, this would be possible.
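For reference, the workaround expressed in the compose file (same service as above; the anonymous volume is what lets the container keep its own node_modules instead of the bind-mounted one):

my-nextjs:
  volumes:
    - ./www:/app/        # bind mount the source for development
    - /app/node_modules  # anonymous volume: container keeps the node_modules from `yarn install`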
Any thoughts?
In <repoDir>/.yarnrc, add:
--ignore-platform true
(Valid for yarn 1. You might want to double-check for later versions.)
Then run:
yarn add --dev @next/swc-linux-x64-gnu
This will install the Linux version of SWC in your node_modules, regardless of the platform, and you'll be able to use that in docker containers.
Trade-offs:
you lose the platform check for packages you install in the future
it won't be updated automatically when you update Next; you'll have to do it manually
Is there a way to avoid rebuilding my Docker image each time I make a change in my source code?
I think I have already optimized my Dockerfile enough to reduce the build time, but it still takes several commands and some waiting for what is sometimes just one added line of code. That's longer than a simple CTRL + S and checking the result.
The commands I have to do for each little update in my code:
docker-compose down
docker-compose build
docker-compose up
Here's my Dockerfile:
FROM python:3-slim as development
ENV PYTHONUNBUFFERED=1
COPY ./requirements.txt /requirements.txt
COPY ./scripts /scripts
EXPOSE 80
RUN apt-get update && \
apt-get install -y \
bash \
build-essential \
gcc \
libffi-dev \
musl-dev \
openssl \
wget \
postgresql \
postgresql-client \
libglib2.0-0 \
libnss3 \
libgconf-2-4 \
libfontconfig1 \
libpq-dev && \
pip install -r /requirements.txt && \
mkdir -p /vol/web/static && \
chmod -R 755 /vol && \
chmod -R +x /scripts
COPY ./files /files
WORKDIR /files
ENV PATH="/scripts:/py/bin:$PATH"
CMD ["run.sh"]
Here's my docker-compose.yml file:
version: '3.9'
x-database-variables: &database-variables
  POSTGRES_DB: ${POSTGRES_DB}
  POSTGRES_USER: ${POSTGRES_USER}
  POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
  ALLOWED_HOSTS: ${ALLOWED_HOSTS}
x-app-variables: &app-variables
  <<: *database-variables
  POSTGRES_HOST: ${POSTGRES_HOST}
  SPOTIPY_CLIENT_ID: ${SPOTIPY_CLIENT_ID}
  SPOTIPY_CLIENT_SECRET: ${SPOTIPY_CLIENT_SECRET}
  SECRET_KEY: ${SECRET_KEY}
  CLUSTER_HOST: ${CLUSTER_HOST}
  DEBUG: 0
services:
  website:
    build:
      context: .
    restart: always
    volumes:
      - static-data:/vol/web
    environment: *app-variables
    depends_on:
      - postgres
  postgres:
    image: postgres
    restart: always
    environment: *database-variables
    volumes:
      - db-data:/var/lib/postgresql/data
  proxy:
    build:
      context: ./proxy
    restart: always
    depends_on:
      - website
    ports:
      - 80:80
      - 443:443
    volumes:
      - static-data:/vol/static
      - ./files/templates:/var/www/html
      - ./proxy/default.conf:/etc/nginx/conf.d/default.conf
      - ./etc/letsencrypt:/etc/letsencrypt
volumes:
  static-data:
  db-data:
Mount your script files directly in the container via docker-compose.yml:
volumes:
- ./scripts:/scripts
- ./files:/files
Keep in mind that you have to prefix the container-side paths accordingly if you use a WORKDIR in your Dockerfile.
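Applied to the website service from the question, that could look roughly like this (a sketch; the /scripts and /files targets mirror the COPY lines in the Dockerfile):

website:
  build:
    context: .
  restart: always
  volumes:
    - static-data:/vol/web
    - ./scripts:/scripts   # edits to scripts show up without a rebuild
    - ./files:/files       # WORKDIR is /files, so the app picks up code changes
  environment: *app-variables
  depends_on:
    - postgres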
Quick answer
Is there a way to avoid rebuilding my Docker image each time I make a change in my source code ?
If your app needs a build step, you cannot skip it.
In your case, you can start the requirements (postgres, proxy, etc.) separately from the Python app, so on each source-code modification you only need to restart your Python app, not the entire stack.
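For example, with the service names from the compose file above, the dependencies can stay up while only the app service is rebuilt:

docker-compose up -d                    # bring the whole stack up once
docker-compose up -d --build website    # after a code change, rebuild and restart only the app service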
Docker purpose
Docker's main goal is to let developers package applications into containers that are easy to deploy anywhere, simplifying your infrastructure.
So, in this sense, Docker is not strictly for the development stage. At the development stage, the programmer should use a specialized IDE (Eclipse, IntelliJ, Visual Studio, etc.) to create and update the source code. Also, some languages like Java and C#, and frameworks like React/Angular, need a build stage.
These IDEs have features like hot reload (automatic application updates when the source code changes), variable and method auto-completion, etc. These features help reduce development time.
Docker for source code changes by developer
It is not Docker's main goal, but if you don't have a specialized IDE or you are in a very limited development workspace (no admin permissions, network restrictions, Windows quirks, blocked ports, etc.), Docker can rescue you.
If you are a Java developer (for instance), you normally need to install Java on your machine, install an IDE like Eclipse, configure Maven, and so on. With Docker, you could instead create an image with all the required tools and then establish a kind of connection between your source code and the Docker container. This connection in Docker is called volumes:
docker run --name my_job -p 9000:8080 \
-v /my/python/microservice:/src \
python-workspace-all-in-one
In the previous example, you can code directly in /my/python/microservice, and you only need to enter my_job and run python /src/main.py. It works without Python or any other requirement on your host machine; everything lives in python-workspace-all-in-one.
For technologies that need a build process (Java and C#), there is a time penalty because the developer has to perform a build on every source-code change. This is not required when using a specialized IDE, as explained above.
For technologies that don't require a build process (like PHP; just installing libraries/dependencies), Docker works almost the same as a specialized IDE.
Docker for local development with hot-reload
In your case, your app is based on Python. Python doesn't require a build process, just the installation of libraries. So if you want to develop with Python using Docker instead of the classic way (install Python locally, run python app.py, etc.), follow these steps:
Don't copy your source code to the container
Just pass the requirements.txt to the container
Execute the pip install inside of container
Run you app inside of container
Create a docker volume : your source code -> internal folder on container
Here's an example for a Python framework (MkDocs) with hot reload:
FROM python:3
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY requirements.txt /usr/src/app
RUN pip install -r requirements.txt
CMD [ "mkdocs", "serve", "--dev-addr=0.0.0.0:8000" ]
and how to build it as the dev version:
docker build -t myapp-dev .
and how to run it with a volume that syncs your changes into the container:
docker run --name myapp-dev -it --rm -p 8000:8000 -v $(pwd):/usr/src/app myapp-dev
As a summary, this would be the flow to run your apps with Docker at the development stage:
start the requirements before the app (database, APIs, etc.)
create a special Dockerfile for the development stage
build the Docker image for development purposes
run the app, syncing the source code with the container (-v)
the developer modifies the source code
use some kind of hot-reload library for Python if you can (see the sketch after this list)
the app is ready to be opened from a browser
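If this is a Django project (the SECRET_KEY/ALLOWED_HOSTS variables suggest it, but that is an assumption), its development server already reloads on change, so the hot-reload step can be a development-time command override plus the bind mount, e.g.:

website:
  volumes:
    - ./files:/files                               # sync the source into the container
  command: python manage.py runserver 0.0.0.0:80   # assumes manage.py lives in /files; auto-reloads on edits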
Docker for local development without hot-reload
If you cannot use a hot-reload library, you will need to build and run whenever you want to test your source-code modifications. In this case, you should copy the source code into the container instead of synchronizing it with volumes as in the previous approach:
FROM python:3
RUN mkdir -p /usr/src/app
COPY . /usr/src/app
WORKDIR /usr/src/app
RUN pip install -r requirements.txt
RUN mkdocs build
WORKDIR /usr/src/app/site
CMD ["python", "-m", "http.server", "8000" ]
Steps should be:
start the requirements before the app (database, apis, etc)
create a special Dockerfile for the development stage
developer modify the source code
build
docker build -t myapp-dev .
run
docker run --name myapp-dev -it --rm -p 8000:8000 myapp-dev
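Both steps can be chained so each iteration is a single command:

docker build -t myapp-dev . && docker run --name myapp-dev -it --rm -p 8000:8000 myapp-dev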
I have a simple Dockerfile
FROM python:3.8-slim-buster
RUN apt-get update && apt-get install
RUN apt-get install -y \
curl \
gcc \
make \
python3-psycopg2 \
postgresql-client \
libpq-dev
RUN mkdir -p /var/www/myapp
WORKDIR /var/www/myapp
COPY . /var/www/myapp
RUN chmod 700 ./scripts/*.sh
And an associated docker-compose file
version: "3"
volumes:
postgresdata:
services:
myapp:
image: ralston3/myapp_api:prod-latest
tty: true
command: /bin/bash -c "/var/www/myapp/scripts/myscript.sh && echo 'hello world'"
ports:
- 8000:8000
volumes:
- .:/var/www/myapp
environment:
SOME_ENV_VARS=SOME_VARIABLE
# ... more here
depends_on:
- redis
- postgresql
# ... other docker services defined below
When I run docker-compose up via:
docker-compose -f /path/to/docker-compose.yml up
My myapp container/service fails with myapp_myapp_1 exited with code 127, along with another error: myapp_1 | /bin/sh: 1: /var/www/myapp/scripts/myscript.sh: not found
Further, if I exec into the myapp container via docker exec -it {CONTAINER_ID} /bin/bash, I can clearly see that all of my files are there. I can literally run /var/www/myapp/scripts/myscript.sh and it works fine.
However, there seems to be some issue with docker-compose (which could totally be my mistake). I'm just confused as to how I can exec into the container and clearly see the files there, yet docker-compose exits with 127 saying "No such file or directory".
You are bind-mounting the current directory into /var/www/myapp, so it may be that your local directory is "hiding/overwriting" the container directory. Try removing the volumes declaration for your myapp service; if that works, you know the bind mount is causing the issue.
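A quick way to test that is to temporarily drop the bind mount for that service and bring the stack up again, e.g. (a sketch against the compose file above):

myapp:
  image: ralston3/myapp_api:prod-latest
  tty: true
  command: /bin/bash -c "/var/www/myapp/scripts/myscript.sh && echo 'hello world'"
  # volumes:
  #   - .:/var/www/myapp   # commented out: if the error goes away, the bind mount was hiding the image's files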
Unrelated to your question, but a problem you will also encounter: you're installing Python a second time, on top of the version pre-installed in the python Docker image.
Either switch to debian:buster as the base image, or don't bother installing anything with apt-get and instead just pip install your dependencies, like psycopg.
See https://pythonspeed.com/articles/official-python-docker-image/ for explanation why you don't need to do this.
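A minimal sketch of that suggestion (assumptions: the psycopg2-binary wheel is an acceptable substitute for the apt packages, and nothing else in the image needs the compiler toolchain or postgresql-client):

FROM python:3.8-slim-buster
# the psycopg2-binary wheel bundles libpq, so gcc/make/libpq-dev are not needed
RUN pip install --no-cache-dir psycopg2-binary
RUN mkdir -p /var/www/myapp
WORKDIR /var/www/myapp
COPY . /var/www/myapp
RUN chmod 700 ./scripts/*.sh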
In my case there were two stages: builder and runner.
I was building an executable in the builder stage and running it with an Alpine image in the runner stage.
My mistake was that I didn't use the Alpine variant for the builder. For example, I used golang:1.20, but when I switched to golang:1.20-alpine the problem went away.
Make sure you use the correct version and tag!
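For illustration, a minimal multi-stage sketch of that fix (image tags and paths are assumptions):

# builder: use the alpine variant so the binary is linked against musl, like the runtime image
FROM golang:1.20-alpine AS builder
WORKDIR /src
COPY . .
RUN go build -o /out/app .

# runner: a plain alpine image can now execute the musl-linked binary
FROM alpine:3.18
COPY --from=builder /out/app /usr/local/bin/app
CMD ["app"]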
I run docker-compose up -d and it reports this error:
D:\project\c\test\docker>docker-compose up -d
Starting debug ... error
ERROR: for debug Cannot start service gdbserver: error while creating mount source path '/host_mnt/d/project/c/test/docker': mkdir /host_mnt/d: file exists
ERROR: for gdbserver Cannot start service gdbserver: error while creating mount source path '/host_mnt/d/project/c/test/docker': mkdir /host_mnt/d: file exists
ERROR: Encountered errors while bringing up the project.
Dockerfile
FROM ubuntu:cosmic
########################################################
# Essential packages for remote debugging and login in
########################################################
RUN apt-get update && apt-get upgrade -y && apt-get install -y \
apt-utils gcc g++ openssh-server cmake build-essential gdb gdbserver rsync vim
RUN mkdir /var/run/sshd
RUN echo 'root:root' | chpasswd
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
# 22 for ssh server. 7777 for gdb server.
EXPOSE 22 7777
RUN useradd -ms /bin/bash debugger
RUN echo 'debugger:pwd' | chpasswd
########################################################
# Add custom packages and development environment here
########################################################
########################################################
CMD ["/usr/sbin/sshd", "-D"]
docker-compose.yaml
# From: https://github.com/shuhaoliu/docker-clion-dev/blob/master/docker-compose.yml
version: '3'
services:
  gdbserver:
    build:
      context: ./
      dockerfile: ./Dockerfile
    image: clion_dev
    security_opt:
      - seccomp:unconfined
    container_name: debug
    ports:
      - "7776:22"
      - "7777:7777"
    volumes:
      - .:/home/debugger/code
    working_dir: /home/debugger/code
    hostname: debug
I also experienced this issue (using Docker Desktop for Windows). For me, I simply issued a restart for Docker by using the following steps:
Locate the Docker Desktop Icon (in the Windows Task Bar)
Right-Click the Docker Desktop Icon and choose "Restart..."
Click the "Restart" button on the confirmation dialog that appears
When Docker Desktop had restarted, I then:
Checked for running containers using docker-compose ps
Stopped any running containers using docker-compose down
Started the containers using docker-compose up
And Voila! All containers restarted successfully with no mount errors.
Right-click the Docker icon in the Windows system tray > Settings > Shared Drives > Reset credentials. For me, that fixed the problem.
My Docker container builds fine on OSX:
Docker version 17.12.0-ce, build c97c6d6
docker-compose version 1.18.0, build 8dd22a9
But doesn't build on Amazon Linux:
Docker version 17.12.0-ce, build 3dfb8343b139d6342acfd9975d7f1068b5b1c3d3
docker-compose version 1.20.1, build 5d8c71b
Full Dockerfile:
# Specify base image
FROM andreptb/oracle-java:8-alpine
# Specify author / maintainer
MAINTAINER Douglas Duhaime <douglas.duhaime@gmail.com>
# Add source to a directory and use that directory
# NB: /app is a reserved directory in tomcat container
ENV APP_PATH="/lts-app"
RUN mkdir "$APP_PATH"
ADD . "$APP_PATH"
WORKDIR "$APP_PATH"
##
# Build BlackLab
##
RUN apk add --update --no-cache \
wget \
tar \
git
# Store the path to the maven home
ENV MAVEN_HOME="/usr/lib/maven"
# Add maven and java to the path
ENV PATH="$MAVEN_HOME/bin:$JAVA_HOME/bin:$PATH"
# Install Maven
RUN MAVEN_VERSION="3.3.9" && \
cd "/tmp" && \
wget "http://archive.apache.org/dist/maven/maven-3/$MAVEN_VERSION/binaries/apache-maven-$MAVEN_VERSION-bin.tar.gz" -O - | tar xzf - && \
mv "/tmp/apache-maven-$MAVEN_VERSION" "$MAVEN_HOME" && \
ln -s "$MAVEN_HOME/bin/mvn" "/usr/bin/mvn" && \
rm -rf "/tmp/*"
# Get the BlackLab source
RUN git clone "git://github.com/INL/BlackLab.git"
# Build BlackLab with Maven
RUN cd "BlackLab" && \
mvn clean install
##
# Build Python + Node dependencies
##
# Install system deps with Alpine Linux package manager
RUN apk add --update --no-cache \
g++ \
gcc \
make \
openssl-dev \
python3-dev \
python \
py-pip \
nodejs
# Install Python dependencies
RUN pip install -r "requirements.txt" && \
npm install --no-optional && \
npm run build
# Store Mongo service name as mongo host
ENV MONGO_HOST=mongo_service
ENV TOMCAT_HOST=tomcat_service
ENV TOMCAT_WEBAPPS=/tomcat_webapps/
# Make ports available
EXPOSE 7082
# Seed the db
CMD npm run seed && \
gunicorn -b 0.0.0.0:7082 --access-logfile - --reload server.app:app
Full docker-compose.yml
version: '2'
services:
  tomcat_service:
    image: 'bitnami/tomcat:latest'
    ports:
      - '8080:8080'
    volumes:
      - docker-data-tomcat:/bitnami/tomcat/data/
      - docker-data-blacklab:/lts-app/lts/
  mongo_service:
    image: 'mongo'
    command: mongod
    ports:
      - '27017:27017'
  web:
    # gain access to linked containers
    links:
      - mongo_service
      - tomcat_service
    # explicitly declare service dependencies
    depends_on:
      - mongo_service
      - tomcat_service
    # set environment variables
    environment:
      PYTHONUNBUFFERED: 'true'
    # use the image from the Dockerfile in the cwd
    build: .
    ports:
      - '7082:7082'
    volumes:
      - docker-data-tomcat:/tomcat_webapps
      - docker-data-blacklab:/lts-app/lts/
volumes:
  docker-data-tomcat:
  docker-data-blacklab:
The command I'm running is: docker-compose up --build
The result on Amazon Linux is:
Running setup.py install for pymongo: started
Running setup.py install for pymongo: finished with status 'done'
Running setup.py install for pluggy: started
Running setup.py install for pluggy: finished with status 'done'
Running setup.py install for coverage: started
Running setup.py install for coverage: finished with status 'done'
Successfully installed Faker-0.8.12 Flask-0.12.2 Flask-Cors-3.0.3 Jinja2-2.10 MarkupSafe-1.0 Werkzeug-0.14.1 astroid-1.6.2 attrs-17.4.0 backports.functools-lru-cache-1.5 beautifulsoup4-4.5.1 click-6.7 configparser-3.5.0 coverage-4.5.1 enum34-1.1.6 funcsigs-1.0.2 futures-3.2.0 gunicorn-19.7.1 ipaddress-1.0.19 isort-4.3.4 itsdangerous-0.24 lazy-object-proxy-1.3.1 mccabe-0.6.1 more-itertools-4.1.0 pluggy-0.6.0 py-1.5.3 py4j-0.10.6 pylint-1.8.3 pymongo-3.6.1 pytest-3.5.0 pytest-cov-2.5.1 python-dateutil-2.7.2 singledispatch-3.4.0.3 six-1.11.0 text-unidecode-1.2 wrapt-1.10.11
You are using pip version 8.1.2, however version 9.0.3 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
npm WARN deprecated redux-mock-store@1.5.1: breaking changes in minor version
> base62@1.2.7 postinstall /lts-app/node_modules/base62
> node scripts/install-stats.js || exit 0
ERROR: Service 'web' failed to build: The command '/bin/sh -c pip install -r "requirements.txt" && npm install --no-optional && npm run build' returned a non-zero code: 1
Does anyone know what might be causing this discrepancy? The error message from Docker doesn't give many clues. I'd be very grateful for any ideas others can offer!
To solve this problem, I followed @MazelTov's advice and built the containers on my local OSX development machine, then published the images to Docker Cloud, then pulled those images down onto my production server (AWS EC2) and ran them there.
Install Dependencies
I'll try to outline the steps I followed below in case they help others. Please note these steps require you to have docker and docker-compose installed on your development and production machines. I used the GUI installer to install Docker for Mac.
Build Images
After writing a Dockerfile and docker-compose.yml file, you can build your images with docker-compose up --build.
Upload Images to Docker Cloud
Once the images are built, you can upload them to Docker Cloud with the following steps. First, create an account on Docker Cloud.
Then store your Docker Cloud username in an environment variable: your ~/.bash_profile should contain export DOCKER_ID_USER='yaledhlab' (use your own username, though).
Next, log in to your account from your development machine:
docker login
Once you're logged in, list your running Docker containers (the IMAGE column shows the image each one was started from):
docker ps
This will display something like:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
89478c386661 yaledhlab/let-them-speak-web "/bin/sh -c 'npm run…" About an hour ago Up About an hour 0.0.0.0:7082->7082/tcp letthemspeak_web_1
5e9c75d29051 training/webapp:latest "python app.py" 4 hours ago Up 4 hours 0.0.0.0:5000->5000/tcp heuristic_mirzakhani
890f7f1dc777 bitnami/tomcat:latest "/app-entrypoint.sh …" 4 hours ago Up About an hour 0.0.0.0:8080->8080/tcp letthemspeak_tomcat_service_1
09d74e36584d mongo "docker-entrypoint.s…" 4 hours ago Up About an hour 0.0.0.0:27017->27017/tcp letthemspeak_mongo_service_1
For each of the images you want to publish to Docker Cloud, run:
docker tag image_name $DOCKER_ID_USER/my-uploaded-image-name
docker push $DOCKER_ID_USER/my-uploaded-image-name
For example, to upload mywebapp_web to your user's account on Docker cloud, you can run:
docker tag mywebapp_web $DOCKER_ID_USER/web
docker push $DOCKER_ID_USER/web
You can then run open https://cloud.docker.com/swarm/$DOCKER_ID_USER/repository/list to see your uploaded images.
Deploy Images
Finally, you can deploy your images on EC2 with the following steps. First, install Docker and Docker-Compose on the Amazon-flavored EC2 instance:
# install docker
sudo yum install docker -y
# start docker
sudo service docker start
# allow ec2-user to run docker
sudo usermod -a -G docker ec2-user
# get the docker-compose binaries
sudo curl -L https://github.com/docker/compose/releases/download/1.20.1/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
# change the permissions on the source
sudo chmod +x /usr/local/bin/docker-compose
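A quick sanity check that both tools installed correctly:

docker --version
docker-compose --version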
Log out, then log back in to update your user's groups. Then start a screen session with screen. Once the screen starts, you can add a new docker-compose config file that points at your deployed images. For example, I needed to fetch the let-them-speak-web container housed within yaledhlab's Docker Cloud account, so I changed the docker-compose.yml file above to the file below, which I named production.yml:
version: '2'
services:
  tomcat_service:
    image: 'bitnami/tomcat:latest'
    ports:
      - '8080:8080'
    volumes:
      - docker-data-tomcat:/bitnami/tomcat/data/
      - docker-data-blacklab:/lts-app/lts/
  mongo_service:
    image: 'mongo'
    command: mongod
    ports:
      - '27017:27017'
  web:
    image: 'yaledhlab/let-them-speak-web'
    # gain access to linked containers
    links:
      - mongo_service
      - tomcat_service
    # explicitly declare service dependencies
    depends_on:
      - mongo_service
      - tomcat_service
    # set environment variables
    environment:
      PYTHONUNBUFFERED: 'true'
    ports:
      - '7082:7082'
    volumes:
      - docker-data-tomcat:/tomcat_webapps
      - docker-data-blacklab:/lts-app/lts/
volumes:
  docker-data-tomcat:
  docker-data-blacklab:
Then the production compose file can be run with: docker-compose -f production.yml up. Finally, ssh in with another terminal, and detach the screen with screen -D.