Node Manager is reachable but the managed server is not reachable - docker

I'm facing a problem when configuring a multi-host environment on Docker Machine. I have two machines, Master and Worker. The Worker has Node Manager running and it is reachable from the Master, but the managed server running on the Worker machine is not reachable.
Can anyone help me with this?
Docker file:
# LICENSE CDDL 1.0 + GPL 2.0
#
# Copyright (c) 2014-2015 Oracle and/or its affiliates. All rights reserved.
#
# ORACLE DOCKERFILES PROJECT
# --------------------------
# This Dockerfile extends the Oracle WebLogic image by creating a sample domain.
#
# The 'base-domain' created here has Java EE 7 APIs enabled by default:
# - JAX-RS 2.0 shared lib deployed
# - JPA 2.1,
# - WebSockets and JSON-P
#
# Util scripts are copied into the image enabling users to plug NodeManager
# magically into the AdminServer running on another container as a Machine.
#
# HOW TO BUILD THIS IMAGE
# -----------------------
# Put all downloaded files in the same directory as this Dockerfile
# Run:
# $ sudo docker build -t 1213-domain --build-arg ADMIN_PASSWORD=welcome1 .
#
# Pull base image
# ---------------
FROM oracle/weblogic:12.1.3-developer
# Maintainer
# ----------
MAINTAINER Bruno Borges <bruno.borges@oracle.com>
# WLS Configuration
# -------------------------------
ARG ADMIN_PASSWORD
ENV DOMAIN_NAME="base_domain" \
DOMAIN_HOME="/u01/oracle/user_projects/domains/base_domain" \
JAVA_OPTIONS="${JAVA_OPTIONS} -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005 -server -XX:PermSize=512m -XX:MaxPermSize=512m -Dapplication.properties.location=/u01/oracle/user_projects/domains/base_domain -DFundCenterProperties.Location=/u01/oracle/user_projects/domains/base_domain/FundCenterProperties.xml -Dlog4j.configuration=file:/u01/oracle/user_projects/domains/base_domain/log4j.properties" \
ADMIN_PORT="7007" \
ADMIN_HOST="wlsadmin" \
NM_PORT="5556" \
MS_PORT="7003" \
CONFIG_JVM_ARGS="-Dweblogic.security.SSL.ignoreHostnameVerification=true" \
PATH=$PATH:/u01/oracle/oracle_common/common/bin:/u01/oracle/wlserver/common/bin:/u01/oracle/user_projects/domains/base_domain/bin:/u01/oracle
# Add files required to build this image
USER root
COPY container-scripts/* /u01/oracle/
USER oracle
# Configuration of WLS Domain
WORKDIR /u01/oracle
RUN /u01/oracle/wlst /u01/oracle/create-wls-domain.py && \
mkdir -p /u01/oracle/user_projects/domains/base_domain/servers/AdminServer/security && \
echo "username=weblogic" > /u01/oracle/user_projects/domains/base_domain/servers/AdminServer/security/boot.properties && \
echo "password=$ADMIN_PASSWORD" >> /u01/oracle/user_projects/domains/base_domain/servers/AdminServer/security/boot.properties && \
echo ". /u01/oracle/user_projects/domains/base_domain/bin/setDomainEnv.sh" >> /u01/oracle/.bashrc && \
echo "export PATH=$PATH:/u01/oracle/wlserver/common/bin:/u01/oracle/user_projects/domains/base_domain/bin" >> /u01/oracle/.bashrc && \
cp /u01/oracle/commEnv.sh /u01/oracle/wlserver/common/bin/commEnv.sh && \
rm /u01/oracle/create-wls-domain.py /u01/oracle/jaxrs2-template.jar
COPY log4j.properties /u01/oracle/user_projects/domains/base_domain/
EXPOSE $NM_PORT $ADMIN_PORT $MS_PORT 5005
# Expose Node Manager default port, and also default http/https ports for admin console
WORKDIR $DOMAIN_HOME
# Define default command to start WebLogic.
CMD ["startWebLogic.sh"]
Docker run command:
docker run -d --name webapp -p 7007:7007 -p 7003:7003 -p 5556:5556 -v /c/webapp:/webapp 1213-domain
By the way, I'm using the WebLogic 12.1.3 image for this deployment.
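For what it's worth, this is roughly how the symptom can be checked from the Master host (the worker IP is a placeholder; the ports are the NM_PORT and MS_PORT values from the Dockerfile above):
# run on the Master host
nc -zv <worker-ip> 5556   # Node Manager port: connects
nc -zv <worker-ip> 7003   # managed server port: connection refused / times out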

Related

Share using NFS a volume mounted with gcsfuse in Kubernetes

I've been trying to mount a Google Cloud Storage bucket inside a pod in our (on-prem) cluster in order to share that mounted volume over NFS with other pods and PersistentVolumes.
Here are the configurations:
#!/bin/bash
_start_nfs() {
exportfs -a
rpcbind
rpc.statd
rpc.nfsd
rpc.mountd
GOOGLE_APPLICATION_CREDENTIALS=/accounts/key.json gcsfuse -o allow_other --dir-mode 777 --uid 1500 --gid 1500 ${BUCKET} /exports
}
_nfs_server_mounts() {
IFS=':' read -r -a MNT_SERVER_ARRAY <<< "$NFS_SERVER_DIRS"
for server_mnt in "${MNT_SERVER_ARRAY[@]}"; do
if [[ ! -d $server_mnt ]]; then
mkdir -p $server_mnt
fi
chmod -R 777 $server_mnt
cat >> /etc/exports <<EOF
${server_mnt} *(rw,sync,no_subtree_check,all_squash,anonuid=1500,anongid=1500,fsid=$(( ( RANDOM % 100 ) + 200 )))
EOF
done
cat /etc/exports
}
_sysconfig_nfs() {
cat > /etc/sysconfig/nfs <<EOF
#
#
# To set lockd kernel module parameters please see
# /etc/modprobe.d/lockd.conf
#
# Optional arguments passed to rpc.nfsd. See rpc.nfsd(8)
RPCNFSDARGS=""
# Number of nfs server processes to be started.
# The default is 8.
RPCNFSDCOUNT=${RPCNFSDCOUNT}
#
# Set V4 grace period in seconds
#NFSD_V4_GRACE=90
#
# Set V4 lease period in seconds
#NFSD_V4_LEASE=90
#
# Optional arguments passed to rpc.mountd. See rpc.mountd(8)
RPCMOUNTDOPTS=""
# Port rpc.mountd should listen on.
#MOUNTD_PORT=892
#
# Optional arguments passed to rpc.statd. See rpc.statd(8)
STATDARG=""
# Port rpc.statd should listen on.
#STATD_PORT=662
# Outgoing port statd should used. The default is port
# is random
#STATD_OUTGOING_PORT=2020
# Specify callout program
#STATD_HA_CALLOUT="/usr/local/bin/foo"
#
#
# Optional arguments passed to sm-notify. See sm-notify(8)
SMNOTIFYARGS=""
#
# Optional arguments passed to rpc.idmapd. See rpc.idmapd(8)
RPCIDMAPDARGS=""
#
# Optional arguments passed to rpc.gssd. See rpc.gssd(8)
# Note: The rpc-gssd service will not start unless the
# file /etc/krb5.keytab exists. If an alternate
# keytab is needed, that separate keytab file
# location may be defined in the rpc-gssd.service's
# systemd unit file under the ConditionPathExists
# parameter
RPCGSSDARGS=""
#
# Enable usage of gssproxy. See gssproxy-mech(8).
GSS_USE_PROXY="yes"
#
# Optional arguments passed to blkmapd. See blkmapd(8)
BLKMAPDARGS=""
EOF
}
### main ###
_sysconfig_nfs
_nfs_server_mounts
_start_nfs
rpcinfo -p
showmount -e
tail -f /dev/null
The Dockerfile:
FROM centos:7
RUN yum -y install /usr/bin/ps nfs-utils nfs4-acl-tools curl portmap fuse nfs-utils && yum clean all
RUN mkdir -p /exports
ENV RPCNFSDCOUNT=8 \
NFS_SERVER_DIRS='/usr'
RUN chown nfsnobody:nfsnobody /exports
RUN chmod 777 /exports
ADD setup.sh /usr/local/bin/run_nfs.sh
RUN chmod +x /usr/local/bin/run_nfs.sh
RUN useradd -u 1500 orenes
ADD gcsfuse-0.41.6-1.x86_64.rpm /data/gcsfuse-0.41.6-1.x86_64.rpm
RUN yum install -y /data/gcsfuse-0.41.6-1.x86_64.rpm
# Expose volume
VOLUME ["/exports"]
# expose mountd 20048/tcp and nfsd 2049/tcp and rpcbind 111/tcp
EXPOSE 2049/tcp 20048/tcp 111/tcp 111/udp
ENTRYPOINT ["/usr/local/bin/run_nfs.sh"]
Let's assume that there's a PersistentVolume mounting a Service pointing to the NFS server I described before.
So far I have gotten "access denied", "unable to receive", and similar errors.
Should I move to SMB/CIFS sharing? Is there any solution to this? Thanks in advance.
It seems that FUSE and NFS don't get along well. I ended up using SMB and it works flawlessly, but you must use the Service ClusterIP instead of Kubernetes's DNS.
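For example, the address to put in the volume definition can be looked up like this (service name and namespace are placeholders):
# use this IP in the PersistentVolume/volume spec instead of the service DNS name
kubectl -n default get svc nfs-server -o jsonpath='{.spec.clusterIP}'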
The comment from @AviD about using a CSI driver is very helpful if you need to get this working quickly.
Again thanks to all.

Use .env file variables during Docker build

I am trying to use the sed command to replace variables during docker build. The first variable I am attempting to replace is $DATABASE_HOST, whose value comes from my .env file. I have read online that environment variables coming from the .env file are only available at run time, not at build time. Because of this, my sed command has no effect.
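(For reference, one commonly used alternative is to declare the variable as a build argument and pass the value from .env explicitly at build time; a minimal sketch, assuming the Dockerfile also gains an ARG DATABASE_HOST line and the image is tagged sphinx-se:)
# export the variables defined in .env into the current shell, then build
set -a; . ./.env; set +a
docker build --build-arg DATABASE_HOST="$DATABASE_HOST" -t sphinx-se .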
Dockerfile:
# Dockerfile for Sphinx SE
# https://hub.docker.com/_/alpine/
FROM alpine:3.12
# https://sphinxsearch.com/blog/
ENV SPHINX_VERSION 3.4.1-efbcc65
# Install dependencies
RUN apk add --no-cache mariadb-connector-c-dev \
postgresql-dev \
wget \
sed
# set up and expose directories
RUN mkdir -pv /opt/sphinx/log /opt/sphinx/index
VOLUME /opt/sphinx/index
# http://sphinxsearch.com/downloads/sphinx-3.3.1-b72d67b-linux-amd64-musl.tar.gz
RUN wget http://sphinxsearch.com/files/sphinx-${SPHINX_VERSION}-linux-amd64-musl.tar.gz -O /tmp/sphinxsearch.tar.gz \
&& cd /opt/sphinx && tar -xf /tmp/sphinxsearch.tar.gz \
&& rm /tmp/sphinxsearch.tar.gz
# point to sphinx binaries
ENV PATH "${PATH}:/opt/sphinx/sphinx-3.4.1/bin"
RUN indexer -v
# redirect logs to stdout
RUN ln -sv /dev/stdout /opt/sphinx/log/query.log \
&& ln -sv /dev/stdout /opt/sphinx/log/searchd.log
# expose TCP port
EXPOSE 36307
EXPOSE 9306
# Copy base sphinx.conf file to container
VOLUME /opt/sphinx/conf
COPY ./sphinx.conf /opt/sphinx/conf/sphinx.conf
# Copy all docker sphinx.conf files
COPY ./configs/web-finder/docker/ /opt/sphinx/conf/
# look for and replace
RUN sed -i "s+DATABASE_HOST+${DATABASE_HOST}+g" /opt/sphinx/conf/sphinx.conf
# Concat the sphinx.conf files for all apps
# RUN cat /tmp/myconfig.append >> /etc/portage/make.conf && rm -f /tmp/myconfig.append
CMD indexer --all --config /opt/sphinx/conf/sphinx.conf \
&& searchd --nodetach --config /opt/sphinx/conf/sphinx.conf
.env file:
DATABASE_HOST=someport
DATABASE_USERNAME=someusername
DATABASE_PASSWORD=somepassword
DATABASE_SCHEMA=someschema
DATABASE_PORT=3306
SPHINX_PORT=36307
sphinx.conf:
searchd
{
listen = 127.0.0.1:$SPHINX_PORT
log = /opt/sphinx/searchd.log
query_log = /opt/sphinx/query.log
read_timeout = 5
max_children = 30
pid_file = /opt/sphinx/searchd.pid
seamless_rotate = 1
preopen_indexes = 1
unlink_old = 1
binlog_path = /opt/sphinx/
}
With Sphinx, the 'sphinx.conf' file can be 'executable', i.e. it can actually be a shell script (or PHP, Perl, etc.!).
Assuming your .env file produces real (runtime!) environment variables within the container (I'm not overly familiar with Docker), your sphinx.conf file could be ...
#!/bin/sh
set -eu
cat <<EOF
searchd
{
listen = 127.0.0.1:$SPHINX_PORT
log = /opt/sphinx/searchd.log
query_log = /opt/sphinx/query.log
read_timeout = 5
max_children = 30
pid_file = /opt/sphinx/searchd.pid
seamless_rotate = 1
preopen_indexes = 1
unlink_old = 1
binlog_path = /opt/sphinx/
}
EOF
And because it is a shell script, the variables will automatically be expanded :)
It needs to be executable too!
RUN chmod a+x /opt/sphinx/conf/sphinx.conf
Then you don't need the sed command in the Dockerfile at all!
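At run time this only works if SPHINX_PORT really is an environment variable inside the container, e.g. by passing the .env file when starting it (a sketch; the image name sphinx-se is assumed):
# values from .env become real environment variables in the container,
# so the executable sphinx.conf can expand $SPHINX_PORT when searchd reads it
docker run --env-file .env -p 36307:36307 -p 9306:9306 sphinx-se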

Passing Google service account credentials to Docker

My use case is a little different than others with this problem, so a little up-front description:
I am working on Google Cloud and have a "dockerized" Django app. Part of the app depends on using gsutil for moving files to/from a Google Storage bucket. For various reasons, we do not want to use Google Container Engine to manage our containers. Rather, we would like to scale horizontally by starting additional Google Compute VMs which will, in turn, run this Docker container. Similar to https://cloud.google.com/python/tutorials/bookshelf-on-compute-engine except we will use a container rather than pulling a git repository.
The VMs will be built from a basic Debian image, and the startup and installation of dependencies (e.g. Docker itself) will be orchestrated with a startup script (e.g. gcloud compute instances create some-instance --metadata-from-file startup-script=/path/to/startup.sh).
If I manually create a VM, elevate with sudo -s, run gsutil config -f (which creates a credential file at /root/.boto) and then run my docker container (see Dockerfile below) with
docker run -v /root/.boto:/root/.boto username/gs gsutil ls gs://my-test-bucket
then it works. However, that requires my interaction to create the boto file.
My question is: How can I pass the default service credentials to the Docker container that will be starting in that new VM?
gsutil works out of the box even on a "fresh" Debian VM, since it uses the default Compute Engine credentials that all VMs are loaded with. Is there a way to use those credentials and pass them to the Docker container? After the first call to gsutil on a fresh VM, I've noticed that it creates ~/.gsutil and ~/.config folders. Unfortunately, mounting both of those in Docker with
docker run -v ~/.config/:/root/.config -v ~/.gsutil:/root/.gsutil username/gs gsutil ls gs://my-test-bucket
does not fix my problem. It tells me:
ServiceException: 401 Anonymous users does not have storage.objects.list access to bucket my-test-bucket.
A minimal gsutil Dockerfile (not mine):
FROM alpine
#install deps and install gsutil
RUN apk add --update \
python \
py-pip \
py-cffi \
py-cryptography \
&& pip install --upgrade pip \
&& apk add --virtual build-deps \
gcc \
libffi-dev \
python-dev \
linux-headers \
musl-dev \
openssl-dev \
&& pip install gsutil \
&& apk del build-deps \
&& rm -rf /var/cache/apk/*
CMD ["gsutil"]
Addition: a workaround:
I have since solved my issue, but it is quite roundabout so I'm still interested in a simpler way, if possible. All the details are below:
First, a description:
I first created a service account in the web console. I then saved the JSON keyfile (call it credentials.json) into a storage bucket. In the startup script for the GCE VM, I copy that keyfile to the local filesystem (gsutil cp gs://<bucket>/credentials.json /gs_credentials/). I then start my Docker container, mounting that local directory. As the Docker container starts, it runs a script that activates credentials.json (which creates a .boto file inside the container) and exports BOTO_PATH, and finally I can perform gsutil operations in the Docker container.
Here are the files for a small working example:
Dockerfile:
FROM alpine
#install deps and install gsutil
RUN apk add --update \
python \
py-pip \
py-cffi \
py-cryptography \
bash \
curl \
&& pip install --upgrade pip \
&& apk add --virtual build-deps \
gcc \
libffi-dev \
python-dev \
linux-headers \
musl-dev \
openssl-dev \
&& pip install gsutil \
&& apk del build-deps \
&& rm -rf /var/cache/apk/*
# install the gcloud SDK-
# this allows us to use gcloud auth inside the container
RUN curl -sSL https://sdk.cloud.google.com > /tmp/gcl \
&& bash /tmp/gcl --install-dir=~/gcloud --disable-prompts
RUN mkdir /startup
ADD gsutil_docker_startup.sh /startup/gsutil_docker_startup.sh
ADD get_account_name.py /startup/get_account_name.py
ENTRYPOINT ["/startup/gsutil_docker_startup.sh"]
gsutil_docker_startup.sh: Takes a single argument, which is the path to a JSON-format service account credentials file. The file exists because the directory on the host machine was mounted in the container.
#!/bin/bash
CRED_FILE_PATH=$1
mkdir /results
# List the bucket, see that it gives a "ServiceException:401"
gsutil ls gs://<input bucket> > /results/before.txt
# authenticate the credentials- this creates a .boto file:
/root/gcloud/google-cloud-sdk/bin/gcloud auth activate-service-account --key-file=$CRED_FILE_PATH
# need to extract the service account which is like:
# <service acct ID>@<google project>.iam.gserviceaccount.com
SERVICE_ACCOUNT=$(python /startup/get_account_name.py $CRED_FILE_PATH)
# with that service account, we can locate the .boto file:
export BOTO_PATH=/root/.config/gcloud/legacy_credentials/$SERVICE_ACCOUNT/.boto
# List the bucket and copy the file to an output bucket for good measure
gsutil ls gs://<input bucket> > /results/after.txt
gsutil cp /results/*.txt gs://<output bucket>/
get_account_name.py:
import json
import sys
j = json.load(open(sys.argv[1]))
sys.stdout.write(j['client_email'])
Then, the GCE startup script (executed automatically as the VM is started) is:
#!/bin/bash
# <SNIP>
# Install docker, other dependencies
# </SNIP>
# pull docker image
docker pull userName/containerName
# get credential file:
mkdir /cloud_credentials
gsutil cp gs://<bucket>/credentials.json /cloud_credentials/creds.json
# run container
# mount the host machine directory where the credentials were saved.
# Note that the container expects a single arg,
# which is the path to the credential file IN THE CONTAINER
docker run -v /cloud_credentials:/cloud_credentials \
userName/containerName /cloud_credentials/creds.json
You can assign a specific service account to your instance and then use Application Default Credentials in your code. Please verify these points before testing:
Set the instance access scopes to "Allow full access to all Cloud APIs", as scopes are not really a security feature.
Set the right role on your service account: "Storage Object Viewer".
Authentication tokens are retrieved automatically by Application Default Credentials via the Google metadata server, which is available from your instance and from your Docker containers as well. There is no need to manage any credentials.
def implicit():
from google.cloud import storage
# If you don't specify credentials when constructing the client, the
# client library will look for credentials in the environment.
storage_client = storage.Client()
# Make an authenticated API request
buckets = list(storage_client.list_buckets())
print(buckets)
I also quickly tested this with Docker and it worked perfectly:
yann@test:~$ gsutil cat gs://my-test-bucket/hw.txt
Hello World
yann@test:~$ docker run --rm google/cloud-sdk gsutil cat gs://my-test-bucket/hw.txt
Hello World
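For completeness, attaching a specific service account and broad scopes when the VM is created would look roughly like this (service account, project and script path are placeholders):
gcloud compute instances create some-instance \
    --service-account=my-sa@my-project.iam.gserviceaccount.com \
    --scopes=https://www.googleapis.com/auth/cloud-platform \
    --metadata-from-file startup-script=/path/to/startup.sh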

Using ccache in automated builds on Docker cloud

I am using automated builds on Docker cloud to compile a C++ app and provide it in an image.
Compilation is quite long (in the 2-3 hour range) and commits on GitHub are frequent (~10 to 30 per day).
Is there a way to keep the building cache (using ccache) somehow?
As far as I understand it, Docker layer caching is useless here, since the compilation layer that produces the ccache data will not be reused once the source code changes.
Or can we tweak something to bring that data back into the first layer?
Any other solution? Pushing it somewhere?
Here is the Dockerfile:
# CACHE_TAG is provided by Docker cloud
# see https://docs.docker.com/docker-cloud/builds/advanced/
# using ARG in FROM requires min v17.05.0-ce
ARG CACHE_TAG=latest
FROM qgis/qgis3-build-deps:${CACHE_TAG}
MAINTAINER Denis Rouzaud <denis.rouzaud@gmail.com>
ENV CC=/usr/lib/ccache/clang
ENV CXX=/usr/lib/ccache/clang++
ENV QT_SELECT=5
COPY . /usr/src/QGIS
WORKDIR /usr/src/QGIS/build
RUN cmake \
-GNinja \
-DCMAKE_INSTALL_PREFIX=/usr \
-DBINDINGS_GLOBAL_INSTALL=ON \
-DWITH_STAGED_PLUGINS=ON \
-DWITH_GRASS=ON \
-DSUPPRESS_QT_WARNINGS=ON \
-DENABLE_TESTS=OFF \
-DWITH_QSPATIALITE=ON \
-DWITH_QWTPOLAR=OFF \
-DWITH_APIDOC=OFF \
-DWITH_ASTYLE=OFF \
-DWITH_DESKTOP=ON \
-DWITH_BINDINGS=ON \
-DDISABLE_DEPRECATED=ON \
.. \
&& ninja install \
&& rm -rf /usr/src/QGIS
WORKDIR /
You should try saving and restoring your cache data to/from a third-party service:
- an online object storage like Amazon S3
- a simple FTP server
- an Internet-reachable machine with ssh, so you can scp the data
I'm assuming that your cache data is stored inside the ~/.ccache directory.
Using Docker multistage build
For some time now, Docker has supported multi-stage builds, and you can try using them to implement the solution with a single Dockerfile:
Warning: I've not tested it
# STAGE 1 - YOUR ORIGINAL DOCKER FILE CUSTOMIZED
# CACHE_TAG is provided by Docker cloud
# see https://docs.docker.com/docker-cloud/builds/advanced/
# using ARG in FROM requires min v17.05.0-ce
ARG CACHE_TAG=latest
FROM qgis/qgis3-build-deps:${CACHE_TAG} as builder
MAINTAINER Denis Rouzaud <denis.rouzaud@gmail.com>
ENV CC=/usr/lib/ccache/clang
ENV CXX=/usr/lib/ccache/clang++
ENV QT_SELECT=5
COPY . /usr/src/QGIS
WORKDIR /usr/src/QGIS/build
# restore the cache (the archive is assumed to contain a .ccache directory)
RUN curl -o ccache.tar.bz2 http://my-object-storage/ccache.tar.bz2 \
 && tar -xjvf ccache.tar.bz2 -C ~/
RUN cmake \
-GNinja \
-DCMAKE_INSTALL_PREFIX=/usr \
-DBINDINGS_GLOBAL_INSTALL=ON \
-DWITH_STAGED_PLUGINS=ON \
-DWITH_GRASS=ON \
-DSUPPRESS_QT_WARNINGS=ON \
-DENABLE_TESTS=OFF \
-DWITH_QSPATIALITE=ON \
-DWITH_QWTPOLAR=OFF \
-DWITH_APIDOC=OFF \
-DWITH_ASTYLE=OFF \
-DWITH_DESKTOP=ON \
-DWITH_BINDINGS=ON \
-DDISABLE_DEPRECATED=ON \
.. \
&& ninja install
# save the current cache online
WORKDIR /root
RUN tar -cvjSf ccache.tar.bz2 .ccache
RUN curl -T ccache.tar.bz2 -X PUT http://my-object-storage/ccache.tar.bz2
# STAGE 2
FROM alpine:latest
# YOUR CUSTOM LOGIC TO CREATE THE FINAL IMAGE WITH ONLY REQUIRED BINARIES
# USE THE FROM IMAGE YOU NEED, this is only an example
# E.g.:
# COPY --from=builder /usr/src/QGIS/build/YOUR_EXECUTABLE /usr/bin
# ...
In stage 2 you build the final image that will be pushed to your repository.
Using Docker Cloud hooks
Another, but less clear, approach could be using a Docker Cloud pre_build hook file to download cache data:
#!/bin/bash
echo "=> Downloading build cache data"
curl -o ccache.tar.bz2 http://my-object-storage/ccache.tar.bz2 # e.g. Amazon S3 like service
cd /
tar -xjvf ccache.tar.bz2
Obviously you can use dedicated Docker images to run curl or tar in this script, mounting the local directory as a volume.
Then copy the extracted .ccache folder into your container during the build, using a COPY command before your cmake call:
WORKDIR /usr/src/QGIS/build
COPY /.ccache /root/.ccache
RUN cmake ...
To make this work, you also need a way to upload your cache data after the build, which you could do easily with a post_build hook file:
#!/bin/bash
echo "=> Uploading build cache data"
tar -cvjSf ccache.tar.bz2 ~/.ccache
curl -T ccache.tar.bz2 -X PUT http://my-object-storage/ccache.tar.bz2
But your compilation data aren't available from the outside, because they live inside the container. So you should upload the cache after the cmake command inside your main Dockerfile:
RUN cmake ...
&& ninja ...
&& tar ...
&& curl ...
&& rm ...
If curl or tar aren't available, just add them to your container using the package manager (qgis/qgis3-build-deps is based on Ubuntu 16.04, so they should be available).
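As a quick sanity check that the restored cache is actually being hit, you can print ccache statistics right after the compile step (an optional addition, not part of the Dockerfiles above):
# e.g. appended to the RUN that invokes ninja
ccache -s    # prints cache hit/miss counters for this build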

Vagrant synced_folder not working with docker provider build_dir (Windows)

I am not succeeding in providing a Dockerfile via Vagrant on Windows. If I use an image (e.g. d.image = "phusion/baseimage") instead of build_dir, everything is fine, but when building from a Dockerfile (as shown in the Vagrantfile below) I get the following error (of course I have a Dockerfile in infrastructure/ssh-docker!):
PS C:\privat\cloud-backup\cloud-backup-for-podio> vagrant up
Bringing machine 'app' up with 'docker' provider...
==> app: Docker host is required. One will be created if necessary...
app: Docker host VM is already ready.
==> app: Syncing folders to the host VM...
app: Preparing SMB shared folders...
app: Mounting SMB shared folders...
C:/HashiCorp/Vagrant/embedded/gems/gems/vagrant-1.7.4/plugins/guests/linux/cap/choose_addressable_ip_addr.rb:7:in `block
in choose_addressable_ip_addr': undefined method `each' for nil:NilClass (NoMethodError)
from C:/HashiCorp/Vagrant/embedded/gems/gems/vagrant-1.7.4/plugins/guests/linux/cap/choose_addressable_ip_addr.rb:6:in `tap'
from C:/HashiCorp/Vagrant/embedded/gems/gems/vagrant-1.7.4/plugins/guests/linux/cap/choose_addressable_ip_addr.rb:6:in `choose_addressable_ip_addr'
from C:/HashiCorp/Vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/capability_host.rb:111:in `call'
from C:/HashiCorp/Vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/capability_host.rb:111:in `capability'
...
Vagrantfile:
Vagrant.configure("2") do |config|
config.ssh.username = 'vagrant'
config.ssh.password = 'tcuser'
config.ssh.port = 22
config.vm.define "app" do |app|
app.vm.synced_folder ".", "/vagrant", type: "smb", smb_host: "MY_IP", smb_username: "WINUSER@DOMAIN", smb_password: "WINPASSWORD"
app.vm.provider "docker" do |d|
#d.image = "phusion/baseimage"
d.build_dir = "infrastructure/ssh-docker"
d.name = "app"
d.remains_running = true
end
end
end
Dockerfile:
FROM phusion/baseimage
ENV HOME /root
# enable ssh
RUN rm -f /etc/service/sshd/down
# Regenerate SSH host keys. baseimage-docker does not contain any, so you
# have to do that yourself. You may also comment out this instruction; the
# init system will auto-generate one during boot.
RUN /etc/my_init.d/00_regen_ssh_host_keys.sh
# Use baseimage-docker's init system.
CMD ["/sbin/my_init"]
RUN apt-get update
RUN apt-get install -y openssh-server wget lsb-release sudo
EXPOSE 22
RUN mkdir -p /var/run/sshd
RUN chmod 0755 /var/run/sshd
# Create and configure vagrant user
RUN useradd --create-home -s /bin/bash vagrant
WORKDIR /home/vagrant
# Configure SSH access
RUN mkdir -p /home/vagrant/.ssh
RUN echo "ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA6NF8iallvQVp22WDkTkyrtvp9eWW6A8YVr+kz4TjGYe7gHzIw+niNltGEFHzD8+v1I2YJ6oXevct1YeS0o9HZyN1Q9qgCgzUFtdOKLv6IedplqoPkcmF0aYet2PkEDo3MlTBckFXPITAMzF8dJSIFo9D8HfdOV0IAdx4O7PtixWKn5y2hMNG0zQPyUecp4pzC6kivAIhyfHilFR61RGL+GPXQ2MWZWFYbAGjyiYJnAmCP3NOTd0jMZEnDkbUvxhMmBYSdETk1rRgm+R4LOzFUGaHqHDLKLX+FIPKcF96hrucXzcWyLbIbEgE98OHlnVYCzRdK8jlqm8tehUc9c9WhQ== vagrant insecure public key" > /home/vagrant/.ssh/authorized_keys
RUN chown -R vagrant: /home/vagrant/.ssh
RUN echo -n 'vagrant:vagrant' | chpasswd
# Enable passwordless sudo for the "vagrant" user
RUN mkdir -p /etc/sudoers.d
RUN install -b -m 0440 /dev/null /etc/sudoers.d/vagrant
RUN echo 'vagrant ALL=NOPASSWD: ALL' >> /etc/sudoers.d/vagrant
# Clean up APT when done.
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
Vagrant version: 1.7.4
Does anyone have any idea?
(What I need to do is run a Docker image built from a Dockerfile and have a shared/synced directory.)
You should provide your IP for the SMB service:
config.vm.synced_folder ".", "/home/vagrant/github-api", type: 'smb', smb_host: "192.168.1.100"
