I have a custom AMI that has my app directory and a Docker image. I'm setting up an Auto Scaling Group with a Launch Configuration to create a new instance, and I have a User Data script to boot up the application. This is the script:
#!/bin/bash
docker-compose -f /home/ec2-user/app/docker-compose.yaml up -d app
The script runs, but the app doesn't start. I can SSH in and run the app manually, which works. Looking at the cloud-init-output.log file, I see the following:
/var/lib/cloud/instance/scripts/part-001: line 4: docker-compose: command not found
docker-compose is available when I SSH in, as I installed it before creating my custom AMI.
Anything I'm missing?
Regarding your best-practice question: it doesn't really matter, either way would suffice.
HakRou is right, however.
The bootstrap is operating under a different security context / shell environment, so you need to cater for that.
You could also just use the full path to the binary, such as:
/usr/local/bin/docker-compose -f /home/ec2-user/app/docker-compose.yaml up -d app
and see how that goes.
docker-compose might have been available to the user you used to SSH into your instance (like ec2-user, ubuntu or admin), but it might not be available to root, and root is the one used with user-data when Amazon spins up a new instance.
So you might want to add a symlink (soft link) to docker-compose in one of the folders in root's $PATH, /usr/bin for example.
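Something along these lines should do it (assuming docker-compose was installed under /usr/local/bin, which is where a pip or GitHub-release install typically puts it):
# make docker-compose visible to root's PATH
sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
That way the user data script can find it regardless of the PATH root gets at boot.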
In my company, we're using Jira for issue tracking. I need to write an application that integrates with it and synchronizes some data with other services. For testing, I want to have a Docker image of Jira with some initial data.
I'm using the official atlassian/jira-core image. After the initial setup, I saved a state by running docker commit, but unfortunately the new image seems to be empty, and I need to set it up again from scratch.
What should I do to save the initial setup? I want to run tests that will change something within Jira, so reverting it will be necessary to have a reliable test suite. After I spin up a new container, it should already have a few users and a project with some issues. I don't want to create those manually for each new instance. Also, the setup takes a lot of time, which is not acceptable for testing.
To get persistent storage you need to mount /var/atlassian/jira from your host system. That directory is used for storing your configuration etc., so you do not need to commit: whenever you spin up a new container with /var/atlassian/jira mounted, it will have all the configuration that you set previously.
docker run --detach -v /your_host_path/jira:/var/atlassian/jira --publish 8080:8080 cptactionhank/atlassian-jira:latest
For logs you can mount
/opt/atlassian/jira/logs
The above is valid if you are running with the latest tag; otherwise, explore the relevant Dockerfile.
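Putting the two mounts together, a sketch of the run command might look like this (the host paths are just examples):
docker run --detach -v /your_host_path/jira:/var/atlassian/jira -v /your_host_path/jira-logs:/opt/atlassian/jira/logs --publish 8080:8080 cptactionhank/atlassian-jira:latest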
Set volume mount points for installation and home directory. Changes to the home directory needs to be persisted as well as parts of the installation directory due to eg. logs.
VOLUME ["/var/atlassian/jira", "/opt/atlassian/jira/logs"]
atlassian-jira-dockerfile
Look at the entrypoint.sh; the comments from there are:
check if the server.xml file has been changed since the creation of
this Docker image. If the file has been changed the entrypoint script
will not perform modifications to the configuration file.
So I think you need to provide your own server.xml to stop the init process...
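A rough sketch of that idea, assuming the image keeps its Tomcat configuration under /opt/atlassian/jira/conf (check the linked Dockerfile and entrypoint.sh for the exact path in the tag you use):
docker run --detach -v /your_host_path/server.xml:/opt/atlassian/jira/conf/server.xml -v /your_host_path/jira:/var/atlassian/jira --publish 8080:8080 cptactionhank/atlassian-jira:latest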
I am using Docker for macOS / Windows.
I connect to external servers via SSH from a shell in a Docker container.
For now, I generate an SSH key in the Docker shell and manually send the key to the servers.
However, with this method, every time I rebuild the container the SSH key is deleted.
So I want to set up an initial SSH key when I build the images.
I have two ideas:
1. Mount the .ssh folder from my macOS host into the container and persist it.
(Permission control might be difficult and complex....)
2. Write a script in docker-compose.yml or the Dockerfile that generates the SSH key and sends it to the servers.
(Every time I build, a new key is sent...??)
Which is the best practice? Or do you have any idea for setting up the SSH key automatically?
Best practice is typically to not make outbound ssh connections from containers. If what you’re trying to add to your container is a binary or application code, manage your source control setup outside Docker and COPY the data into an image. If it’s data your application needs to run, again fetch it externally and use docker run -v to inject it into the container.
As you say, managing this key material securely, and obeying ssh’s Unix permission requirements, is incredibly tricky. If I really didn’t have a choice but to do this I’d write an ENTRYPOINT script that copied the private key from a bind-mounted volume to my container user’s .ssh directory. But my first choice would be to redesign my application flow to not need this at all.
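A minimal sketch of that entrypoint idea, assuming the key is bind-mounted read-only at /keys/id_rsa and the container runs as a hypothetical appuser account:
#!/bin/sh
# entrypoint.sh: copy the bind-mounted key into the container user's ~/.ssh
mkdir -p /home/appuser/.ssh
cp /keys/id_rsa /home/appuser/.ssh/id_rsa
# ssh refuses keys with loose permissions, so tighten them
chmod 700 /home/appuser/.ssh
chmod 600 /home/appuser/.ssh/id_rsa
chown -R appuser:appuser /home/appuser/.ssh
# hand off to the container's main command
exec "$@"
You'd then bind-mount the key directory at run time, e.g. docker run -v "$PWD/keys:/keys:ro" your-image.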
After reading the "I'm a windows user .." comment, I'm thinking you are solving the wrong problem. You are looking for easy (sane) shell access to your servers. There are two simpler solutions.
1. Windows Subsystem for Linux -- https://en.wikipedia.org/wiki/Windows_Subsystem_for_Linux (not my choice)
2. Cygwin -- http://www.cygwin.com -- for that comfy Linux feel to your cmd :-)
How I install it.
Download and install it. Be careful to pick only the features beyond the base that you need (there is a LOT, and most of it you will not need -- like the compilers and X). Make sure that SSH is selected. Don't worry, you can rerun the setup as many times as you want (I do that occasionally to update what I use).
Start the bash shell (there will be a link after the installation)
a. run 'cygpath -wp $PATH'
b. Look at the results -- there will be a couple of folders at the beginning of the path that look like "C:\cygwin\bin;C:\cygwin\usr\local\bin;..." -- simply all the paths that start with "C:\cygwin", provided you installed Cygwin into the "C:\Cygwin" directory.
c. Add these paths to your system path
d. Start a new instance of CMD. Run 'ls'; it should now work directly under the Windows shell.
Extra credit.
a. Move all the ".xxx" files that were created during the first launch of the shell in your C:\cygwin\home\<username> directory to your Windows home directory (C:\Users\<username>).
b. exit any bash shells you have running
c. Delete the C:\cygwin\home directory
d. Use the Windows mklink utility (from an Administrator shell) to create a link named home under Cygwin pointing to C:\Users: 'mklink /J C:\Cygwin\home C:\Users'
This will make your Windows home directory the same as your Cygwin home.
After that you follow the normal setup for ssh under Cygwin bash and you will be able to generate the keys and distribute them normally to servers.
NOTE: you will have to sever the inheritance of permissions from Windows to your <home>/.ssh folder (in the folder's security settings) and leave just your user ID. Then set permissions on the folder and the various key files underneath it appropriately for SSH using 'chmod'.
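Concretely, from the Cygwin bash prompt that usually boils down to something like this (the server address is just a placeholder):
# generate a key pair under your (now shared) home directory
ssh-keygen -t rsa -b 4096
# tighten permissions so ssh accepts the key files
chmod 700 ~/.ssh
chmod 600 ~/.ssh/id_rsa
# copy the public key to each server you need to reach
ssh-copy-id user@your-server.example.com
If ssh-copy-id isn't installed, you can append ~/.ssh/id_rsa.pub to the server's ~/.ssh/authorized_keys by hand.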
Enjoy -- some days I have to squint to remember I'm on a windows box ...
Summary
So I'm trying to figure out a way to use docker to be able to spin up testing environments for customers rather easily. Basically, I've got a customized piece of software that I want to install into a Windows Docker container (microsoft/windowsservercore), and I need to be able to access the program folder for that software (C:\Program Files\SOFTWARE_NAME) as it has some logs, imports/exports, and other miscellaneous configuration files. The installation part was easy, and I figured it out after a few hours of messing around with Docker and learning how it works, but transferring files in a simple manner is proving far more difficult than I would expect. I'm well aware of the docker cp command, but I'd like something that allows the files to be viewed in a file browser to allow testers to quickly/easily view log/configuration files from the container.
Background (what I've tried):
I've spent 20+ hours monkeying around with running an SSH server in the Docker container, so I could just ssh in and move files back and forth, but I've had no luck. I've spent most of my time trying to configure OpenSSH, and I can get it installed, but there appears to be something wrong with the default configuration file provided with my installation, as I can't get it up and running unless I start it manually via the command line by running sshd -d. Strangely, this runs just fine, but it isn't really a viable solution as it is running in debug mode and shuts down as soon as the connection is closed. I can provide more detail on what I've tested with this, but it seems like it might be a dead end (even though I feel like this should be extremely simple). I've followed every guide I can find (though half are specific to Linux containers) and haven't gotten any of them to work, and half the posts I've found just say "why would you want to use ssh when you can just use the built-in docker commands". I want to use SSH because it's simpler from an end user's perspective, and I'd rather tell a tester to ssh to a particular IP than make them interact with docker via the command line.
EDIT: Using OpenSSH
I start the server using net start sshd, which reports that it started successfully; however, the service stops immediately if I haven't generated at least an RSA or DSA key using:
ssh-keygen.exe -f "C:\\Program Files\\OpenSSH-Win64/./ssh_host_rsa_key" -t rsa
And modifying the permissions using:
icacls "C:\Program Files\OpenSSH-Win64/" /grant sshd:(OI)(CI)F /T
and
icacls "C:\Program Files\OpenSSH-Win64/" /grant ContainerAdministrator:(OI)(CI)F /T
Again, I'm using the default supplied sshd_config file, but I've tried just about every adjustment of those settings I can find and none of them help.
I also attempted to set up volumes to do this, but because the installation of our software is done at image build time in Docker, the folder that I want to map as a volume is already populated with files, which seems to make Docker fail when I try to start the container with the volume attached. This section of the documentation seems to say this should be possible, but I can't get it to work. I keep getting errors saying "the directory is not empty" when I try to start the container.
EDIT: Command used:
docker run -it -d -p 9999:9092 --mount source=my_volume,destination=C:/temp my_container
I'm running this on a Proxmox VM.
At this point, I'm running out of ideas, and something that I feel like should be incredibly simple is taking me far too many hours to figure out. It particularly frustrates me that I see so many blog posts saying "Just use the built-in docker cp command!" when that is honestly a pretty bad solution when you're going to be browsing lots of files and viewing/editing them. I really need a method that allows the files to be viewed in a file browser/Notepad++.
Is there something obvious here that I'm missing? How is this so difficult? Any help is appreciated.
So after a fair bit more troubleshooting, I was unable to get the docker volume to initialize on an already populated folder, even though the documentation suggests it should be possible.
So, I instead decided to try to start the container with the volume linked to an empty folder, and then start the installation script for the program after the container is running, so the folder populates after the volume is already linked. This worked perfectly! There's a bit of weirdness if you leave the files in the volume and then try to restart the container, as it will overwrite most of the files, but things like logs and files not created by the installer will remain, so we'll have to figure out some process for managing that. Still, it works just like I need it to, and I can use Windows file sharing to access that volume folder from anywhere on the network.
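In case it helps anyone doing the same: you can find where the named volume actually lives on the host by looking at the Mountpoint field in the output of:
docker volume inspect my_volume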
Here's how I got it working, it's actually very simple.
So in my Dockerfile, I added a batch script that unzips the installation DVD that is copied into the container and runs the installer after extracting. I then used the CMD option to run this on container start:
Dockerfile
FROM microsoft/windowsservercore
ADD DVD.zip C:\\resources\\DVD.zip
ADD config.bat C:\\resources\\config.bat
CMD "C:\resources\config.bat" && cmd
Then I build the image without anything special:
docker build -t my_container:latest .
And run it with the attachment to the volume:
docker run -it -d -p 9999:9092 --mount source=my_volume,destination="C:/Program Files (x86)/{PROGRAM NAME}" my_container
And that's it. Unfortunately, the container takes a little longer to start (it does build faster though, for what that's worth, as it isn't running the installer in the build), and the program isn't installed/running for another 5 minutes or so after the container does start, but it works!
I can provide more details if anyone needs them, but most of the rest is implementation specific and fairly straightforward.
Try this with Docker Compose. Unfortunately, I cannot test it, as I'm using a Mac and Windows containers are not a "supported platform" (way to go, Windows). See if this works; if not, try a volume line like this instead: - ./my_volume:C:/tmp/
Dockerfile
FROM microsoft/windowsservercore
# need to escape \
WORKDIR C:\\tmp\\
# Add the program from host machine to container
ADD ["<source>", "C:\tmp"]
# Normally used with web servers
# EXPOSE 80
# Running the program
CMD ["C:\tmp\program.exe", "any-parameter"]
Docker Compose
The docker-compose.yml should ideally be in the parent folder.
version: "3"
services:
windows:
build: ./folder-of-Dockerfile
volume:
- type: bind
source: ./my_volume
target: C:/tmp/
ports:
- 9999:9092
Folder structure
|---docker-compose.yml
|
|---folder-of-Dockerfile
|   |
|   |---Dockerfile
Just run docker-compose up to build and start the container. Use -d for detached mode; you should only use it once you know it's working properly.
Useful link: Manage Windows Dockerfile
Can be closed, not sure how to do it.
I am, to be quite frank, lost right now; the user who published his source on GitHub somehow failed to update the installation instructions when he released a new branch. Now, I am not dense, just uneducated when it comes to Docker. I would really appreciate a push in the right direction. If I am missing any information from this post, please allow me to provide it in the comments.
Current Setup
O/S - Debian 8 Minimal (Latest kernel)
Hardware - 1GB VPS (KVM)
Docker - Installed with Compose (# docker info)
I am attempting to set up this (https://github.com/pboehm/ddns/tree/docker_and_rework). First, I should clone this Git repository to my working directory? Let's say /home for example. I will run the following command:
git clone -b docker_and_rework https://github.com/pboehm/ddns.git
Which has successfully cloned the source files into /home/ddns/... (working dir)
Now I believe I am supposed to go ahead and build something, so I go into the following directory:
/home/ddns/docker
Inside is a docker-compose.yml file. I am not sure what this does, but by looking at it, it appears to contain a bunch of instructions which I can only presume have to do with actually deploying or building the whole container/image or magical thing, right? From here I go ahead and run the following:
docker-compose build
As we can see, I believe it's building the container or image or whatever it's called, you get my point (here). After a short while, that completes and we can see the following (docker images running). Which looks correct, as I see all of the dependencies in there, but things like:
go version
It does not show as a command, so I presume I need to run it inside the container maybe? If so, I don't have a clue how. I need to run 'ddns.go', which is inside /home/ddns; the execution command is:
ddns --soa_fqdn=dns.stealthy.pro --domain=d.stealthy.pro backend
I am also curious why the front-end web page is not showing. There should be a page like this:
http://ddns.pboehm.org/
But again, I believe there is more to do; I just do not know what.
docker-compose build will only build the images.
You need to run this. It will build and run them.
docker-compose up -d
The -d option runs containers in the background
To check if it's running after docker-compose up
docker-compose ps
It will show what is running and what ports are exposed from the container.
Usually you can access services from your localhost
If you want to have a look inside the container
docker-compose exec SERVICE /bin/bash
Where SERVICE is the name of the service in docker-compose.yml
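For example, if the compose file defines a service named web (the name here is just illustrative; check the docker-compose.yml for the real one):
docker-compose exec web /bin/bash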
The instructions it runs that you probably care about are in the Dockerfile, which for that repo is in the docker/ddns/ directory. What you're missing is that the Dockerfile creates an image, which is a template from which instances are created. Every time you docker run you'll create a new instance from the image. docker run docker_ddns go version will create a new instance of the image, run go version, output it, and then die. Long-running processes, like the one the docker_ddns-web image probably runs, will keep running until something kills that process. The reason you can't see the web page is probably because you haven't run docker-compose up yet, which will create linked instances of all of the Docker images specified in the docker-compose.yml file. Hope this helps.
I am using the Liberty Docker image to test an alternative to the Liberty boilerplate in Bluemix: https://hub.docker.com/_/websphere-liberty/
Creating the image is a slow process. I wonder if there is a recommended approach to redeploying an application WAR. Currently I have to rebuild from the Dockerfile every time I make a change in my app.
The app is deployed in the /config/apps/ directory along with some shared libraries, a bootstrap.properties file, ...
Have you actually looked at the Usage section on that page?
In point 1) it is described:
A .WAR file can therefore be mounted in the dropins directory of this
server and run. The following example starts a container in the
background running a .WAR file from the host file system
$ docker run -d -p 80:9080 -p 443:9443 \
-v /tmp/DefaultServletEngine/dropins/Sample1.war:/config/dropins/Sample1.war \
websphere-liberty:webProfile7
This should allow you to dynamically update the application without any need to rebuild the image.
There are also other examples there which show how to mount the whole config folder.
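For reference, the "mount the whole config folder" variant from those examples looks roughly like this (the host path is just an example; check the page for the exact form):
$ docker run -d -p 80:9080 -p 443:9443 \
-v /tmp/DefaultServletEngine:/config \
websphere-liberty:webProfile7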