I'm learning to work with Docker/docker-compose and am having a few issues.
I can't really understand how it works. For example, if I run a php7.2 image, what operating system is it running on?
I'm currently thinking about using Docker for my new project, but I can't find a way to create the Docker configs.
What I want is this: use a base image/service with an operating system (Ubuntu), then extend that image/service and add services like Java and MySQL. I also need to check out two repositories, which should later run on that Ubuntu service with Java and MySQL. How can I do that? I've tried googling for examples, but I haven't been able to find a good one.
I would really appreciate any help with that. Thanks in advance.
As a starting example, you can read http://geekyplatypus.com/dockerise-your-php-application-with-nginx-and-php7-fpm/
You don't need to extend your image with other services, because the paradigm here is "one container = one service" (see the sketch below). Here are the best practices.
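To make that concrete, here is a minimal docker-compose sketch for the setup described in the question. The service names, image tags, and paths are illustrative assumptions, not something from the question; the point is one container per service instead of one hand-extended Ubuntu image.

# docker-compose.yml (hypothetical sketch)
version: "3"
services:
  app:
    image: openjdk:8-jdk              # a Java runtime service instead of a bare Ubuntu image
    working_dir: /app
    volumes:
      - ./repo1:/app/repo1            # the two repositories, checked out on the host beforehand
      - ./repo2:/app/repo2
    command: java -jar repo1/app.jar  # placeholder entry point
    depends_on:
      - db
  db:
    image: mysql:5.7                  # MySQL gets its own container
    environment:
      MYSQL_ROOT_PASSWORD: example    # demo value only

Start everything with docker-compose up. Java and MySQL are separate services wired together by compose, not packages layered onto a single OS image.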
Related
I want to set up a groupware server for my company. At the moment I use Zimbra, but I'm looking for an alternative. The requirement is that it must run in a Docker container, and ideally it would be a groupware solution with official Docker support.
Does anyone have an idea for a suitable product? One that is available as a Docker image and has features comparable to Zimbra.
Another good solution would be a server that is easy to install via script and easy to configure.
I have WebSphere Application Server 8.5.5.14 hosting my ERP. I want to dockerize the application and deploy it into a Kubernetes cluster. Can anyone provide me with information on how to create an image out of my existing WAS 8.5.5.14?
In theory you could do this by creating a tarball of the filesystem and importing it into Docker to make an image, via something like:
cat WAS.tar | docker import - appImage
but there are going to be a number of issues you'll need to avoid. For example, if you have resources (JDBC drivers, resource adapters, etc.), the tarball will need to include all of them. You'll also need to expose all of the ports required by your application and its administration. A better way, and the best practice, is to start with an IBM-supported image of traditional WAS and build your system on top of it.
There are detailed instructions to do this at https://github.com/WASdev/ci.docker.websphere-traditional#docker-hub-image
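As a rough illustration of that approach, a Dockerfile building on the supported image might look like the sketch below. The image name comes from the linked repository, but the tag, file names, and paths are my assumptions, so treat this as a sketch rather than tested instructions.

# Dockerfile: build on the IBM-supported traditional WAS image instead of importing a tarball.
# The image name is taken from the linked repository; verify the current tag on Docker Hub.
FROM ibmcom/websphere-traditional:latest

# Copy the application plus the resources (JDBC drivers, resource adapters, etc.)
# that the tarball approach would otherwise have to capture by hand.
COPY --chown=was:root myapp.ear /work/myapp.ear

# Hypothetical wsadmin Jython script that installs and configures the application.
COPY --chown=was:root install_app.py /work/config/install_app.py

The linked GitHub page documents the exact paths and how wsadmin scripts are picked up, so follow it for the real layout.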
F Rowe's answer is good; if you follow their advice of using the official images, you will be using WebSphere v9.0 in the container. You can use this tool, which can help you figure out whether there are any changes you need to make to your application in order to get it working in the container. It also generates some of the wsadmin scripts for configuring the server in the image.
I have a PROD environment running on a RHEL 7 server. I want to use Docker for deployment, and I want to package all the software and apps in a Docker image without a base OS, because I don't want to add an additional layer on top of RHEL. Also, I could not find an official base image for RHEL. Is that possible?
I have seen some old posts mentioning "FROM scratch", but it looks like that does not work in the latest version of Docker (1.12.5).
If this is impossible, does anyone have suggestions for alternatives?
Docker is designed to also abstract away OS dependencies; that is what it was built for. Besides encapsulating the runtime, memory, and so on, it is essentially a far better variant of chroot (chroot on steroids, so to speak).
It seems like you want neither the runtime separation nor the OS-layer separation (dependencies), so Docker makes very little sense for you.
Deploying with Docker is not "simple", or simpler than using other tools. You can use Capistrano, or perhaps something like https://www.habitat.sh/, which does not require software to be bundled in Docker containers to be deployable; it also works on bare metal and uses its own packaging format. That gives you a state-of-the-art deployment solution, and with Habitat you can later even upgrade to using Docker containers.
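For completeness on the "FROM scratch" point from the question: scratch is still a valid, completely empty base image in current Docker versions, but it only works for fully self-contained payloads such as a statically linked binary. A minimal sketch, assuming you have such a binary called myapp:

# Dockerfile: an image with no OS layer at all.
# Requires a fully statically linked binary (no libc or other shared-library
# dependencies); dynamically linked software will not run on scratch.
FROM scratch
COPY myapp /myapp
ENTRYPOINT ["/myapp"]

This does not remove the need for a kernel (the host's RHEL kernel is still used); it only removes the userland layer.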
When trying to move a web container (Tomcat) to the latest technologies for better growth and support, I came across this blog. This part seems ideal for my needs:
... we are also incorporating Kubernetes into Mesos to manage the deployment of Docker workloads. Together, we provide customers with a commercial-grade, highly-available and production-ready compute fabric.
Now, how do I set up a local test environment to try this out? All these technologies seem interchangeable! I can run Docker on Mesos, Mesos on Docker, and so on. Prepackaged instances allow me to run on other clouds. Other videos also make this seem great! Running in the cloud is not a viable (allowed) option for me. Unfortunately, I cannot find instructions on how to set up the configuration described/marketed/advertised.
Given that I am new to these technologies, and know there will be a learning curve, is there a way to get started on such a "simple task": running a Tomcat container on a Docker machine that is running Mesos/Kubernetes? That is, without spending days trying to learn and figure out each individual part! (The referenced blog post includes a picture of the architecture.)
Assuming that I "only" know how to create Docker containers (for, say, centos-7): what commands, in what order (i.e. the secret 'code'), do I need to use to configure a small (2 or 3 node) local environment to try out running Tomcat?
Although I searched quite a bit, apparently not enough! Someone pointed me to this:
https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/mesos-docker.md
which is pretty close to exactly what I was looking for.
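Once a cluster from that guide (or any Kubernetes setup) is running, the Tomcat part itself is small. Here is a minimal sketch of a manifest for it, using current Kubernetes API versions rather than the older ones in that guide; the names, replica count, and image tag are illustrative assumptions.

# tomcat.yaml: run the official Tomcat image on an existing cluster
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat
spec:
  replicas: 2
  selector:
    matchLabels:
      app: tomcat
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
        - name: tomcat
          image: tomcat:8          # official Tomcat image from Docker Hub
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service                      # exposes the pods inside the cluster
metadata:
  name: tomcat
spec:
  selector:
    app: tomcat
  ports:
    - port: 8080

Apply it with: kubectl apply -f tomcat.yaml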
I created a Docker image with pre-installed packages (Apache, MySQL, memcached, Solr, etc.). Now I want to run a command in a container made from this image, and this command relies on all my packages. I want to have all of them started when I start a new container.
I tried to use /sbin/init, but it doesn't work in docker.
The general opinion is to use a process manager to do this. I won't go into the details here, since I wrote a blog post about it: http://blog.trifork.com/2014/03/11/using-supervisor-with-docker-to-manage-processes-supporting-image-inheritance/
Note that another rather general opinion is to split up your containers. MySQL generally runs in a separate container, but of course you can try to get that working later on as well :)
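If you do keep several processes in one container for now, a minimal supervisord setup looks roughly like this; the program names and paths below are illustrative, and the blog post above covers a fuller version.

# /etc/supervisor/supervisord.conf: run several services under one process manager
[supervisord]
; keep supervisord in the foreground as the container's main process
nodaemon=true

[program:apache]
command=/usr/sbin/apache2ctl -D FOREGROUND

[program:memcached]
command=/usr/bin/memcached -u memcache

[program:mysql]
command=/usr/bin/mysqld_safe

and the image would then start it via something like:

CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/supervisord.conf"]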
I see that this is an old topic; however, for someone who has just come across it: docker-compose can be used to connect multiple containers, so most of the processes can be split up into different containers. Furthermore, as mentioned earlier, different process managers can be used to run processes simultaneously; the one I would like to mention is Chaperone. I find it really easy to use and slightly better than supervisor!
Docker Compose and docker-sync: you cannot go wrong applying this concept.
-Glynn