How to start a docker container on Ubuntu 19.10 boot?

How do I configure my system to start a docker container automatically on boot?

I'm using a local docker container to access a database. Every time I restart my system, I have to start the container manually with the command docker start container_name.
To resolve this, I added the command to Startup Applications.
I followed the steps below:
Open Startup Applications with the command gnome-session-properties;
Add the command docker start container_name.
Done. The container will now be started on every boot; the entry this creates is sketched below.
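For reference (my own sketch, not part of the original answer), Startup Applications saves each entry as a .desktop file under ~/.config/autostart, so the result should look roughly like the following; the Name value and container_name are placeholders:
[Desktop Entry]
Type=Application
Name=Start database container
Exec=docker start container_name
X-GNOME-Autostart-enabled=true
You can also create or edit that file by hand if you prefer not to go through the GUI.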

Related

Docker container Ubuntu SSH access

I created a new docker container (from the standard Ubuntu latest image).
I would like to connect to it through SSH. I followed the instructions in this excellent link https://linuxconfig.org/how-to-connect-to-docker-container-via-ssh
Once I stop my container and restart it, the SSH service is not available anymore.
I have to start it manually every time.
I also tried the command "systemctl enable ssh" to make SSH permanent.
The result is as follows:
"Synchronizing state of ssh.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable ssh"
So everything seems to be OK, but when I stop the container and restart it, the problem is still present: no SSH service is started on the Ubuntu container.
Does anyone know how to make SSH access permanent in this case?
Thank you all in advance for your help :)
You have to write a customized Dockerfile and set up the SSH configuration inside it, so that each time you run the container it has a working SSH daemon.
The reason the SSH configuration is lost when you rerun the container is that you are creating a new container from the original (non-SSH-configured) Ubuntu image. If you want to reuse the container you already configured, get the container list with docker container ls --all, copy the ID, and then start that container again with docker start {{ID}}.
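As a rough illustration of that approach (a sketch under my own assumptions, not the original answer's Dockerfile; the base image tag and the root password are placeholders you should change), a minimal Dockerfile that bakes an SSH daemon into the image could look like this:
FROM ubuntu:latest
# Install sshd, create its runtime directory, set a throwaway root password,
# and allow root login over SSH (fine for local testing only)
RUN apt-get update && apt-get install -y openssh-server \
    && mkdir -p /var/run/sshd \
    && echo 'root:changeme' | chpasswd \
    && sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
EXPOSE 22
# Run sshd in the foreground so it stays up as the container's main process
CMD ["/usr/sbin/sshd", "-D"]
Build it with docker build -t ubuntu-ssh . and run it with docker run -d -p 2222:22 ubuntu-ssh; you should then be able to connect with ssh root@localhost -p 2222, and the SSH daemon will be there every time you start a container from that image.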

What happens when we restart a running CA server container?

If I restart my running CA server container, will my network stop working after the restart, and will I lose any data? Is it okay to do this or not?
I just don't want to lose any data because my network is in a production environment.
docker stop preserves the container, so no data is lost.
What you need to watch out for is the docker rm command or the docker-compose down command.
These commands remove the container entirely. If the container's state is important, you can back up an existing container to an image using the docker commit command.
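For example (my own sketch rather than part of the answer; the container name, image name, and tag are hypothetical), a backup before a risky operation might look like:
# Snapshot the current state of the CA container into a local image
docker commit ca-server ca-server-backup:before-restart
# Optionally archive that image to a tarball you can copy somewhere safe
docker save -o ca-server-backup.tar ca-server-backup:before-restart
Note that docker commit captures the container's filesystem but not data stored in volumes, so any volumes need to be backed up separately.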

Cloud-init to configure an Ubuntu docker container?

Is it possible to use a cloud-init configuration file to define commands to be executed when a docker container is started?
I'd like to test the provisioning of an Ubuntu virtual machine using a docker container.
My idea is to provide the same cloud-init config file to an Ubuntu docker container.
No. If you want to test a VM setup, you need to use actual virtualization technology. The VM and Docker runtime environments are extremely different and you can't just substitute one technology for the other. A normal Linux VM startup will run a raft of daemons and startup scripts – systemd, crond, sshd, ifconfig, cloud-init, ... – but a Docker container will start none of these and will only run the single process in the container.
If your cloud-init script is ultimately running a docker run command, you can provide an alternate command to that container the same way you would with docker run on your development system. But a Docker container usually won't look to places like the EC2 metadata service to find its own configuration, and it would be unusual for a container to run cloud-init at all.
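To illustrate passing the provisioning steps as the container's command instead of relying on cloud-init (this is my own example; the image tag and packages are placeholders):
# The container runs exactly this one command as its main process and is removed when it exits
docker run --rm ubuntu:20.04 bash -c "apt-get update && apt-get install -y nginx && nginx -v"
Anything a cloud-init user-data script would normally do at first boot has to be expressed either as the image's build steps or as a command like this.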

How to run a build in a docker container on CoreOS?

I have installed CoreOS as my build environment and set up a Jenkins server as a docker container in CoreOS. I created a freestyle project on the Jenkins server to build my project. How can I configure the build to run in docker containers on CoreOS?
So the structure is: CoreOS is my physical machine, the Jenkins server is running in a docker container on CoreOS, and I want to launch more docker containers to run my application. How can I achieve this? The hardest part, I think, is launching a docker container on CoreOS from a Jenkins job. I want to start a new docker container every time for a build.
I'm not familiar with Jenkins, but I would suggest that you take a look at the docker-machine and docker-compose utilities.
You should be able to have Jenkins use one of those to have the host start your build container.
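To make that concrete (a sketch under my own assumptions, not the answerer's setup; the image name, mount path, and build command are all placeholders), a minimal docker-compose.yml for a throwaway build container might look like:
version: "2"
services:
  build:
    image: my-build-image:latest   # placeholder image containing your build toolchain
    volumes:
      - ./:/src                    # mount the checked-out workspace into the container
    working_dir: /src
    command: make                  # placeholder build command
A Jenkins shell build step could then run docker-compose run --rm build, so a fresh container is created for every build and removed afterwards.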

Docker-Compose with Docker 1.12 "Swarm Mode"

Does anyone know how (if possible) to run docker-compose commands against a swarm using the new docker 1.12 'swarm mode' swarm?
I know with the previous 'Docker Swarm' you could run docker-compose commands directly against the swarm by updating the DOCKER_HOST to point to the swarm master:
export DOCKER_HOST="tcp://123.123.123.123:3375"
and then simply execute commands as if you were running them against a single instance of Docker engine.
OR is this functionality something that docker-compose bundle is replacing?
I realized my question was vaguely worded and actually has two parts to it. Eventually however, I was able to figure out solutions to both issues.
1) Can you run commands directly 'against' a swarm / swarm-mode in Docker 1.12 running on a remote machine?
While you can't really run commands 'against' a swarm, you CAN run docker service commands on the master node of a swarm in order to run services on that swarm.
You can also configure the Docker daemon (the docker daemon that is the master node of the swarm) to listen on TCP ports in order to externally expose the Docker API.
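For instance (my own illustration rather than part of the original answer; the service name and image are placeholders), on the manager node you could run:
# Create a replicated service on the swarm; the manager schedules the tasks across nodes
docker service create --name web --replicas 3 --publish 80:80 nginx:alpine
# List the services running on the swarm
docker service ls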
2) Can you still use docker-compose files to start services in Docker 1.12 swarm-mode?
Yes, although these features are currently part of Docker's "experimental" features. This means you must download/install the version that includes the experimental features (check the github).
You essentially follow these instructions https://github.com/docker/docker/blob/master/experimental/docker-stacks-and-bundles.md
to go from the docker-compose.yml file to a distributed application bundle and then to an application stack (this is when your services are actually run).
$ docker-compose bundle
$ docker deploy [OPTIONS] STACK
Here's what I did:
On my remote swarm manager node I started docker with the following options:
docker daemon -D -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375 &
This configures the Docker daemon to listen on the standard docker socket unix:///var/run/docker.sock AND on TCP port 2375 on all interfaces.
WARNING: I'm not enabling TLS here, just for simplicity.
On my local machine I update the docker host environment variable to point at my swarm master node.
$ export DOCKER_HOST="tcp://XX.XX.XX.XX:2375" (populate with your IP)
Navigate to the directory of my docker-compose.yml file
Create a bundle file from my docker-compose.yml file. Make sure to include the .dab extension.
docker-compose bundle --fetch-digests -o myNewBundleFile.dab
Create an application stack from the bundle file. Do not specify the .dab extension here.
$ docker deploy myNewBundleFile
Now, I'm still experiencing some networking-related issues, but I have successfully gotten my service up and running from my unmodified docker-compose.yml files. The network issues I'm experiencing are documented here: https://github.com/docker/docker/issues/23901
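Not part of the original walkthrough, but a quick way to sanity-check a deployment like this:
# On the manager node: list the services the stack created and their replica counts
docker service ls
# On any node: show the task containers actually running there
docker ps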
While the official support for Swarm mode in Docker Compose is still in progress, I've created a simple script that takes a docker-compose.yml file and runs docker service commands for you. See https://github.com/ddrozdov/docker-compose-swarm-mode for details.
It is not possible. Compose uses containers to create a client-side concept of a service. Docker 1.12 Swarm mode introduces a new server-side concept of a service.
You are correct that docker-compose bundle; docker stack deploy is the way to get a Compose file running in Swarm Mode.
