Using Docker with official Progress OpenEdge RDBMS images

I found out that Progress has provided official docker images for their RDBMS.
I managed to pull the following image:
docker pull store/progresssoftware/oedb:12.2.3_adv-ent
I tried following the instructions to set it up, but they seem to ask you to edit files inside the image.
I'm not totally sure whether they want me to use only the zip versions of the images or pull the images directly from Docker Hub. Or is the idea to create my own Dockerfile that uses these as base images, and then set up the required files and changes there? I couldn't find anyone using these images on the web.
Could somebody provide me with an example docker run command or Dockerfile to use these images?

Beware that these images are for development and testing purposes only; they are not supported for production.
The custom container images can then be used to bring up and dispose of database instances on demand for the purposes of incrementally building and testing OpenEdge applications.

docker load vs docker pull
You use docker load when you have a Docker image in archive format. Sometimes you do not want to push an image to a public registry and you do not have a private one.
In that case you can use docker save. This command writes an image to a tar archive, which you can then distribute any way you like, for example over a private FTP server.
To get the image onto another machine you need to:
Download the archive
Run docker load
When the image is published to a Docker registry and you have the rights to access it, you can use the docker pull command instead. This is the preferred approach.
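A minimal sketch of the save / transfer / load flow - the image name, archive name, and target host are placeholders:
# On the machine that already has the image: write it to a tar archive
docker save my-image:1.0 -o my-image-1.0.tar
# Transfer the archive to the target machine, e.g. via scp
scp my-image-1.0.tar user@target-host:/tmp/
# On the target machine: load the archive into the local image store
docker load -i /tmp/my-image-1.0.tar
# Check that the image is now available locally
docker images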
Unfortunately I have no idea how to run this enterprise tool, but they provide instructions here: https://docs.progress.com/bundle/openedge-database-docker-container/page/Run-an-OpenEdge-database-Docker-container-image.html
The template from the documentation is:
docker run -d -p <database_server_port>:<database_server_port> -p <database_minport>-<database_maxport>:<database_minport>-<database_maxport> -e DB_BROKER_PORT=<database_server_port> -e DB_MINPORT=<database_minport> -e DB_MAXPORT=<database_maxport> <custom_image_name>
so you could use it as:
docker run -d -p 5444:5444 -p 5435-5440:5435-5440 -e DB_BROKER_PORT=5444 -e DB_MINPORT=5435 -e DB_MAXPORT=5440 store/progresssoftware/oedb:12.2.3_adv-ent
UPD: correct port forwarding syntax
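If you would rather bake these settings into a custom image, as the question suggests, a hypothetical Dockerfile using the pulled image as a base might look roughly like this; the environment variables are the same ones from the docker run template above, so check the linked Progress documentation for what your image version actually supports:
# Assumes the image honors these variables the same way as the -e flags above
FROM store/progresssoftware/oedb:12.2.3_adv-ent
ENV DB_BROKER_PORT=5444 \
    DB_MINPORT=5435 \
    DB_MAXPORT=5440
EXPOSE 5444 5435-5440
You could then build it with docker build -t my-oedb . and run it with the same -p mappings as in the example above.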

Related

Docker: how to handle "latest" in a CI context with automatic updates but without redundant downloads?

What I want to achieve is to give a colleague a Docker image to run locally (via docker run) on his PC. I build it frequently via GitLab CI and push it with a version tag (SemVer 2.0) to Nexus.
My colleague gets a simple bash script like this:
#!/bin/bash
docker run -it -p 8080:80 nexus.company.net/goodservice:latest --dependency-overrides=local
echo "find the good service under http://localhost:8080, have fun!"
("--dependency-overrides" is a simple method I just implemented so that he can run the whole thing without Redis, I replace the implementations in the DI container this way.)
Problem:
Once a version (say 1.0.1-pre.5) is downloaded, "latest" doesn't pick up any updates anymore.
I could EASILY fix it by using "--pull=always" on docker run, but it's a .NET image with an overall size of about 100 MB (it's Alpine-based already, but that is still a lot), and my colleague is on a metered 4G Internet connection.
Is there any method to make Docker check whether "latest" now points at something else? I didn't find anything in the documentation.
(If somebody from docker (or CNCF?) reads this: would be great to have an option like "--pull=updated". :-))
Any ideas?
Add a docker pull command to your script. It only downloads the layers that have changed, so if the image hasn't been updated, next to nothing is transferred.
#!/bin/bash
docker pull nexus.company.net/goodservice:latest
docker run -it -p 8080:80 nexus.company.net/goodservice:latest --dependency-overrides=local
echo "find the good service under http://localhost:8080, have fun!"
If you want to limit the amount of data that has to be downloaded, make sure that a new version typically only changes the last layer(s) of the image. That way your colleague only downloads those layers and not the entire image.
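As an illustration of the layering point, a rough .NET Dockerfile sketch - the base image tag and DLL name are placeholders for whatever your service actually uses:
# Base image and stable setup: these layers rarely change, so they stay
# cached on your colleague's machine between versions
FROM mcr.microsoft.com/dotnet/aspnet:8.0-alpine
WORKDIR /app
# The published application output changes on every release; keeping it
# last means only this small layer has to be downloaded again
COPY ./publish/ .
ENTRYPOINT ["dotnet", "GoodService.dll"]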

Some questions on Docker basics?

I'm new to Docker. Most of the tutorials on Docker cover the same things, and I'm just ending up with piles of questions and no real answers. I've come here after my fair share of Googling; kindly help me out with these basic questions.
When we install Docker, where does it get installed? Is it on our local computer or does it happen in the cloud?
Where do containers get pulled to? Is there a way I can see what is inside a container? (I'm using Ubuntu 18.04)
When we pull a Docker image or clone a repository from Git, where does this data get stored?
It looks like you are confused after reading too many documents. Let me try to put this in simple words; I hope it helps.
When we install Docker, where does it get installed? Is it on our local computer or does it happen in the cloud?
You install Docker on a machine you control, be it an on-premises VM, a cloud VM, or your own laptop.
Where do containers get pulled to? Is there a way I can see what is inside a container? (I'm using Ubuntu 18.04)
This question mostly comes down to terminology. We don't pull a container; we pull an image and run a container from it.
Quick terminology summary
Container -> A container is a running instance of an image: the packaged application code, configuration, and dependencies started as an isolated process.
Dockerfile -> The build recipe: the commands that describe how an image should be assembled.
Image -> An image is built from a Dockerfile. You use the image to create and run containers.
Yes, you can get a shell inside a running container. Use the command below:
docker exec -it <container-id> /bin/bash
When we pull a Docker image or clone a repository from Git, where does this data get stored?
You can pull open-source images from Docker Hub.
When you clone a Git project that is dockerized, look for the Dockerfile in that project and create your own image by building it:
docker build -t <yourimagename:tag> .
When you build or pull an image, it gets stored locally. You can list your local images with the docker images command.
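Putting those pieces together, a typical local workflow looks roughly like this (image and container names are just examples):
# Build an image from the Dockerfile in the current directory
docker build -t myapp:dev .
# List the images stored locally
docker images
# Run a container from that image
docker run -d --name myapp-dev myapp:dev
# List running containers
docker ps
# Open a shell inside the running container
docker exec -it myapp-dev /bin/bash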
The Docker daemon gets installed on your local machine, and everything you do with the Docker CLI is executed on your local machine and its containers.
(Not sure about the first part of your question.) You can easily get into your Docker containers with docker exec -it <container name> /bin/bash; for that the container needs to be running. Check running containers with docker ps.
(Again, I don't entirely understand your question.) The images that you pull get stored on your local machine as well. You can see all the images present on your machine with docker images.
Let me know if this was helpful and if you need any further information.

How to run docker-compose with docker image?

I've moved the image behind my docker-compose setup from the development machine to a server using docker save image-name > image-name.tar and cat image-name.tar | docker load. I can see that my image is loaded by running docker images. But when I want to start my server with docker-compose up, it says that there isn't any docker-compose.yml. And there really isn't any .yml file there. So what should I do about this?
UPDATE
When I copied all my project files to the server (including docker-compose.yml), everything started to work. But is this the normal approach, and why did I need to save/load the image first?
What you achieve with docker save image-name > image-name.tar and cat image-name.tar | docker load is that you put a Docker image into an archive and extract the image on another machine after that. You could check whether this worked correctly with docker run --rm image-name.
An image is just like a blueprint you can use for running containers. This has nothing to do with your docker-compose.yml, which is just a configuration file that has to live somewhere on your machine. You would have to copy this file manually to the remote machine you wish to run your image on, e.g. using scp docker-compose.yml remote_machine:/home/your_user/docker-compose.yml. You could then run docker-compose up from /home/your_user.
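For example, the whole transfer could look roughly like this - image name, user, and paths are placeholders:
# On the development machine: export the image and copy it together with
# the compose file to the server
docker save image-name -o image-name.tar
scp image-name.tar docker-compose.yml remote_machine:/home/your_user/
# On the server: load the image, then start the stack from the directory
# that contains docker-compose.yml
docker load -i /home/your_user/image-name.tar
cd /home/your_user && docker-compose up -d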
EDIT: Additional info concerning the updated question:
UPDATE When I copied all my project files to the server (including docker-compose.yml), everything started to work. But is this the normal approach, and why did I need to save/load the image first?
Personally, I have never used this approach of transferring a Docker image (but it's neat; I didn't know about it). What you would typically do is push your image to a Docker registry (either the official Docker Hub or a self-hosted one) and then pull it from there.
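A sketch of that registry-based flow, with a made-up registry address; the image: field in docker-compose.yml would reference the same tag:
# On the development machine: tag the image for the registry and push it
docker tag image-name registry.example.com/image-name:1.0
docker push registry.example.com/image-name:1.0
# On the server: pull the image and start the stack; docker-compose.yml
# refers to registry.example.com/image-name:1.0 in its image: field
docker pull registry.example.com/image-name:1.0
docker-compose up -d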

Why does Docker keep both the image and its container on the VM?

The problem I encounter when working with large images is that Docker seems to copy all the data when you create a container from one: a 25 GB image plus a container for it takes about 50 GB in total on the Docker VM. Am I doing something wrong, or does Docker always work like that? If so, why? E.g. in Git you can use the code directly after you clone the repo; most of the time you don't need to make yet another copy of a branch.
P.S. My use case is the following: I want to keep different versions of my MySQL database (currently it is changed exclusively by developers, and that doesn't happen very often), and I want fast restoration (the only restore path MySQL gives me is from a *.sql file, which takes 7 hours - far too long to be able to play with the DB freely).
MySQL databases use volumes, at least in the official image. You only need two containers with the same image using different named volumes:
docker volume create devdb
docker run --name devdb -v devdb:/var/lib/mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql
Then another one for you:
docker volume create mydb
docker run --name mydb -v mydb:/var/lib/mysql -p 3307:3306 -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql
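For the fast-restore use case, one common pattern is to snapshot the named volume itself instead of keeping whole image copies around; a rough sketch, assuming the devdb container and volume created above:
# Stop the container so the data files are consistent, then archive the
# volume contents to the host via a throwaway helper container
docker stop devdb
docker run --rm -v devdb:/var/lib/mysql -v "$(pwd)":/backup busybox tar czf /backup/devdb-snapshot.tar.gz -C /var/lib/mysql .
# Later: restore the snapshot into a fresh volume and point a new MySQL
# container at it with -v devdb-restored:/var/lib/mysql
docker volume create devdb-restored
docker run --rm -v devdb-restored:/var/lib/mysql -v "$(pwd)":/backup busybox tar xzf /backup/devdb-snapshot.tar.gz -C /var/lib/mysql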
As for the disk usage: Docker does not copy the whole image into a new container. Containers share the image's read-only layers and only add a thin writable layer on top (copy-on-write), so large extra usage normally comes from data written inside the container, which is exactly what the volumes above are meant to hold.
Regards

Updating a container created from a custom dockerfile

Before anything, I have read this question and the related links in it, but I am still confused about how to resolve this on my setup.
I wrote my own Dockerfile to install Archiva, which is very similar to this file. I created an image from the Dockerfile using docker build -t archiva . and have a container which I run using docker run archiva. As seen in the Dockerfile, the user data that I want to preserve is in a volume.
Now I want to upgrade to Archiva 2.2.0. How can I update my container so that the user data that's in the volume is preserved? If I change the Dockerfile by just changing the version number and run docker build again, it will just create another image.
Best practice
The --volume option of docker run enables sharing files between the host and container(s), and in particular lets you preserve user data across container replacements.
The problem is ..
.. it appears that you are not bind-mounting the data with --volume, so the user data live inside the container (and that's bad practice, because it leads to the situation you are in: being unable to upgrade the service easily).
One solution (the best IMO) is:
Back up the user data
Use the docker cp command ("Copy files/folders between a container and the local filesystem"):
docker cp [--help] CONTAINER:SRC_PATH DEST_PATH
Upgrade your Dockerfile
Edit your Dockerfile and change the version.
Use the --volume option
Use docker run -v /host/path/user-data:/container/path/user-data archiva
And you're good!
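Put together, the upgrade could look roughly like this - the container name, data paths, and tag are placeholders for whatever your setup uses:
# 1. Back up the user data out of the existing container
docker cp archiva-container:/container/path/user-data ./user-data-backup
# 2. Rebuild the image after changing the Archiva version in the Dockerfile
docker build -t archiva:2.2.0 .
# 3. Run the new container with the user data bind-mounted from the host
docker run -d --name archiva -v /host/path/user-data:/container/path/user-data archiva:2.2.0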
