CouchDB Docker on WSL2: no setup possible

I was trying to have some fun with Docker and test out CouchDB.
I'm working with Docker on Windows 10, running on the WSL2 backend.
I understood that it was easy enough to run something like:
docker run -d --name my-couchdb -e COUCHDB_USER=admin -e COUCHDB_PASSWORD=password couchdb:latest -p 5984:5984
And actually, it is running "fine."
Fine in the sense that I can see from the logs that it is running OK, with no errors.
But at this point, I wanted to set it up, going to http://127.0.0.1:5984/_utils#setup or http://[::1]:5984/_utils#setup
As you can imagine, I cannot access it.
Any idea on how to troubleshoot it?
I'm no expert in Docker, and it is my first time with CouchDB, so I wanted to have a quick run, but I'm already stuck.
Thanks,
Ro

As per the documentation here https://docs.docker.com/engine/reference/commandline/run/ the syntax is
docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
Anything you put after the image acts as the command and its arguments, which is not what you want here: in your command the -p 5984:5984 comes after couchdb:latest, so it is passed to CouchDB instead of publishing the port.
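Moving -p before the image name should fix it; a minimal variant of the command from the question (same container name and credentials) would be:
docker run -d --name my-couchdb -p 5984:5984 -e COUCHDB_USER=admin -e COUCHDB_PASSWORD=password couchdb:latest
After that, http://127.0.0.1:5984/_utils#setup should be reachable from the Windows host.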

Related

Run Solr Docker Image on two different ports in a machine at same time

Is there a way to run the Solr Docker image on two different ports on the same machine at the same time?
I was referring to this: https://hub.docker.com/_/solr but didn't find much regarding my use case. Can anyone suggest a solution to achieve it?
Yes, you can. Do
docker run -p 8983:8983 -d solr
docker run -p 8984:8983 -d solr
and you'll have one instance on http://localhost:8983/ and one on http://localhost:8984/
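If you need to tell the two instances apart later (logs, stop, remove), you can also name them; solr1 and solr2 below are just example names:
docker run --name solr1 -p 8983:8983 -d solr
docker run --name solr2 -p 8984:8983 -d solr
Then docker logs solr1 or docker stop solr2 work by name instead of by container ID.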

How to store configuration and database outside orientdb docker container

I'm stuck trying to store OrientDB database and configuration outside of the docker container I'm running. This is the first time using both docker and orientdb so my confusion is multilevel.
Based on https://hub.docker.com/_/orientdb/ I have successfully run the command docker run -d --name orientdb -p 2424:2424 -p 2480:2480 -e ORIENTDB_ROOT_PASSWORD=rootpwd orientdb but I'm stuck trying to specify where on my local disk to store the data and configuration so it's not lost when the container is stopped/removed.
I tried adding the -v <databases_path>:/orientdb/databases option but to no avail. I'm probably missing something very basic (since this is my first hands on experience with docker and orientdb). Trying to set up volumes in docker desktop and other trial and error tests have also failed.
Can anyone help? Or point me to some tutorial where I can learn because I'm stuck.
Thanks to @nulldroid I finally figured it out. It was the syntax that messed me up, as usual. The following command worked for me. No need to set up volumes etc., just a correctly formatted path to the directory I had already created, using "/d/" at the beginning for the Windows drive "D:":
docker run -d --name orientdb -p 2424:2424 -p 2480:2480 -v /d/docker/test1/databases:/orientdb/databases -e ORIENTDB_ROOT_PASSWORD=root orientdb:latest
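If the configuration should also live outside the container, the image's Docker Hub page (linked in the question) shows a /orientdb/config mount point as well, so a second -v along the same lines should work; the host paths here are just examples:
docker run -d --name orientdb -p 2424:2424 -p 2480:2480 -v /d/docker/test1/databases:/orientdb/databases -v /d/docker/test1/config:/orientdb/config -e ORIENTDB_ROOT_PASSWORD=root orientdb:latest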

How to correctly work with docker on a very basic level?

I'm using Docker for Windows (Education Edition with Hyper-V) and am fairly new to Docker. My workflow feels a little bit complicated and I think there are better ways. Here's what I do:
When I develop with Docker containers, I add a Dockerfile to my project first.
Then I am going to build the container by running a command like docker build -t containername .
When Docker is done building, I am going to run the container with a command like docker run -p 8080:8080 containername (sometimes I add a volume at this point)
This runs the container and leaves my PowerShell in a state where I can read debug messages and so on from the container.
Then I'm testing and developing the application.
Once I'm done developing and testing, I need to CTRL + C in order to exit the running container.
Now comes the tricky part: say I forgot something and want to test it right away. I would again run docker build -t containername . and then docker run, BUT Docker would now tell me that the port is already taken. So I continue like this:
I search for my container with this command: docker ps
Once I find the name (e.g. silly_walrusbeard) I type docker stop silly_walrusbeard. Now I can build and run again, and the port is free.
How could I simplify this workflow? Is there an alternative to CTRL+C that also stops the container? Thanks for your suggestions!
List all current containers with docker ps -a. Kill them with docker kill <ID> and maybe docker rm <ID>.
And when you run new containers, use --rm to free the ports (among other things) automatically when the container stops:
docker run --rm -it containername
(I usually need the -it when running shells, but I'm not sure about PowerShell. Maybe you don't need it.)
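Putting it together for the workflow in the question, one option is to give the container a fixed name (myapp below is just an example) and let --rm clean it up when you hit CTRL + C:
docker build -t containername .
docker run --rm --name myapp -p 8080:8080 containername
When the container stops it is removed automatically, so port 8080 should be free for the next run; and if one is still hanging around, docker stop myapp works without hunting through docker ps.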

Any way to retrieve the command originally used to create a Docker container?

This question seems to have been often asked, but I cannot find any answer that correctly and clearly specifies how to accomplish this.
I often create test docker containers that I run for a while. Eventually I stop the container and restart it simply using docker start <name>. However, sometimes I am looking to upgrade to a newer image, which means deleting the existing container and creating a new one from the updated image.
I've been looking for a reliable way to retrieve the original 'docker run' command that was used to create the container in the first place. Most responses indicate to simply use docker inspect and look at the Config.Cmd element, but that is not correct.
For instance, creating a container as:
docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=Qwerty123<(*' -e TZ=America/Toronto -p 1433:1433 -v c:/dev/docker/mssql:/var/opt/mssql --name mssql -d microsoft/mssql-server-linux
using docker inspect will show:
$ docker inspect mssql | jq -r '.[0]["Config"]["Cmd"]'
[
"/bin/sh",
"-c",
"/opt/mssql/bin/sqlservr"
]
There are many issues created on github for this same request, but all have been closed since the info is already in the inspect output - one just has to know how to read it.
Has anyone created a utility to easily rebuild the command from the output of the inspect command? All the responses that I've seen refer to the wrong info, notably inspecting the Config.Cmd element while ignoring the Mounts, Config.Env, Config.ExposedPorts, Config.Volumes, etc. elements.
There are a few utilities out there which can help you.
Give them a try:
https://github.com/bcicen/docker-replay
https://github.com/lavie/runlike
If you want to know about more such cool tools around Docker, check this: https://github.com/veggiemonk/awesome-docker
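As an example, runlike is usually installed with pip and pointed at a container name; roughly like this (check its README for the exact usage):
pip install runlike
runlike mssql
This should print a docker run command reconstructed from docker inspect, including the ports, environment variables and bind mounts of the mssql container from the question.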
Of course docker inspect is the way to go, but if you just want to "reconstruct" the docker run command, you have
https://github.com/nexdrew/rekcod
It says:
Reverse engineer a docker run command from an existing container (via docker inspect).
Another way is Christian G's answer at
How to show the run command of a docker container
using bash-preexec
I've had the same issue and ended up looking at .bash_history file to find the command I used.
This would give you all the docker create commands you've run:
grep 'docker create' .bash_history
Note: if you ran docker create in that same session you'll need to logout/login for the .bash_history to flush to disk.

How can I run containers detached and have them automatically removed when they exit?

Why are -d and --rm conflicting arguments in Docker?
$ docker run -d --rm image
Conflicting options: --rm and -d
I have a number of containers that run unit/functional/integration tests. The Docker containers start, run the tests, and then stop. I run them detached since I only care about the results, but I'd also like the containers to be removed after the container exits. What would be a good way to do this?
Currently (Docker v1.1.1), this functionality isn't supported.
The developer of the --rm feature explains the reasons for that in his PR #1589:
It's currently supported only when -d isn't provided. It doesn't make sense to automatically remove a container created via docker run -d. There are two reasons why this is implemented this way: 1) we might want to retrieve some kind of exit status or logs before removing the container 2) making this run on the server side is difficult in the current architecture.
The good news is that someone already opened an issue to fix this, so you might follow the discussion there.
Also, a workaround isn't too complicated, you can run your containers using a wrapper script as follows:
ID=$(docker run -d ubuntu sleep 3)
docker wait $ID
docker rm $ID
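Since docker wait prints the container's exit code, the wrapper can also capture the test result before removing the container, for example:
ID=$(docker run -d ubuntu sleep 3)
STATUS=$(docker wait $ID)
docker logs $ID
docker rm $ID
echo "container exited with status $STATUS"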
These options no longer conflict as of Docker version 1.13.0
There was a pull request that moves the --rm option daemon-side and allows for running containers detached with the removal option: https://github.com/docker/docker/pull/20848
