Docker WebLogic 11g container network issues after restart

I'm setting up Oracle WebLogic 11g (10.3.6) in a Docker container (Docker 1.11) following Bruno's guide and Dockerfiles. I'm pulling the WL 11g files from the repository history since that version is not officially supported.
I have built all required components and am able to start containers and WebLogic just fine; however, after restarting the container, WebLogic slows down considerably.
The container starts with: CMD ["startWebLogic.sh"]. If I use the WL Admin Console to stop the server, or use docker stop <container_name>, then use docker start <container_name>, the container comes up, but Admin Console requests take 5+ minutes to complete.
Everything works fine on a fresh container using something like docker run -d --name wlsadmin --hostname wlsadmin -p 7001:7001 1036-domain, but as soon as the container is restarted, everything grinds to a halt.
I am not making any changes to the defaults. Simply starting a new container, stopping the container, and starting it back up again.
Does anyone have suggestions on how to troubleshoot this issue and get to the root cause?
I have also created WL 12.1 and WL 12.2 containers that all work successfully, even after restarts, but my legacy app only runs on WL 10.3.6, so I'm really trying to figure this out for 11g and am stumped.
Thanks for any help!

It turns out this is related to WebLogic, not Docker: specifically, how long the JVM takes to gather enough entropy to seed its random number generator.
While the Dockerfiles did attempt to compensate for this, the implementation was not working. Once I fixed the Dockerfiles, the Admin Console's performance returned to normal.
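For reference, the usual workaround for slow SecureRandom seeding on Linux is to point the JVM at the non-blocking /dev/urandom device. A minimal sketch of what that can look like in the container's startup environment; exactly where WebLogic picks this up depends on your domain scripts, so treat the placement as an assumption:
# The "/dev/./urandom" spelling defeats older JDKs' special-casing of
# "/dev/urandom", forcing the non-blocking entropy source to be used.
export JAVA_OPTIONS="$JAVA_OPTIONS -Djava.security.egd=file:/dev/./urandom"
startWebLogic.sh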

Related

Reboot Docker container from inside

I'm working with a Docker container running Debian 11, with a server inside.
I need to update this server and do other maintenance on a regular basis. I've written several scripts that can do it, but I've run into a serious problem.
If I want to update the server and other packages, I need to reboot the container.
I'm obviously able to do so from the computer Docker is installed on (in my case Docker Desktop running with WSL2 on Windows 10); I can reboot the container easily, but I need to automate it.
The simplest way would be to add a shutdown command to the scripts I've written. I was reading about it, but found nothing. Is there any way to reboot this container from the Debian system inside it? If not, how can it be achieved, and how complicated is it?
I tried invoking the standard Linux commands to shut down or reboot the system on the Debian inside the container.
I'm hoping for a guide on whether this is possible and worth the effort.
The only way to trigger a restart of a container from within the container is to first set a restart policy on the container, such as --restart=on-failure, and then simply stop the container, i.e., let the main process terminate itself (with a non-zero exit status in the on-failure case). The Docker engine will then restart the container.
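A minimal sketch of that approach, assuming a hypothetical image my-debian-server whose main process is a shell entrypoint script:
# on-failure only restarts the container if the main process exits non-zero.
docker run -d --name myserver --restart=on-failure my-debian-server
# Inside the container, the entrypoint script can end itself after updating;
# the non-zero status makes the engine bring the container back up:
#   apt-get update && apt-get -y upgrade
#   exit 1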
This, however, is not the way Docker is intended to be used! Docker containers are not VMs and instead are meant to be ephemeral:
By "ephemeral", we mean that the container can be stopped and destroyed, then rebuilt and replaced with an absolute minimum set up and configuration.
This means you shouldn't be updating the server within a running container; instead, you should update/rebuild the image and start a new container from it!
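In practice that workflow looks roughly like this (image and container names are placeholders):
# Bake the updated packages into a new image...
docker build -t my-debian-server:latest .
# ...then replace the old container with a fresh one built from it:
docker stop myserver && docker rm myserver
docker run -d --name myserver my-debian-server:latest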

Docker Container Connection Issue After Exiting and Restarting

I'm using Docker Desktop for Mac to run a Docker container running FileMaker Server 19 on Ubuntu Server. When I start up Docker Engine from scratch, i.e., no daemons running, then start the container, all works as expected. I can open FileMaker's admin console in a browser and I can open the hosted database with FileMaker Pro client app.
But if I stop the container, quit Docker Desktop, and try to run the container again, it starts up but I can't establish connections to it with either the FileMaker Pro client or a browser. The solution I've found is to quit the Docker processes that continue to run in the background and make the Docker engine restart from scratch. This obviously isn't desirable, and it indicates to me that something isn't configured correctly in the network connection to the container.
I'm new to Docker, so apologies in advance if I'm missing something very basic. I searched for a solution online but can't find one.

Attaching IDE to my backend docker container stops that container's website from being accessible from host

Summary
I'm on Mac. I have several Docker containers, and I can run all of them using docker-compose up, and everything works as expected: for instance, I can access my backend container by visiting http://localhost:8882/ in my browser, since port 8882 is mapped to the same port on the host with:
ports:
- "8882:8882"
Problems start when I try to attach an IDE to the backend container so as to be able to develop "from inside" that container.
I've tried using VS Code's "Remote - Containers" plugin following this tutorial, and also PyCharm Professional, which can run Docker configurations out of the box. In both cases I had the same result: I run the IDE configuration to attach to the container, and its local website suddenly stops working, showing "this site can't be reached".
When using pycharm, I noticed that Docker Desktop shows that the backend container changed its port to 54762. But I also tried that port with no luck.
I also used this command:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id
to get the container IP (172.18.0.4) and tried that with both ports; again, same result.
Any ideas?
Pycharm configuration
Interpreter: this part works, in the sense that I can browse the code of the libraries installed inside the container.
Run/Debug configuration: this also succeeds, in the sense that I can start it and it seems to attach correctly to the backend container... though then the problem described above appears.
So, there were many things at play here, since this is an already huge project full of technical debt.
But the main one is that the docker-compose file I was using ran the server with uWSGI in production mode, which interfered with many things... among them PyCharm's ability to successfully attach to the running container, debug, and so on.
I was eventually able to create a new docker-compose.dev.yml file that overrides the main docker-compose file, changing only the backend server command to run Flask in development mode. That fixed everything.
Be mindful that the flask run command binds to 127.0.0.1 by default, so inside a Docker container the website isn't reachable from the host until you pass the --host=0.0.0.0 option. More in https://stackoverflow.com/a/30329547/5750078
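As a rough sketch of the override described above (the service name backend and the Flask invocation are assumptions about the project, not its actual configuration; depending on your Compose version you may also need a version: line):
cat > docker-compose.dev.yml <<'EOF'
services:
  backend:
    # Only the command is overridden; build, ports, volumes, etc. still
    # come from the main docker-compose.yml. --host=0.0.0.0 makes the dev
    # server reachable from outside the container.
    command: flask run --host=0.0.0.0 --port=8882
EOF
# Compose merges the files left to right, with later files winning:
docker-compose -f docker-compose.yml -f docker-compose.dev.yml up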

Docker Container still running after stop and remove

I'm currently trying to learn docker basics.
I'm looking to get used to the CLI, so I pulled/ran some Docker containers, namely an Apache HTTP Server and the getting-started container. Afterwards, I ran docker ps, checked the IDs, and ran
docker stop <id>
docker rm <id>
Now I tried setting up a local postgres db. So I pulled postgres and pgadmin and made them run on ports 5432 and 80 respectively.
Thinking it should all be set up, I visited localhost:80 and was welcomed by the getting-started page again, despite having run stop and rm.
I thought maybe something went wrong and the ports weren't freed by Docker or something similar. So I completely restarted the docker service and stopped every process related to Docker I could find. Finally I restarted my computer.
Looking at the task manager now, I could not find any traces of Docker processes or the like running on my machine. Despite that, if I visit localhost:80 I am still greeted with the getting-started page, and visiting localhost:8080 I am still greeted with Apache's "It works!" message.
I am at a loss here, since there is no Docker Service running, yet I am still accessing the Apache Server running locally.
Edit 1: I do not have any other servers running on my local machine. I never installed nor started any Apache HTTP server on my machine.
Hopefully reproducible example (the exact steps I took):
docker pull httpd
docker run -d -p 80:80 docker/getting-started
Verify that the getting-started container is serving localhost:80
# This shows the getting started process running with id <id>
docker ps
# Stopping and removing the process
docker stop <id>
docker rm <id>
# Remove the getting started image
docker image rm <your_id>
Restart PC, go to localhost:80.
For me, this shows the getting started page again.
Then close any processes related to Docker (e.g. Docker Desktop, etc.). Make sure that com.docker.backend.exe is also not running.
I had a similar issue. Clearing the browser cache fixed it for me (as recommended by @derpischer).
For example, in Firefox: go to Settings; under "Cookies and Site Data", use the "Clear Data" button and make sure "Cached Web Content" is selected.
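A quick way to rule the browser cache in or out is to hit the port with a client that keeps no cache:
# If this fails to connect while the browser still renders the page,
# the browser is showing you a stale cached copy.
curl -I http://localhost:80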

Reconnect to running bash in Docker Container

I am not much of a Docker expert, but I managed to start my Python script in a bash session inside a Docker container. After network connection errors, I lost my connection to the Ubuntu server that Docker is running on.
After reconnecting to the server, I can still connect to the Docker container, which is still running, but I am not able to reconnect to the bash session my Python script is running in.
So, how do I reconnect to the Docker container's bash session where my script is running, to see its progress?
Use docker logs (see the documentation).
Note that you will not be able to interact with or end the Python process this way.
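For example, with the container name (or ID) shown by docker ps:
# Stream the container's stdout/stderr, which is where an unredirected
# script's output ends up; -f keeps following new output as it appears.
docker logs -f <container_name>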
As a general rule, there is no way to "recover" a lost bash session.
A workaround could be to run the script inside a terminal multiplexer like screen or tmux, which lets you attach to / recover the session from multiple terminals.
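A rough sketch of that workaround, assuming tmux is installed in the image, the container is named mycontainer, and my_script.py stands in for the actual script:
# Start the script inside a named tmux session in the container:
docker exec -it mycontainer tmux new -s myjob
# (inside tmux) run: python my_script.py, then detach with Ctrl-b d
# After losing the connection, reattach to the same session:
docker exec -it mycontainer tmux attach -t myjob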
I fear the current process is gone; you can only check the logs using docker logs, but chances are your job died with your session (unless you nohup'ed it).
