Local Nexus in Docker responds `Empty reply from server` to curl - docker

I tried to set up a local Nexus using Docker as per the instructions at https://hub.docker.com/r/sonatype/nexus3/
I ran the command below and it seemed to succeed:
docker run -d -p 8081:8081 --name nexus sonatype/nexus3
But when I curl http://localhost:8081/
it states
curl: (52) Empty reply from server
Did I miss anything?
UPDATE
Apparently, when I run docker logs -f nexus
it shows:
java.io.IOException: Function not implemented
at sun.nio.fs.LinuxWatchService.<init>(LinuxWatchService.java:64)
at sun.nio.fs.LinuxFileSystem.newWatchService(LinuxFileSystem.java:47)
at com.install4j.runtime.launcher.FullLauncherHelper.watchDirectory(FullLauncherHelper.java:52)
at com.install4j.runtime.launcher.util.SingleInstance.createStartupListener(SingleInstance.java:108)
at com.install4j.runtime.launcher.util.SingleInstance.check(SingleInstance.java:95)
at com.install4j.runtime.launcher.util.SingleInstance.checkForCurrentLauncher(SingleInstance.java:31)
at com.install4j.runtime.launcher.UnixLauncher.checkSingleInstance(UnixLauncher.java:88)
at com.install4j.runtime.launcher.UnixLauncher.main(UnixLauncher.java:67)
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00000040a5fe0678, pid=1, tid=0x00000040a5fe1700
#
# JRE version: OpenJDK Runtime Environment (8.0_282-b08) (build 1.8.0_282-b08)
# Java VM: OpenJDK 64-Bit Server VM (25.282-b08 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# C 0x00000040a5fe0678
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /opt/sonatype/nexus/hs_err_pid1.log
Compiled method (c1) 5136 467 3 org.apache.felix.resolver.util.OpenHashMap::mix (12 bytes)
total in heap [0x000000400cf73f10,0x000000400cf74208] = 760
relocation [0x000000400cf74038,0x000000400cf74060] = 40
main code [0x000000400cf74060,0x000000400cf74120] = 192
stub code [0x000000400cf74120,0x000000400cf741b0] = 144
oops [0x000000400cf741b0,0x000000400cf741b8] = 8
metadata [0x000000400cf741b8,0x000000400cf741c0] = 8
scopes data [0x000000400cf741c0,0x000000400cf741d0] = 16
scopes pcs [0x000000400cf741d0,0x000000400cf74200] = 48
dependencies [0x000000400cf74200,0x000000400cf74208] = 8
#
# If you would like to submit a bug report, please visit:
# https://bugzilla.redhat.com/enter_bug.cgi?product=Red%20Hat%20Enterprise%20Linux%208&component=java-1.8.0-openjdk
#
qemu: uncaught target signal 6 (Aborted) - core dumped
UPDATE
When trying on an Intel MacBook Pro, everything works fine. The issue happens on my M1 MacBook Pro.
I tried export DOCKER_DEFAULT_PLATFORM=linux/amd64 as per https://stackoverflow.com/a/66900911/3286489, but the issue still persists.
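For completeness, the platform can also be forced per run with the --platform flag instead of the environment variable; a minimal sketch (note this still emulates amd64 under qemu on the M1, so the crash above may well remain):
docker rm -f nexus
docker run -d --platform linux/amd64 -p 8081:8081 --name nexus sonatype/nexus3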

The sonatype/nexus3 image's compatibility issues on Apple Silicon persist.
For now we can use
https://hub.docker.com/r/klo2k/nexus3
instead:
docker run -d -p 8081:8081 --name nexus klo2k/nexus3
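If you also want the repository data to survive container re-creation, a hedged variant with a named volume (this assumes the image keeps its data under /nexus-data like the official sonatype/nexus3 image; check the klo2k/nexus3 page to confirm the path):
docker volume create nexus-data
docker run -d -p 8081:8081 --name nexus -v nexus-data:/nexus-data klo2k/nexus3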

Related

Port 80 refused - DigitalOcean droplet web console w/ CapRover instance

I have a CapRover instance on a DigitalOcean droplet that I created. I want to use the CapRover instance to run the CapRover sample apps.
I opened the DigitalOcean droplet web console in order to run a CapRover instance.
I ran the following commands:
ufw allow 80,443,3000,996,7946,4789,2377/tcp; ufw allow 7946,4789,2377/udp;
and got this:
Skipping adding existing rule
Skipping adding existing rule (v6)
Skipping adding existing rule
Skipping adding existing rule (v6)
I then ran this:
docker run -p 80:80 -p 443:443 -p 3000:3000 -v /var/run/docker.sock:/var/run/docker.sock -v /captain:/captain caprover/caprover
I got this:
Unable to find image 'caprover/caprover:latest' locally
latest: Pulling from caprover/caprover
Digest: sha256:39c3f188a8f425775cfbcdc4125706cdf614cd38415244ccf967cd1a4e692b4f
Status: Downloaded newer image for caprover/caprover:latest
docker: Error response from daemon: driver failed programming external connectivity on endpoint priceless_sammet (9da9028cfc4873818f113458237ebd00f9c64fa648b853730a60b10bea39c720): Bind for 0.0.0.0:3000 failed: port is already allocated.
I tried changing the ports to:
docker run -p 81:81 -p 444:444 -p 3321:3321 -v /var/run/docker.sock:/var/run/docker.sock -v /captain:/captain caprover/caprover
and got this:
Captain Starting ...
Installing Captain Service ...
Installation of CapRover is starting...
For troubleshooting, please see: https://caprover.com/docs/troubleshooting.html
>>> Checking System Compatibility <<<
Docker Version passed.
Ubuntu detected.
X86 CPU detected.
Total RAM 1033 MB
Are your trying to run CapRover on a local machine or a machine without public IP?
In that case, you need to add this to your installation command:
-e MAIN_NODE_IP_ADDRESS='127.0.0.1'
Otherwise, if you are running CapRover on a VPS with public IP:
Your firewall may have been blocking an in-use port: 80
A simple solution on Ubuntu systems is to run "ufw disable" (security risk)
Or [recommended] just allowing necessary ports:
ufw allow 80,443,3000,996,7946,4789,2377/tcp; ufw allow 7946,4789,2377/udp;
See docs for more details on how to fix firewall issues
Finally, if you are an advanced user, and you want to bypass this check (NOT RECOMMENDED),
you can append the docker command with an addition flag: -e BY_PASS_PROXY_CHECK='TRUE'
Installation failed.
Error: Port seems to be closed: 80
at Request._callback (/usr/src/app/built/utils/CaptainInstaller.js:149:24)
at Request.self.callback (/usr/src/app/node_modules/request/request.js:185:22)
at Request.emit (events.js:400:28)
at Request.<anonymous> (/usr/src/app/node_modules/request/request.js:1154:10)
at Request.emit (events.js:400:28)
at IncomingMessage.<anonymous> (/usr/src/app/node_modules/request/request.js:1076:12)
at Object.onceWrapper (events.js:519:28)
at IncomingMessage.emit (events.js:412:35)
at endReadableNT (internal/streams/readable.js:1334:12)
at processTicksAndRejections (internal/process/task_queues.js:82:21)
How can I open ports 80, 443, and 3000 so that I can run the CapRover instance?
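Before retrying on the default ports, it may help to see what is already bound to port 3000 and free it; a diagnostic sketch (assuming a standard Docker CLI and iproute2 on the droplet, where <container-id> is whatever the first command reports):
docker ps --filter "publish=3000"
ss -tlnp | grep ':3000'
docker stop <container-id> && docker rm <container-id>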

Cannot run Gazebo 9 in Docker with --privileged on Ubuntu 18.04

I have been stuck on this for quite a while now. I have tried searching and trying things, but I am getting nowhere.
My setup is as follows:
Host
Linux distro: Arch Linux
Kernel version: 5.14.2
Docker version: 20.10.8, build 3967b7d28e
NVIDIA driver version: 470.63.01-1
NVIDIA Container Toolkit version: 1.5.0-2, cgroups disabled
AMD GPU driver: xf86-video-amdgpu 21.0.0-1
Container
base image: ubuntu:18.04
Command line: docker run -it --rm --privileged --gpus all -e DISPLAY=$DISPLAY -e XAUTHORITY=~/.Xauthority --network host --volume /tmp/.X11-unix/:/tmp/.X11-unix --volume $XAUTHORITY:/root/.Xauthority gazebo:libgazebo9-bionic gazebo
Expected results
Expected the Gazebo window to open with hardware acceleration, using privileged access.
Actual results
On using --privileged:
si_init_perfcounters: max_sh_per_se = 2 not supported (inaccurate performance counters)
X Error of failed request: BadAlloc (insufficient resources for operation)
Major opcode of failed request: 149 ()
Minor opcode of failed request: 2
Serial number of failed request: 35
Current serial number in output stream: 36
Without --privileged, specifying the graphics cards manually via --device:
the Gazebo window opens with hardware acceleration and works smoothly, as expected.
Detailed description
I was actually trying to run Gazebo 9 in a custom image I had created using ubuntu:18.04 as the base image. I referred to gazebo:libgazebo9-bionic, nvidia/cuda:11.4.1-cudnn8-devel-ubuntu18.04 and ros:melodic-desktop while writing the Dockerfile. I even tried the same thing with Gazebo 11 on the same base image and got the same issue as above, whereas the exactly similar setup for Ubuntu Foxy works smoothly. I really need --privileged because I am going to be working on hardware a lot of the time. Please help me figure out how this should be fixed. Thanks a lot.
P.S. Other GUI applications (rviz, moveit, etc.) are running without any issues. I'm getting this issue with Gazebo only.
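For reference, a rough sketch of the non-privileged invocation that does work, with the GPUs passed in via --device instead (the exact /dev/dri paths are assumptions; adjust them for your own cards):
docker run -it --rm --gpus all \
  --device /dev/dri:/dev/dri \
  -e DISPLAY=$DISPLAY -e XAUTHORITY=~/.Xauthority \
  --network host \
  --volume /tmp/.X11-unix/:/tmp/.X11-unix \
  --volume $XAUTHORITY:/root/.Xauthority \
  gazebo:libgazebo9-bionic gazebo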
OK, found the solution!
Gazebo was working on osrf/ros:noetic-desktop-full but not on osrf/ros:melodic-desktop-full.
I got the exact same error:
X Error of failed request: BadAlloc (insufficient resources for operation)
X Error of failed request: BadAlloc (insufficient resources for operation)
Major opcode of failed request: 149 ()
The solution was to update the Mesa drivers in the ros:melodic image from Mesa 20.0.8 to Mesa 22.0.2:
sudo add-apt-repository ppa:kisak/kisak-mesa -y
sudo apt update
sudo apt upgrade -y
If you want to check your current Mesa version:
sudo apt install mesa-utils
glxinfo | grep Mesa
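To bake the same fix into an image instead of patching a running container, a minimal Dockerfile sketch (the base image and the software-properties-common dependency for add-apt-repository are assumptions on my part):
FROM osrf/ros:melodic-desktop-full
# add-apt-repository is provided by software-properties-common
RUN apt-get update && apt-get install -y software-properties-common
# pull a newer Mesa from the kisak PPA and upgrade the graphics stack
RUN add-apt-repository ppa:kisak/kisak-mesa -y && \
    apt-get update && apt-get upgrade -y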

Neo4j Docker Insufficient Memory

I'm having this weird issue with Neo4j in Docker. This is my docker-compose file:
version: '3'
services:
  neo4j:
    ports:
      - "7473:7473"
      - "7474:7474"
      - "7687:7687"
    volumes:
      - neo4j_data:/data
    image: neo4j:3.3
volumes:
  neo4j_data: {}
I'm using Docker Toolbox on Windows 10. I have tested this on two different machines and it works perfectly. However, on one machine, the container always crashes a few seconds after creation. Here's the log for this container:
$ docker container logs database_neo4j_1
Active database: graph.db
Directories in use:
home: /var/lib/neo4j
config: /var/lib/neo4j/conf
logs: /var/lib/neo4j/logs
plugins: /var/lib/neo4j/plugins
import: /var/lib/neo4j/import
data: /var/lib/neo4j/data
certificates: /var/lib/neo4j/certificates
run: /var/lib/neo4j/run
Starting Neo4j.
2018-11-18 12:50:41.954+0000 WARN Unknown config option: causal_clustering.discovery_listen_address
2018-11-18 12:50:41.965+0000 WARN Unknown config option: causal_clustering.raft_advertised_address
2018-11-18 12:50:41.965+0000 WARN Unknown config option: causal_clustering.raft_listen_address
2018-11-18 12:50:41.967+0000 WARN Unknown config option: ha.host.coordination
2018-11-18 12:50:41.968+0000 WARN Unknown config option: causal_clustering.transaction_advertised_address
2018-11-18 12:50:41.968+0000 WARN Unknown config option: causal_clustering.discovery_advertised_address
2018-11-18 12:50:41.969+0000 WARN Unknown config option: ha.host.data
2018-11-18 12:50:41.970+0000 WARN Unknown config option: causal_clustering.transaction_listen_address
2018-11-18 12:50:42.045+0000 INFO ======== Neo4j 3.3.9 ========
2018-11-18 12:50:42.275+0000 INFO Starting...
2018-11-18 12:50:48.632+0000 INFO Bolt enabled on 0.0.0.0:7687.
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 262160 bytes for Chunk::new
# An error report file with more information is saved as:
# /var/lib/neo4j/hs_err_pid6.log
#
# Compiler replay data is saved as:
# /var/lib/neo4j/replay_pid6.log
Looking at the additional log file /var/lib/neo4j/hs_err_pid6.log revealed the following information:
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 262160 bytes for Chunk::new
# Possible reasons:
# The system is out of physical RAM or swap space
# In 32 bit mode, the process size limit was hit
# Possible solutions:
# Reduce memory load on the system
# Increase physical memory or swap space
# Check if swap backing store is full
# Use 64 bit Java on a 64 bit OS
# Decrease Java heap size (-Xmx/-Xms)
# Decrease number of Java threads
# Decrease Java thread stack sizes (-Xss)
# Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
#
# Out of Memory Error (allocation.cpp:390), pid=6, tid=0x00007fee96f9bae8
#
# JRE version: OpenJDK Runtime Environment (8.0_181-b13) (build 1.8.0_181-b13)
# Java VM: OpenJDK 64-Bit Server VM (25.181-b13 mixed mode linux-amd64 compressed oops)
# Derivative: IcedTea 3.9.0
# Distribution: Custom build (Tue Oct 23 11:27:22 UTC 2018)
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
As it turns out, my Docker machine was set to only 1 GB of RAM, and the minimum requirement for Neo4j (according to their website) is 2 GB. I was able to solve the problem by replacing my default Docker machine according to this guide and giving the new one 4 GB of memory.
Essentially, I did the following:
$ docker-machine rm default
$ docker-machine create -d virtualbox --virtualbox-cpu-count=2 --virtualbox-memory=4096 --virtualbox-disk-size=50000 default
You may also need to restart Docker:
docker-machine stop
exit
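If replacing the Docker machine is not an option, another angle (untested here) is to shrink Neo4j's own memory footprint; the official neo4j image maps environment variables onto config settings, so a hedged variant of the compose file could look like this (the heap and page-cache values are illustrative assumptions):
version: '3'
services:
  neo4j:
    ports:
      - "7473:7473"
      - "7474:7474"
      - "7687:7687"
    volumes:
      - neo4j_data:/data
    image: neo4j:3.3
    environment:
      # dbms.memory.heap.max_size and dbms.memory.pagecache.size, illustrative values
      - NEO4J_dbms_memory_heap_max__size=512M
      - NEO4J_dbms_memory_pagecache_size=256M
volumes:
  neo4j_data: {}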
I haven't found anything about this problem online so far, so maybe this helps someone someday =).

Docker: Strider CD dashboard assets broken on installation

System
OSX 10.9.5
Docker version 1.4.1, build 5bc2ff8
Image niallo/strider (latest) a51ba391459b
Goal
Set up a Docker instance as per this guide
First attempt
I followed the guide steps, but when I ran the final image:
docker run -d -p 3000:3000 -p 27000:27017 -p 44:22 niallo/strider
I was not able to access the dashboard through localhost.
Second attempt: boot2docker ip
I did some googling and found these OS X-specific instructions, including using the boot2docker IP:
curl $(boot2docker ip 2> /dev/null):3000
This succeeds in getting the dashboard html.
Outstanding problem
However, in the browser the HTML loads, but the front-end assets scripts/app.js and styles/style.css are all broken links.
curl $(boot2docker ip 2> /dev/null):3000/styles/styles.css
Note: all the other assets are fine
Does anyone have any insight? I really want to play with Docker!
More details
Info
bash-3.2$ docker info
Containers: 9
Images: 29
Storage Driver: aufs
Root Dir: /mnt/sda1/var/lib/docker/aufs
Dirs: 47
Execution Driver: native-0.2
Kernel Version: 3.16.7-tinycore64
Operating System: Boot2Docker 1.4.1 (TCL 5.4); master : 86f7ec8 - Tue Dec 16 23:11:29 UTC 2014
CPUs: 4
Total Memory: 1.961 GiB
Name: boot2docker
ID: DA3Y:GVFJ:6NO7:FFNL:RNLW:2QXY:UV3F:YWAS:OBFF:42YG:LRU7:CBHV
Debug mode (server): true
Debug mode (client): false
Fds: 18
Goroutines: 17
EventsListeners: 0
Init Path: /usr/local/bin/docker
Docker Root Dir: /mnt/sda1/var/lib/docker
Processes
IMAGE COMMAND PORTS
niallo/strider:latest "/usr/bin/supervisor 0.0.0.0:44->22/tcp, 0.0.0.0:3000->3000/tcp, 0.0.0.0:27000->27017/tcp
All the Docker links in the tutorial you linked are broken. I can't even find the source Dockerfile for this build, so I don't recommend using it and can't really help you debug it.
However, the Strider GitHub page recommends using the docker-strider image for running Strider in Docker. The instructions seem straightforward, but you can ask another question on SO if you get stuck.

Docker error at higher core counts on a multi core machine

I am running a CentOS container using Docker on a RHEL 6.5 machine. I am trying to run an MPI application (MILC) on 16 cores.
My server has 20 cores and 128 GB of memory.
My application runs fine with up to 15 cores but fails with APPLICATION TERMINATED WITH THE EXIT STRING: Bus error (signal 7) when using 16 cores and up. At 16 cores and up, these are the messages I see in the logs:
Jul 16 11:29:17 localhost abrt[100668]: Can't open /proc/413/status: No such file or directory
Jul 16 11:29:17 localhost abrt[100669]: Can't open /proc/414/status: No such file or directory
Jul 16 11:29:17 localhost abrt[100670]: Can't open /proc/417/status: No such file or directory
A few details on the container:
kernel 2.6.32-431.el6.x86_64
Official centos from docker hub
Started container as:
docker run -t -i -c 20 -m 125g --name=test --net=host centos /bin/bash
I would greatly appreciate any and all feedback regarding this. Please do let me know if I can provide any further information.
Regards
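One common culprit for MPI bus errors inside containers is the small default /dev/shm (64 MB), which can run out as the number of ranks grows; a hedged sketch of the same run with a larger shared-memory segment (the 8g value is an illustrative assumption, and --shm-size requires a reasonably recent Docker):
docker run -t -i -c 20 -m 125g --shm-size=8g --name=test --net=host centos /bin/bash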
