Running Apache Atlas via Docker now fails - docker

I am trying to run the https://hub.docker.com/r/sburn/apache-atlas docker image. I have been using this image for months now with no issue. Starting today, when I run the image using this command:
docker run -d -p 21000:21000 --name atlas sburn/apache-atlas
the container is created and starts. When Apache Atlas starts, it now throws an error:
Configured for local HBase.
2023-02-08 21:19:34 Starting local HBase...
2023-02-08 21:19:34 Local HBase started!
2023-02-08 21:19:34
2023-02-08 21:19:34 Configured for local Solr.
2023-02-08 21:19:34 Starting local Solr...
2023-02-08 21:19:34 Local Solr started!
2023-02-08 21:19:34
2023-02-08 21:19:34 Creating Solr collections for Atlas using config: /apache-atlas/conf/solr
2023-02-08 21:19:34
2023-02-08 21:19:34 Starting Atlas server on host: localhost
2023-02-08 21:19:34 Starting Atlas server on port: 21000
2023-02-08 21:20:21 ...............................................
2023-02-08 21:20:21 Apache Atlas Server process started!
2023-02-08 21:20:21
2023-02-08 21:20:21 Running Apache Atlas with PID 775...
2023-02-08 21:20:21
2023-02-08 21:20:21     at org.apache.atlas.Atlas.main(Atlas.java:133)
2023-02-08 21:20:21 2023-02-09 02:20:21,286 ERROR - [main:] ~ Thread Thread[main,5,main] died (NIOServerCnxnFactory$1:92)
2023-02-08 21:20:21 org.apache.atlas.exception.AtlasBaseException: EmbeddedServer.Start: failed!
2023-02-08 21:20:21     at org.apache.atlas.web.service.EmbeddedServer.start(EmbeddedServer.java:116)
2023-02-08 21:20:21     at org.apache.atlas.Atlas.main(Atlas.java:133)
2023-02-08 21:20:21 Caused by: java.lang.NullPointerException
2023-02-08 21:20:21     at org.apache.atlas.util.BeanUtil.getBean(BeanUtil.java:36)
2023-02-08 21:20:21     at org.apache.atlas.web.service.EmbeddedServer.auditServerStatus(EmbeddedServer.java:129)
2023-02-08 21:20:21     at org.apache.atlas.web.service.EmbeddedServer.start(EmbeddedServer.java:112)
2023-02-08 21:20:21     ... 1 more
Nothing has changed on my computer. I've tried deleting the container and rerunning Docker. I've also cleaned/purged Docker and reset Kubernetes, but I still get this error on the first run. I can go to http://localhost:21000/, but it just displays:
HTTP ERROR 503 Service Unavailable
URI: /
STATUS: 503
MESSAGE: Service Unavailable
SERVLET: -
Does anyone know why this error would suddenly start coming up now, and how to fix it? The Docker image doesn't seem to have been updated in a while.
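One way to narrow down whether the image or one of its layers changed underneath you is to compare digests and, if needed, pin a known-good one. A rough sketch, assuming the standard Docker CLI and the container name used above (the digest value is a placeholder):
# Inspect the full startup output from the failing container
docker logs atlas
# Compare the locally cached digest with what Docker Hub currently serves
docker images --digests sburn/apache-atlas
docker pull sburn/apache-atlas:latest
# If the digests differ, run the previously working image by digest instead of by tag
docker run -d -p 21000:21000 --name atlas sburn/apache-atlas@sha256:<previously-working-digest>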

Related

Want to upgrade my Docker from 19.03.5 to latest where many containers are running

I am new to Docker and Linux. I am running Docker 19.03.5. I had SonarQube 7.9.2 installed, but I had to upgrade SonarQube, and after the upgrade I hit this issue. As per my understanding, I don't have any choice but to upgrade Docker too. Docker has many containers running, and I am afraid that this upgrade will affect others' work. Any suggestions are welcome!
From searching, I gather that all the containers will restart after Docker starts, but I still want to confirm whether there is anything I should keep in mind.
The issue to resolve is:
After the upgrade, the SonarQube container exits with this error:
Dropping Privileges
2022.05.02 12:18:50 INFO app[][o.s.a.AppFileSystem] Cleaning or creating temp directory /opt/sonarqube/temp
2022.05.02 12:18:50 INFO app[][o.s.a.es.EsSettings] Elasticsearch listening on [HTTP: 127.0.0.1:9001, TCP: 127.0.0.1:36393]
2022.05.02 12:18:50 INFO app[][o.s.a.ProcessLauncherImpl] Launch process[ELASTICSEARCH] from [/opt/sonarqube/elasticsearch]: /opt/sonarqube/elasticsearch/bin/elasticsearch
could not find java in ES_JAVA_HOME at /usr/lib/jvm/java-11-openjdk/bin/java
2022.05.02 12:18:50 WARN app[][o.s.a.p.AbstractManagedProcess] Process exited with exit value [ElasticSearch]: 1
2022.05.02 12:18:50 INFO app[][o.s.a.SchedulerImpl] Waiting for Elasticsearch to be up and running
2022.05.02 12:18:50 INFO app[][o.s.a.SchedulerImpl] Process[ElasticSearch] is stopped
2022.05.02 12:18:50 INFO app[][o.s.a.SchedulerImpl] SonarQube is stopped
[1]: https://github.com/SonarSource/docker-sonarqube/issues/493
and to solve this I have to upgrade Docker, as mentioned in [1]. I am open to any other way of solving the issue.
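For the broader worry about other people's containers during the upgrade, two things that may help are checking each container's restart policy and enabling Docker's live restore, which keeps containers running while the daemon itself is restarted. A sketch, assuming a systemd-based Linux host and the default /etc/docker/daemon.json location:
# List running containers and their restart policies (these determine what comes back automatically)
docker inspect --format '{{.Name}} {{.HostConfig.RestartPolicy.Name}}' $(docker ps -q)
# Keep containers alive across a daemon restart: add {"live-restore": true} to /etc/docker/daemon.json,
# then ask the daemon to reload its configuration
sudo systemctl reload docker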

nginx permission denied accessing puma socket that does exist in the correct location

On a DigitalOcean droplet running Ubuntu 21.10 (impish) I am deploying a bare-bones Rails 7.0.0.alpha2 application to production. I am setting up nginx as the reverse proxy server to communicate with Puma acting as the Rails server.
I wish to run Puma as a service using systemctl, without sudo/root privileges. To this effect I have a Puma service set up in the user's home folder at ~/.config/systemd/user; the service is enabled and runs as I would expect it to.
systemctl status --user puma_master_cms_production
reports the following
● puma_master_cms_production.service - Puma HTTP Server for master_cms (production)
Loaded: loaded (/home/comtechmaster/.config/systemd/user/puma_master_cms_production.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2021-11-18 22:31:02 UTC; 1h 18min ago
Main PID: 1577 (ruby)
Tasks: 10 (limit: 2338)
Memory: 125.1M
CPU: 2.873s
CGroup: /user.slice/user-1000.slice/user@1000.service/app.slice/puma_master_cms_production.service
└─1577 puma 5.5.2 (unix:///home/comtechmaster/apps/master_cms/shared/tmp/sockets/puma_master_cms_production.sock)
Nov 18 22:31:02 master-cms systemd[749]: Started Puma HTTP Server for master_cms (production).
The rails production.log is empty.
The puma error log shows the following
cat log/puma_error.log
=== puma startup: 2021-11-18 22:31:05 +0000 ===
The pid files exist in the application root's shared/tmp/pids folder
ls tmp/pids
puma.pid puma.state
and the socket that nginx needs (but cannot connect to, due to permission denied) exists
ls -l ~/apps/master_cms/shared/tmp/sockets/
total 0
srwxrwxrwx 1 comtechmaster comtechmaster 0 Nov 18 22:31 puma_master_cms_production.sock
nginx is up and running and providing a
502 bad gateway
response. The nginx error log reports the following error
2021/11/18 23:18:43 [crit] 1500#1500: *25 connect() to unix:/home/comtechmaster/apps/master_cms/shared/tmp/sockets/puma_master_cms_production.sock failed (13: Permission denied) while connecting to upstream, client: 86.160.191.54, server: 159.65.50.229, request: "GET / HTTP/2.0", upstream: "http://unix:/home/comtechmaster/apps/master_cms/shared/tmp/sockets/puma_master_cms_production.sock:/500.html"
sudo nginx -t reports the following
sudo nginx -t
nginx: [warn] could not build optimal proxy_headers_hash, you should increase either proxy_headers_hash_max_size: 512 or proxy_headers_hash_bucket_size: 64; ignoring proxy_headers_hash_bucket_size
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
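As an aside, the proxy_headers_hash warning in that output is unrelated to the 502; if you want to silence it, the two sizes can be raised in the http block of nginx.conf. A sketch with illustrative values:
http {
    proxy_headers_hash_max_size 1024;
    proxy_headers_hash_bucket_size 128;
    # ... rest of the existing http configuration ...
}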
Just to be pedantic, both an ls and a sudo ls of the path reported in the error show
ls /home/comtechmaster/apps/master_cms/shared/tmp/sockets/
puma_master_cms_production.sock
as expected, so I am stumped as to why nginx, started as root with sudo service nginx start, is being denied access to a socket that exists and is owned by the local user rather than root.
I expect the solution is going to be something totally obvious, but I cannot see what.
This problem ended up being related to the permissions on the user's home folder, and specifically to the way Ubuntu 20.10 sets home-directory permissions differently from previous versions of Ubuntu, or at least a difference in the way the DigitalOcean setup scripts behave.
This was resolved with a simple chmod o=rx run from /home against the user folder concerned, e.g.
cd /home
chmod o=rx the_home_folder_for_user
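When verifying a setup like this, it also helps to confirm that every directory above the socket is traversable by the nginx worker user. A sketch, assuming the workers run as www-data (adjust the user and path to your own system):
# Show owner and permissions for every path component leading to the socket
namei -l /home/comtechmaster/apps/master_cms/shared/tmp/sockets/puma_master_cms_production.sock
# Check whether the nginx worker user can actually reach the socket directory
sudo -u www-data ls /home/comtechmaster/apps/master_cms/shared/tmp/sockets/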

SonarQube docker keeps stopping

I am testing out SonarQube locally on my machine using Docker; however, the container keeps stopping and I am not sure why. I am using a Mac, and I am not sure whether the Java version affects SonarQube, but I am running Java 11 on my machine.
These are the logs I am getting:
2021.07.22 16:49:46 INFO app[][o.s.a.AppFileSystem] Cleaning or creating temp directory /opt/sonarqube/temp
2021.07.22 16:49:46 INFO app[][o.s.a.es.EsSettings] Elasticsearch listening on [HTTP: 127.0.0.1:9001, TCP: 127.0.0.1:39173]
2021.07.22 16:49:46 INFO app[][o.s.a.ProcessLauncherImpl] Launch process[[key='es', ipcIndex=1, logFilenamePrefix=es]] from [/opt/sonarqube/elasticsearch]: /opt/sonarqube/elasticsearch/bin/elasticsearch
2021.07.22 16:49:47 INFO app[][o.s.a.SchedulerImpl] Waiting for Elasticsearch to be up and running
2021.07.22 16:49:47 WARN app[][o.s.a.p.AbstractManagedProcess] Process exited with exit value [es]: 1
2021.07.22 16:49:47 INFO app[][o.s.a.SchedulerImpl] Process[es] is stopped
2021.07.22 16:49:47 INFO app[][o.s.a.SchedulerImpl] SonarQube is stopped
This is the command I used to run the docker container
docker run --name sonarqube --restart always -p 9000:9000 -d sonarqube
What am I missing?
I updated the version of Docker on my machine to the latest (I had been skipping updates for almost a year) and it worked.
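If anyone else hits the same symptom and a Docker update alone does not help: the embedded Elasticsearch process often exits like this when the host's vm.max_map_count is too low. A sketch of the usual checks, assuming a Linux Docker host and the image's default log directory (on Docker Desktop for Mac the sysctl lives inside the Docker VM):
# Read the Elasticsearch log inside the container for the real failure reason
docker logs sonarqube
docker exec sonarqube cat /opt/sonarqube/logs/es.log
# Elasticsearch expects vm.max_map_count to be at least 262144
sysctl vm.max_map_count
sudo sysctl -w vm.max_map_count=262144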

Port error when setting up Dev mode of Hyperledger Fabric

I'm setting up the development environment following the instructions on Hyperledger fabric's official website:
https://hyperledger-fabric.readthedocs.io/en/latest/peer-chaincode-devmode.html
I have started the orderer successfully using:
ORDERER_GENERAL_GENESISPROFILE=SampleDevModeSolo orderer
This command didn't work at first, but it worked after I cd'd into fabric/sampleconfig:
2020-12-21 11:23:15.084 CST [orderer.common.server] Main -> INFO 009 Starting orderer: Version: 2.3.0 Commit SHA: dc2e59b3c Go version: go1.15.6 OS/Arch: darwin/amd64
2020-12-21 11:23:15.084 CST [orderer.common.server] Main -> INFO 00a Beginning to serve requests
but when I start the peer using:
export PATH=$(pwd)/build/bin:$PATH
export FABRIC_CFG_PATH=$(pwd)/sampleconfig
export FABRIC_LOGGING_SPEC=chaincode=debug
export CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052
peer node start --peer-chaincodedev=true
An error is reported:
2020-12-21 11:25:13.047 CST [nodeCmd] serve -> INFO 001 Starting peer: Version: 2.3.0 Commit SHA: dc2e59b3c Go version: go1.15.6 OS/Arch: darwin/amd64 Chaincode: Base Docker Label: org.hyperledger.fabric Docker Namespace: hyperledger
2020-12-21 11:25:13.048 CST [peer] getLocalAddress -> INFO 002 Auto-detected peer address: 10.200.83.208:7051
2020-12-21 11:25:13.048 CST [peer] getLocalAddress -> INFO 003 Host is 0.0.0.0 , falling back to auto-detected address: 10.200.83.208:7051
Error: failed to initialize operations subsystem: listen tcp 127.0.0.1:9443: bind: address already in use
This is the error:
Error: failed to initialize operations subsystem: listen tcp 127.0.0.1:9443: bind: address already in use
I looked into this issue, and it seems to happen because the peer node is using the same port, 9443, as the orderer node for the same service. How can I get the two nodes running separately? Docker seems to be running fine as well.
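Before changing any configuration, it is worth confirming which process already holds the port; a quick check with standard tools (a sketch, expected to show the orderer here):
# Show the process currently listening on 9443
lsof -nP -i :9443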
If you look at your error, it is easy to follow:
Error: failed to initialize operations subsystem: listen tcp 127.0.0.1:9443: bind: address already in use
It says that port 9443 is already in use.
It seems that you are not running the orderer and peer as separate containers on a Docker-based virtual network, but directly on the host PC.
This leads to a conflict, with two servers requesting the same port 9443 on your machine.
Looking at the fabric-2.3/sampleconfig configuration below, you can see that port 9443 is assigned to both servers. Assigning one of them a different port solves this.
fabric-2.3/sampleconfig/orderer.yaml (configuration of the orderer):
# orderer.yaml
...
Admin:
    # host and port for the admin server
    ListenAddress: 127.0.0.1:9443
...
fabric-2.3/sampleconfig/core.yaml (configuration of the peer):
# core.yaml
...
operations:
    # host and port for the operations server
    # listenAddress: 127.0.0.1:9443
    listenAddress: 127.0.0.1:10443
...
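As an alternative to editing the sample YAML, these keys can also be overridden with environment variables (Fabric maps nested config keys to upper-cased, underscore-joined variables with the ORDERER_/CORE_ prefixes); a sketch for this dev-mode setup:
# Move the orderer's admin endpoint off 9443 before starting the orderer
export ORDERER_ADMIN_LISTENADDRESS=127.0.0.1:9444
# ...or move the peer's operations endpoint instead, before starting the peer
export CORE_OPERATIONS_LISTENADDRESS=127.0.0.1:10443
peer node start --peer-chaincodedev=true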
This is not a direct answer to the port mapping / collision issue, but we've had great success using the new Kubernetes Test Network as a development platform running on a local system with a virtual Kubernetes cluster running in KIND (Kubernetes in Docker).
In this mode, applications can be developed using the Gateway client (exposed via a port forward or ingress), and smart contracts running as a service can be launched either in the cluster or on the local host OS, in a container, as a binary, or under a debugger.
The documentation for the development setup is still sparse, but we'd love to hear feedback on the overall approach, as it offers an exponentially better experience for working with a test network in a development context. In general the process of "port juggling" with Compose is no longer relevant when working on a local Kubernetes cluster. In this mode, you can run services on the host network, instructing peers/orderers/etc. to connect to the remote process running on the host OS.
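For reference, a minimal sketch of standing up such a local cluster with KIND (the Fabric test-network scripts themselves come from the fabric-samples repository and are not shown here):
# Create a local Kubernetes cluster running inside Docker
kind create cluster --name fabric-dev
# Confirm the cluster is reachable before deploying the test network
kubectl cluster-info --context kind-fabric-dev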

Fedora + Apache + Phusion passenger + Rails shows default apache page (always)

Even though there is a huge amount of information about this subject, I'm still stuck on the default Apache page being shown by the server. The worst part is that everything seems to be working properly.
I've installed everything using dnf, even the Ruby gems.
I'm using
Fedora 22 Server minimal installation 4.2.6-200.fc22.x86_64
Apache 2.4.17
Phusion passenger 4.0.53
Packages that I've installed
# dnf install nodejs ruby rubygem-rails ruby-devel rubygem-json rubygem-debug_inspector rubygem-byebug rubygem-sqlite3 httpd mod_passenger
Passenger config file (edited after @bobomoreno's answer):
$ cat /etc/httpd/conf.d/passenger.conf
<IfModule mod_passenger.c>
   PassengerRoot /usr/share/passenger//phusion_passenger/locations.ini
   PassengerRuby /usr/bin/ruby
   PassengerEnable on
   <VirtualHost *:80>
      ServerName 10.10.15.219
      ServerAdmin los_true@gmail.com
      DocumentRoot /var/www/html/los_true/public
      ErrorLog /var/log/httpd/tsc-error.log
      CustomLog /var/log/httpd/tsc-access.log common
      RackEnv development
      <Directory /var/www/html/los_true/public>
         Allow from all
         Options -MultiViews
         Require all granted
      </Directory>
   </VirtualHost>
</IfModule>
Firewall working fine
# firewall-cmd --list-all
public (default, active)
interfaces: eth0
sources:
services: dhcpv6-client http https mdns ssh
ports:
masquerade: no
forward-ports:
icmp-blocks:
rich rules:
SELinux is permissive (until everything else works fine; then I will change it to enforcing)
$ sestatus
SELinux status: enabled
SELinuxfs mount: /sys/fs/selinux
SELinux root directory: /etc/selinux
Loaded policy name: targeted
Current mode: permissive
Mode from config file: enforcing
Policy MLS status: enabled
Policy deny_unknown status: allowed
Max kernel policy version: 29
Apache server running without errors
$ systemctl status -l httpd
● httpd.service - The Apache HTTP Server
Loaded: loaded (/usr/lib/systemd/system/httpd.service; disabled; vendor preset: disabled)
Active: active (running) since sáb 2015-11-28 20:41:05 CLT; 24min ago
Main PID: 25649 (httpd)
Status: "Total requests: 1; Idle/Busy workers 100/0;Requests/sec: 0.00068; Bytes served/sec: 3 B/sec"
CGroup: /system.slice/httpd.service
├─25649 /usr/sbin/httpd -DFOREGROUND
├─25650 /usr/libexec/nss_pcache 1802248 off /etc/httpd/alias
├─25671 PassengerWatchdog
├─25674 PassengerHelperAgent
├─25679 PassengerLoggingAgent
├─25689 /usr/sbin/httpd -DFOREGROUND
├─25690 /usr/sbin/httpd -DFOREGROUND
├─25691 /usr/sbin/httpd -DFOREGROUND
├─25692 /usr/sbin/httpd -DFOREGROUND
├─25693 /usr/sbin/httpd -DFOREGROUND
└─25705 /usr/sbin/httpd -DFOREGROUND
nov 28 20:41:04 ip210.15.priv.inf.utfsm.cl systemd[1]: Starting The Apache HTTP Server...
nov 28 20:41:05 ip210.15.priv.inf.utfsm.cl systemd[1]: Started The Apache HTTP Server.
Passenger started and running with no errors listed
# cat /var/log/httpd/error_log | grep 'Phusion\|Passenger'
[ 2015-11-28 21:43:34.3687 26203/7f416653f740 agents/HelperAgent/Main.cpp:650 ]: PassengerHelperAgent online, listening at unix:/tmp/passenger.1.0.26198/generation-0/request
[ 2015-11-28 21:43:34.3807 26209/7ff9a135b840 agents/LoggingAgent/Main.cpp:321 ]: PassengerLoggingAgent online, listening at unix:/tmp/passenger.1.0.26198/generation-0/logging
[ 2015-11-28 21:43:34.3817 26200/7fdd4c2bf740 agents/Watchdog/Main.cpp:728 ]: All Phusion Passenger agents started!
[ 2015-11-28 21:43:34.4808 26223/7fb79dc8d740 agents/HelperAgent/Main.cpp:650 ]: PassengerHelperAgent online, listening at unix:/tmp/passenger.1.0.26198/generation-1/request
[ 2015-11-28 21:43:34.4912 26229/7f643095a840 agents/LoggingAgent/Main.cpp:321 ]: PassengerLoggingAgent online, listening at unix:/tmp/passenger.1.0.26198/generation-1/logging
[ 2015-11-28 21:43:34.4916 26220/7efd30072740 agents/Watchdog/Main.cpp:728 ]: All Phusion Passenger agents started!
[Sat Nov 28 21:43:34.498645 2015] [mpm_prefork:notice] [pid 26198] AH00163: Apache/2.4.17 (Fedora) mod_auth_gssapi/1.3.0 mod_nss/2.4.16 NSS/3.19.3 Basic ECC mod_wsgi/4.4.8 Python/2.7.10 Phusion_Passenger/4.0.53 configured -- resuming normal operations
# cat /var/log/httpd/tsc-error.log
#
Rails application working (it was tested on a local machine and it worked fine)
$ cd /var/www/html/los_true
$ ./bin/bundle install
Using rake 10.4.2
Using i18n 0.7.0
...
Using web-console 2.2.1
Your bundle is complete!
Use `bundle show [gemname]` to see where a bundled gem is installed.
$ ./bin/rake db:migrate
$
So I truly have no idea what I'm doing wrong. There is only one weird thing about Passenger: it says it is not running, but the public folder is served fine (I can see the *.html files located in the public folder if I put them in the URL, http://10.10.15.219/404.html for example).
$ passenger-status
ERROR: Phusion Passenger doesn't seem to be running.
So please, if anyone has any idea what the problem is here, please help me :c because I really don't know what else to do.
Looks like you are missing
passenger_enabled on;
in your virtual host definition
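A note for anyone comparing this against the config above: passenger_enabled on; is the nginx spelling of that option, while the Apache module's directive is PassengerEnabled; the PassengerEnable line shown earlier would normally be rejected by Apache as an unknown directive, so it is worth double-checking the exact spelling in the file. The Apache form would look like this (a sketch, placed in the virtual host or directory block):
# Apache (mod_passenger) equivalent of the suggested nginx option
PassengerEnabled on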
I finally found the problem in this link :'D
It turned out that the autoindex module was the source of evil. If you comment out the line
...
LoadModule autoindex_module modules/mod_autoindex.so
...
in /etc/httpd/conf.modules.d/00-base.conf and remove (or rename) the file autoindex.conf
# mv /etc/httpd/conf.d/autoindex.conf /etc/httpd/conf.d/autoindex.conf.bak
you will get the Rails views working :')
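For completeness, here is that fix sketched as shell commands; the sed expression and the config-test/restart step are my own way of applying it, so adapt (and back up) as needed:
# Comment out the autoindex module, disable its default config, then test and restart Apache
sudo sed -i 's|^LoadModule autoindex_module|#LoadModule autoindex_module|' /etc/httpd/conf.modules.d/00-base.conf
sudo mv /etc/httpd/conf.d/autoindex.conf /etc/httpd/conf.d/autoindex.conf.bak
sudo apachectl configtest && sudo systemctl restart httpd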
