OpenStack instance task_state stuck in powering-off, how to reset it to None

An instance was shut down from inside the operating system, so its power_state is Shutdown, but its vm_state is still active because the shutdown did not go through the API. Why does the task_state stay powering-off, even though no other operation was ever executed via the API?
Possibly the compute node was short on memory at the time and this instance was killed; the compute node now has enough memory.
How can I set the task_state back to None without editing the database directly?
openstack server show
| OS-EXT-STS:power_state | Shutdown |
| OS-EXT-STS:task_state | powering-off |
| OS-EXT-STS:vm_state | active |
openstack server reboot --hard
Cannot 'reboot' instance instance-ID while it is in task_state powering-off (HTTP 409) (Request-ID: request-ID)
openstack server set --property OS-EXT-STS:task_state=None <instance> does not work; the openstack server show output does not change.

I solved it by restarting the nova-compute service: docker restart nova_compute.
Reference: https://bugs.launchpad.net/nova/+bug/1593186
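For reference, the restart-based workaround looks like the sketch below. If you have admin credentials, the reset-state API is another option; the container name assumes a kolla-style deployment as in the question, and the instance ID is a placeholder:

```shell
# On the compute node hosting the instance: restarting nova-compute
# makes it re-sync instance state and clear the stale task_state.
docker restart nova_compute

# Alternative (admin only): ask the API to reset the state directly;
# this sets vm_state to active and task_state back to None.
openstack server set --state active <instance-ID>

# Verify the task_state afterwards:
openstack server show <instance-ID> -c OS-EXT-STS:task_state
```

Either way, no manual database edit is required.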


How to specify my connection to neo4j as a leader role in K8s

I deployed a 3-core (by default) Neo4j in K8s via this helm chart. I'm quite a newbie to neo4j.
I'm using neo4jrb in a Ruby on Rails project.
When I try to connect to the Neo4j service to write data, I often (not always) get this error:
Neo4j::Core::CypherSession::CypherError: Cypher error:
Neo.ClientError.Cluster.NotALeader: No write operations are allowed directly on this database. Writes must pass through the leader. The role of this server is: FOLLOWER
I read this article, Querying Neo4j Clusters. Then I realized there are one leader and two follower cores created by the helm chart. In cypher-shell, when I run
CALL dbms.cluster.overview() YIELD id, role RETURN id, role
I got
+-----------------------------------------------------+
| id | role |
+-----------------------------------------------------+
| "acce2b2c-53ae-498c-a49b-84f42897445e" | "FOLLOWER" |
| "03cabb09-de1a-40cc-b8b0-bb02981cf551" | "FOLLOWER" |
| "1aa96add-f5cd-43a1-9fc6-2a5360668bb7" | "LEADER" |
+-----------------------------------------------------+
So I should connect to the LEADER when I write data. And I know a core can't be the leader permanently: if the current leader goes down, a follower will become the new leader.
I thought bolt+routing to the causal cluster might be an easy way to fix my issue, but when I went back to the Ruby client, I found it doesn't support bolt+routing for now.
What should I do now? I can't configure a LoadBalancer, but I do have access to write an Ingress config.
I'm not sure that neo4jrb supports bolt+routing.
You could try to use the java driver from graalvm's truffleruby, see:
https://github.com/michael-simons/neo4j-graalvm-polyglot-examples
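As a stopgap until the driver supports bolt+routing, you can discover which core pod is currently the leader and point the app at that pod directly. A rough sketch, assuming the default helm chart's StatefulSet pod naming (neo4j-core-0..2) and that you have the neo4j credentials:

```shell
# Ask any core member which server holds the LEADER role.
kubectl exec neo4j-core-0 -- \
  cypher-shell -u neo4j -p <password> \
  "CALL dbms.cluster.overview() YIELD addresses, role RETURN addresses, role"

# Then route bolt traffic to the leader pod, e.g. for local testing:
kubectl port-forward neo4j-core-2 7687:7687
```

Note that the leader can change after a failover, so anything built on this needs to re-check the role (or retry on the NotALeader error) rather than assume it is fixed.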

Containers always reach the same backend on replicated services

I'm deploying a 3 tier application using docker swarm, similar to:
                 --> BACK01-01 --        --> BACK02-01
                 |              |        |
FRONTEND-01 ----+--> BACK01-02 --+-----+--> BACK02-02
                 |              |        |
                 --> BACK01-03 --        --> BACK02-03
  Frontend        Back Service 01         Back Service 02
This is a 3 node swarm, where each *-01 service task is running on the manager-node, each *-02 service task is running on worker-node-01 and each *-03 service task is running on worker-node-02
All communication between services uses gRPC, creating a new connection per request.
All I want is to distribute the load over every replica.
Sequentially, I made requests to the frontend, which makes a request to back01, which makes a request to back02. But after 50 requests, all inner requests were made to back01-03 and back02-03, and the others were never reached.
I'm using the default service configuration, and the stack was deployed through the Portainer GUI.
Is there anything that I'm not understanding?
P.S.: I had tested service load balancing with a simple HTTP and gRPC server returning the container ID, with 4 replicas on one node, and it returned each one sequentially.
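One thing worth checking is Swarm's load-balancing mode. By default a service gets a virtual IP (VIP) and IPVS balances per connection; if connections end up reused or tracked, traffic can pin to one task. A common workaround is to switch the backend services to DNS round-robin and let each lookup return a different task IP. A sketch of a stack-file fragment, with service and image names assumed from the question's setup:

```yaml
# stack file fragment -- a sketch, not the poster's actual file
services:
  back01:
    image: myorg/back01        # assumed image name
    deploy:
      replicas: 3
      endpoint_mode: dnsrr     # DNS round-robin instead of the default VIP
```

With dnsrr the client must not cache the resolved address across requests, otherwise it will still pin to one replica.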

Does Jhipster support sticky sessions with docker stack or HAproxy?

Running a sample JHipster app (found at: https://github.com/ehcache/ehcache3-samples/tree/master/fullstack), I deployed it to a Docker swarm (swarm mode) with docker stack; it worked fine and I could log in.
But when I started scaling the web app, I found that the session was lost whenever my request hit a container other than the first one.
Actually, I even saw in the logs :
worker2 | org.springframework.security.web.authentication.rememberme.CookieTheftException: Invalid remember-me token (Series/token) mismatch. Implies previous cookie theft attack.
worker2 | at org.terracotta.demo.security.CustomPersistentRememberMeServices.getPersistentToken(CustomPersistentRememberMeServices.java:173)
worker2 | at org.terracotta.demo.security.CustomPersistentRememberMeServices.processAutoLoginCookie(CustomPersistentRememberMeServices.java:83)
worker2 | at org.springframework.security.web.authentication.rememberme.AbstractRememberMeServices.autoLogin(AbstractRememberMeServices.java:130)
while I was trying to log in again...
Is there something I need to set up to have the load balancer treat the session as unique?
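Docker Swarm's ingress routing mesh has no session affinity, so stickiness has to come from a load balancer placed in front of it (or from sharing the session store between containers). With HAProxy, sticky sessions are usually done with a backend cookie; a minimal sketch, where the backend name, addresses, and ports are assumptions:

```
backend jhipster_app
    balance roundrobin
    # Insert a SERVERID cookie so each client sticks to one container
    cookie SERVERID insert indirect nocache
    server app1 10.0.0.11:8080 check cookie app1
    server app2 10.0.0.12:8080 check cookie app2
```

The alternative is to make the containers share session state (e.g. a clustered cache or Spring Session backed by a shared store), so any replica can serve any request.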

Difference in web server paradigms (Apache vs. Reverse proxy + Web server)

I have started developing with Ruby on Rails, and I have encountered what has been described as a different paradigm when it comes to web servers.
Old paradigm (apache)
=====================
+--- web process fork
|
[requests] -----+--- web process fork
|
+--- web process fork
New paradigm (Puma + Nginx)
===========================
+---> web app process 1 --> threads
|
[requests] <--> [reverse proxy server] --+---> web app process 2 --> threads
|
+---> web app process 3 --> threads
The article I was reading didn't try to explain the differences between these two paradigms, or the benefits of one over the other. That is what I am interested in.
What is the point of this new paradigm used on Ruby on Rails apps? What advantages has over the old HTTP daemon way? What are its disadvantages?
The application server architecture has the following traits:
Across the board, I’m in favor of running Web applications as app servers and reverse-proxying to them. It takes minimal effort to set this up, and the benefits are plenty: you can manage your web server and app separately, you can run as many or few app processes on as many machines as you want without needing more web servers, you can run the app as a different user with zero effort, you can switch web servers, you can take down the app without touching the web server, you can do seamless deployment by just switching where a fifo points, etc. Welding your application to your web server is absurd and there’s no good reason to do it any more.
compared to the classic model:
PHP is naturally tied to Apache. Running it separately, or with any other webserver, requires just as much mucking around (possibly more) as deploying any other language.
php.ini applies to every PHP application run anywhere. There is only one php.ini file, and it applies globally; if you’re on a shared server and need to change it, or if you run two applications that need different settings, you’re out of luck; you have to apply the union of all necessary settings and pare them down from inside the apps themselves using ini_set or in Apache’s configuration file or in .htaccess. If you can. Also wow that is a lot of places you need to check to figure out how a setting is getting its value.
Similarly, there is no easy way to “insulate” a PHP application and its dependencies from the rest of a system. Running two applications that require different versions of a library, or even PHP itself? Start by building a second copy of Apache.
The “bunch of files” approach, besides making routing a huge pain in the ass, also means you have to carefully whitelist or blacklist what stuff is actually available, because your URL hierarchy is also your entire code tree. Configuration files and other “partials” need C-like guards to prevent them from being loaded directly. Version control noise (e.g., .svn) needs protecting. With mod_php, everything on your filesystem is a potential entry point; with an app server, there’s only one entry point, and only the URL controls whether it’s invoked.
You can’t seamlessly upgrade a bunch of files that run CGI-style, unless you want crashes and undefined behavior as users hit your site halfway through the upgrade.
Other paradigms include:
The web application is a web server, and can accept HTTP requests directly. Examples of this model:
Almost all Node.js and Meteor JS web applications (https://lookback.io).
The Trac bug tracking software, running in its standalone server (https://trac.webkit.org).
The web application does not speak HTTP directly, but is connected directly to the web server through some communication adapter. CGI, FastCGI and SCGI are good examples of this. (web.py, flask, sinatra)
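For the reverse-proxy paradigm described above, the web server side is typically only a few lines of configuration forwarding requests to the app server's socket or port. An illustrative Nginx fragment in front of a Puma app (socket path and port are placeholders):

```
# nginx.conf fragment: reverse proxy to a Puma app server
upstream app {
    server unix:///var/run/puma.sock;   # or 127.0.0.1:3000
}
server {
    listen 80;
    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://app;
    }
}
```

This is where the listed benefits come from: the upstream block is the only coupling between the web server and the app, so either side can be swapped, restarted, or scaled independently.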
On Heroku, apps are completely self-contained and do not rely on runtime injection of a webserver into the execution environment to create a web-facing service. Each web process simply binds to a port, and listens for requests coming in on that port. The port to bind to is assigned by Heroku as the PORT environment variable.
References
PHP: a fractal of bad design
Phusion Passenger: Design and Architecture
Optimizing PHP Application Concurrency | Heroku Dev Center
Runtime Principles | Heroku Dev Center

Not able to start ActiveMQ as a service under Windows Server 2008 R2

We are trying to launch ActiveMQ as a service on a Windows Server 2008 R2 server, but we get a "1067 error" and in the log file we see this:
FATAL | wrapper | 2012/03/12 16:34:54 | Critical error: wait for JVM process failed
STATUS | wrapper | 2012/03/12 16:41:00 | --> Wrapper Started as Service
STATUS | wrapper | 2012/03/12 16:41:00 | Launching a JVM...
FATAL | wrapper | 2012/03/12 16:41:00 | Unable to execute Java command. Accesso negato. (0x5)
FATAL | wrapper | 2012/03/12 16:41:00 | "C:\Program Files (x86)\Java\
"accesso negato" means "access is denied" (Italian). The Java path seems correct. We tried all these combinations:
C:\Program Files (x86)\Java\jre6\bin
C:\Program Files (x86)\Java\jre7\bin
C:\Program Files (x86)\Java\jre7\jdk1.7.0_03\jre\bin
folders in which the java*.exe executables are present (we have installed JRE6, JRE7 and JDK; before trying JRE7/JDK1.7, we had installed just JRE6).
All the access privileges seem to be assigned to the folders, and in the properties of the ActiveMQ service we granted Administrator rights (maximum rights). In a forum we found a suggestion to comment out the "jetty" section in the configuration file, but that didn't solve anything.
Is there something wrong with the way we installed/launched ActiveMQ, or is there an incompatibility with our environment/operating system?
I just ran into this issue. The issue for me was that RabbitMQ was also running as a service and binding to the same port. Stopping RabbitMQ allowed ActiveMQ to start. I could have also changed ports.
Run Wrapper.exe in a command window to see the error.
If it is a port issue you can run the command netstat -a -b to see what is binding to the port.
Here are answers to how to change your port if needed:
How can I change default port number of Activemq
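For completeness, the broker's listener port is set on the transport connector in conf/activemq.xml; a sketch of moving OpenWire off the default 61616 (the alternative port is arbitrary):

```xml
<!-- conf/activemq.xml: change the OpenWire listener port -->
<transportConnectors>
    <transportConnector name="openwire" uri="tcp://0.0.0.0:61617"/>
</transportConnectors>
```

Remember to update the clients' broker URLs to match the new port.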
Try to run the bat file (\bin\win32\activemq.bat) as administrator.
If this works, it means you have not correctly installed the ActiveMQ service to be run under an account that has the administrator privileges.
You need the 64-bit wrapper (by default ActiveMQ ships with the 32-bit one only).
A few options there:
Download the latest 64-bit wrapper available on the website
Launch the process via jsvc (see this blog post describing the process)
Update to the latest ActiveMQ 5.6, which supports this out of the box (the previous 2 entries apply only to releases before 5.6)
You can try the items below:
Check whether the port is blocked by another service; you can change the port in activemq.xml if it is.
Check that JAVA_HOME is set and accessible to the user.
You can delete the KahaDB folder (activeMQ\data\kahadb) and install again.
This worked for me
