I have a Siemens IoT device on which I installed Node-RED. I'm following a project that is based on receiving data and saving it to a database via Node-RED, and it must be active 24 hours a day, every day.
I wanted to know if there is some kind of auto-run so that, in case the IoT device goes down, everything starts again automatically.
Hope this might be helpful; I am using this on my RPi.
sudo systemctl enable nodered.service
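If you don't want to wait for a reboot, you can also start the service immediately (nodered.service is the unit installed by the Raspberry Pi install script; your device may name it differently):
sudo systemctl start nodered.service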
https://nodered.org/docs/hardware/raspberrypi
If it is possible to install pm2, then you can try this too.
sudo npm install -g pm2
pm2 start /usr/bin/node-red -- -v
https://nodered.org/docs/getting-started/running
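Note that pm2 on its own only supervises Node-RED while pm2 itself is running. To survive a reboot, the pm2 docs also have you register pm2 with the init system and save the process list, roughly:
pm2 startup   # prints a command that registers pm2 with your init system; run that command with sudo
pm2 save      # remember the running processes (including node-red) so pm2 resurrects them at boot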
I've followed the directions here, and everything works well until I restart my computer. After restarting, it seems like the docker daemon loses track of the Google credentials.
$ docker run --log-driver=gcplogs ...
fails with:
docker: Error response from daemon: failed to initialize logging driver: google: could not find default credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
ERRO[0000] error waiting for container: context canceled
This is strange to me, because running $ systemctl show --property=Environment docker returns the value in my systemd configuration:
Environment=GOOGLE_APPLICATION_CREDENTIALS=/etc/path/to/application_default_credentials.json
If I $ sudo systemctl restart docker, then docker runs successfully and logs are sent to Stackdriver. But I want this docker image to run automatically on startup, and restarting docker with sudo gets in the way.
Is there a way to initialize the docker daemon with the necessary environment variables, so gcplogs is ready on boot without restarting docker?
I had two versions of docker installed -- one through adding docker's repo to apt, and one through snap. Running
sudo systemctl list-unit-files | grep docker | grep enabled
showed two installations of docker:
docker.service enabled
snap.docker.dockerd.service enabled
Having two docker installations was causing problems for startup. I removed the snap installation, rebooted, and everything now works.
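For reference, removing the snap copy looked roughly like this (the snap is named docker, matching the snap.docker.dockerd.service unit above):
sudo snap remove docker   # drop the snap-packaged daemon, keeping the apt one
sudo reboot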
I think you could try editing the systemd unit dependencies and ordering so that docker.service starts after google-accounts-daemon.service.
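A minimal sketch of such a drop-in (the file name wait-for-google.conf is my own invention; any .conf file under docker.service.d works):
# /etc/systemd/system/docker.service.d/wait-for-google.conf
[Unit]
After=google-accounts-daemon.service
Wants=google-accounts-daemon.service
Run sudo systemctl daemon-reload afterwards so systemd picks up the drop-in.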
You can see all the Google services on the VM with:
sudo systemctl list-unit-files | grep google | grep enabled
And you will see
google-accounts-daemon.service enabled
google-clock-skew-daemon.service enabled
google-instance-setup.service enabled
google-network-daemon.service enabled
google-shutdown-scripts.service enabled
google-startup-scripts.service enabled
On Windows Subsystem for Linux running Ubuntu 16.04, I've installed InfluxDB 1.4.2 according to the Influx documentation. I can't run it as a service or with systemctl, probably because WSL doesn't support that (see GitHub issues 994 and 1579), so neither of these work:
$ sudo service influxdb start
influxdb: unrecognized service
$ sudo systemctl start influxdb
Failed to connect to bus: No such file or directory
If I run $ sudo influxd, Influx starts, but then crashes with the message
run: open server: open tsdb store: cannot allocate memory
How do I fix the "cannot allocate memory" error?
On Win10 Spring 2018 Update, I ran the following:
sudo apt install influxdb influxdb-client -y
Installed fine.
As per the docs, I started the service using:
sudo service influxdb start
Started fine.
Let's connect, examine, and create a database.
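Such a session would look roughly like this (mydb is a stand-in database name):
influx                    # CLI client, connects to localhost:8086 by default
> SHOW DATABASES
> CREATE DATABASE mydb
> SHOW DATABASES          # mydb should now be listed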
Please let me know if I've done anything wrong in this repro; otherwise, it looks like this issue has been resolved.
I just hit this problem when installing in WSL, but with systemd installed. Installing the influxdb package registered a systemd unit, so I was unable to start it using init.d. I solved this using this guide. The guide's link to the init.sh script is dead, so I searched for an older version and found this.
Steps to get InfluxDB working in WSL (at least when systemd is installed):
Install influxdb using sudo apt install influxdb
Copy the content of this file into a new file at location /etc/init.d/influxdb
You can now start influxdb using sudo service influxdb start.
For me it showed an error message while starting, but it still started correctly.
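One step that is easy to miss with a hand-copied init script: it must be executable.
sudo chmod +x /etc/init.d/influxdb
sudo service influxdb start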
I am using the latest Ubuntu build 15.10 and have gone through the install of ElasticSearch here: https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-service.html
However, even after executing the commands that add the service to the startup process:
sudo update-rc.d elasticsearch defaults 95 10
sudo /etc/init.d/elasticsearch start
rebooting the computer and then going to localhost:9200 gives a 404.
And every single morning I run sudo /etc/init.d/elasticsearch start and then sudo update-rc.d elasticsearch defaults 95 10, in hopes that tomorrow will be a different day, only to find my machine in the exact same state as yesterday.
On a side note, my machine at work uses the same version of Ubuntu and the steps described above worked on the first try.
If anyone has overcome this issue, your insight would be very appreciated!
Thank you!
Ubuntu, since version 15.04, has used systemd by default instead of the older upstart for handling services and init scripts. I think you need to initialize elasticsearch differently, as described in the ES docs.
Something like:
sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable elasticsearch.service
sudo /bin/systemctl start elasticsearch.service
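Two standard checks are useful after the next reboot:
sudo systemctl status elasticsearch.service   # should report active (running)
curl localhost:9200                           # should return a JSON blob with node and cluster info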
I have installed Aegir on my Ubuntu 14.04 (inside a Docker container) following the manual installation guide.
But when I execute sudo /etc/init.d/hosting-queued start, it replies Starting Aegir queue daemon... ok, but nothing happens: the daemon is not launched (it doesn't appear in the process list).
If I execute sudo /etc/init.d/hosting-queued status, it shows: Aegir queue daemon is not running.
I've checked inside that script and saw that it runs su - aegir -- /usr/local/bin/drush --quiet @hostmaster hosting-queued, so I tried to execute drush @hostmaster hosting-queued as the aegir user, and it gave me:
The drush command 'hosting-queued' could not be found. Run `drush cache-clear drush` to clear the commandfile cache if you have installed new extensions. [error]
And even if I run drush cache-clear drush, I still get this message...
Have I missed something?
I opened an issue on the project.
I've found a workaround which is not explained in the install documentation:
As the aegir user, enable the hosting_queued module:
drush @hostmaster pm-enable -y hosting_queued
As the aegir user, launch the service manually:
drush @hostmaster hosting-queued &
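To confirm the daemon is actually alive afterwards, check the process list; the brackets keep grep from matching its own process:
ps aux | grep [h]osting-queued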
I'm getting the same thing every time I try to run busybox, either with docker on Fedora 20 or running boot2docker in VirtualBox:
[me@localhost ~]$ docker run -it busybox
Unable to find image 'busybox:latest' locally
Pulling repository busybox
FATA[0105] Get https://index.docker.io/v1/repositories/library/busybox/images: read tcp 162.242.195.84:443: i/o timeout
I can open https://index.docker.io/v1/repositories/library/busybox/images in a browser, sometimes even without a VPN tunnel, so I tried setting the proxy in the network settings to the one Astrill provides when using VPN sharing, but it always times out.
I'm currently in China, where there basically is no Internet due to the firewall. npm, git and wget seem to use the Astrill proxy in the terminal (when it is set in the network settings of Fedora 20), but somehow I either can't get the docker daemon to use it or something else is wrong.
It seems the answer was not so complicated, according to the following documentation (I had read it before, but thought setting the proxy in the network settings UI would take care of it).
So I added the following to /etc/systemd/system/docker.service.d/http-proxy.conf (after creating the docker.service.d directory and the conf file):
[Service]
Environment="HTTP_PROXY=http://localhost:3213/"
Environment="HTTPS_PROXY=http://localhost:3213/"
In the Astrill app (I'm sure other providers' applications offer something similar) there is an option for VPN sharing, which will create a proxy; it can be found under Settings => VPN Sharing.
For git, npm and wget, setting the proxy in the UI (gnome-control-center => Network => Network Proxy) is enough, but when using sudo it's better to do sudo su, set the environment variables, and then run the command that needs the proxy, for example:
sudo su
export http_proxy=http://localhost:3213/
export ftp_proxy=http://localhost:3213/
export all_proxy=socks://localhost:3213/
export https_proxy=http://localhost:3213/
export no_proxy=localhost,127.0.0.0/8,::1
export NO_PROXY="/var/run/docker.sock"
npm install -g ...
I'd like to update the solution for people who still encounter this issue today.
I don't know the details, but when using the WireGuard protocol on Astrill, docker build and docker run will use the VPN. If for some reason it doesn't work, try restarting the docker service with sudo service docker restart while the VPN is active.
Hope it helps; I just wasted an hour trying to figure out why it stopped working.