I had to force a restart of my Linux machine, and after it came back up, nothing related to my MongoDB installation is functioning properly.
My rails app, using Mongoid, is giving this error:
Could not connect to any secondary or primary nodes for replica set <Moped::Cluster nodes=[<Moped::Node resolved_address="127.0.0.1:27017">]>
on attempting to load a page and a similar error in the rails console.
Everything was running smoothly before and I am not sure how to right this ship.
I generally get this error when the mongo daemon is not running. Try running something like this:
sudo mongod --fork --logpath /var/log/mongodb.log --logappend
The method used to automatically start on system boot will vary depending on your OS. What flavor of Linux do you run?
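For reference, on a distro that uses systemd, enabling the daemon at boot looks something like this (a sketch; the unit may be named mongod or mongodb depending on how the package was installed):

```shell
# Find out what the MongoDB unit is actually called on this machine.
systemctl list-unit-files | grep -i mongo

# Enable it at boot and start it now (assuming the unit is named "mongod").
sudo systemctl enable mongod
sudo systemctl start mongod
sudo systemctl status mongod   # confirm it came up
```

On older Ubuntu releases that use Upstart/SysV init, the package's init script is managed with `sudo service mongodb start` instead.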
I don't know if this is the right way to do it, but it has always worked for me:
rm /data/db/mongod.lock
mongod --dbpath /data/db --repair
mongod --dbpath /data/db
I had a working MySQL install and I tried to migrate it to MariaDB. I also have Docker, which seems to block /usr/sbin/mysqld via AppArmor.
To be clear, I’m not using Docker currently (but I’d like to not have to remove it since I will be using it soon).
The problem I’m having is when I’m trying to load my project, served by good old Apache, in the browser. I’m getting a SQLSTATE[HY000] [2002] No such file or directory error message.
The workaround right now is to force AppArmor to unblock mysqld:
sudo apparmor_parser -v -R /etc/apparmor.d/usr.sbin.mysqld && sudo systemctl restart mariadb
This works, but I have to redo it after every system boot.
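If the root cause can't be fixed, one way to at least automate the workaround (a sketch, assuming systemd; the unit name is my own invention) is a oneshot service that replays it at boot:

```ini
# Save as /etc/systemd/system/unblock-mysqld-apparmor.service (hypothetical name),
# then enable it with: sudo systemctl enable unblock-mysqld-apparmor.service
[Unit]
Description=Remove the AppArmor profile confining mysqld
After=apparmor.service
Before=mariadb.service

[Service]
Type=oneshot
ExecStart=/usr/sbin/apparmor_parser -v -R /etc/apparmor.d/usr.sbin.mysqld

[Install]
WantedBy=multi-user.target
```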
aa-status clearly shows who the culprit is:
4 processes are in enforce mode.
...
/usr/sbin/mysqld (2960) docker-default
After running the workaround above, this line disappears from aa-status and everything works perfectly.
How can I permanently disable this "protection" from the docker-default profile?
So I just need some direction on understanding a Postgres installation better, because clearly I only know enough to be dangerous.
I had an app that was my own project, with a Postgres DB installed via Homebrew.
Then I started collaborating on a project with other people.
There was some difficulty getting my existing Postgres install to work with the new project, so I installed the Postgres app with the GUI interface to start/stop the db.
That new project is finished, and I wanted to get back to work on my other project.
When I started the first app up, it couldn't find a db. I tried dropping the db and recreating it, but when I run the migrations it says the tables already exist.
What can I do to get around this?
A good approach would be to use Docker so that everyone has the same environment: you, across your machines, as well as your colleagues and collaborators. You could have many containers running (each exposing a different port for Postgres, e.g. 5433), and when a project finishes you just get rid of its container.
This approach saves you the overhead of maintaining multiple databases locally, or of depending on a running Postgres process (which I sometimes forget to start).
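A sketch of that per-project setup, using the official postgres image (the container name, password, and host port are placeholders):

```shell
# Project A gets its own throwaway Postgres on host port 5433.
docker run -d --name project-a-db -p 5433:5432 \
  -e POSTGRES_PASSWORD=secret postgres:9.6

# When the project is finished, throw the container (and its data) away.
docker stop project-a-db && docker rm project-a-db
```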
If you need to solve this locally, try starting your postgres service, connecting to your localhost instance and running:
> psql
psql (9.6.2)
Type "help" for help.
> \l
and you should see all of your databases, which you can then debug. Perhaps dropping and recreating the conflicting database (if you don't need your local data) could help.
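For a Rails app, the drop-and-recreate route might look like this (a sketch; the database name is a placeholder, and note that this destroys your local data):

```shell
# Rebuild the development database from scratch.
dropdb myapp_development        # hypothetical db name; check \l for the real one
createdb myapp_development
bundle exec rake db:migrate     # or db:schema:load for a fresh schema
```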
I'm trying to develop a Rails project without having to install Ruby and all the server tools on my local Windows machine. I've created my Docker containers (Ruby and MySQL) and installed the Docker plugin on RubyMine 2016.1, but it doesn't seem very practical for daily development use, I mean the develop, run, debug cycle just before deployment to the test server.
Am I missing something to make this workflow possible? Or isn't Docker suggested for this step in the development process?
I don't develop under Windows, but here's how I handle this problem under Mac OS X. First off, my Rails project has a Guardfile set up that launches rails (guard-rails gem) and also manages running my tests whenever I make changes (guard-minitest gem). That's important to get fast turnaround time in development.
I launch docker daemonized, mounting a local directory into the docker image, with an exposed port 3000, running a never-ending command.
docker run -d -v {local Rails root}:/home/{railsapp} -p 3000:3000 {image id} tail -f /dev/null
I do this so I can connect to it with an arbitrary number of shells, to do any activities I can only do locally.
Ruby 2.2.5, Rails 5, and a bunch of Unix developer tools (heroku toolbelt, gcc et al.) are installed in the container. I don't set up a separate database container, as I use SQLite3 for development and pg for production (heroku). Eventually when my database use gets more complicated, I'll need to set that up, but until then it works very well to get off the ground.
I point RubyMine to the local rails root. With this, any changes are immediately reflected in the container. In another command line, I spin up ($ is host, # is container):
$ docker exec -it {container id} /bin/bash
# cd /home/{railsapp}
# bundle install
# bundle exec rake db:migrate
# bundle exec guard
bundle install is only when I've made Gemfile changes or the first time.
bundle exec rake db:migrate is only when I've made DB changes or the first time.
At this point I typically have a Rails instance that I can browse to at localhost:3000, and the RubyMine project is 'synchronized' to the Docker image. I then mostly make my changes in RubyMine, ignoring messages about not having various gems installed, etc., and focus on keeping my tests running cleanly as I develop.
For handling a console when I get exceptions, I need to add:
config.web_console.whitelisted_ips = ['172.16.0.0/12', '192.168.0.0/16']
to config/environments/development.rb in order for it to allow a web debug console when exceptions happen in development. (The 192.168/* might not be necessary in all cases, but some folks have run into problems that require it.)
I still can't debug using RubyMine, but I don't miss it anywhere near as much as I thought I would, especially with web consoles being available. Plus it allows me to run all the cool tools completely in the development environment, and not pollute my host system at all.
I spent a day or so trying to get the remote debugger to work, but the core problem appears to be that (the way ruby-debug works) you need to allow the debugging process (in the docker container) to 'reach out' to the host's port to connect and send debugging information. Unfortunately binding ports puts them 'in use', and so you can't create a 'listen only' connection from the host/RubyMine to a specific container port. I believe it's just a limitation of Docker at present, and a change in either the way Docker handles networking, or in the way the ruby-debug-ide command handles transmitting debugging information would help fix it.
The upshot of this approach is that it allows me very fast turnaround time for testing, and equally fast turnaround time for in-browser development. This is optimal for new app development, but might not work as well if you have a large, old, crufty, and less-tested codebase.
Most/all of these capabilities should be present in the Windows version of Docker, as well.
I am trying to use dokku-alt (https://github.com/dokku-alt/dokku-alt) to provision a VPS for a Rails App (Ruby 2.1.3, Rails 4.1.2), but my app uses a Postgres extension (pg_trgm).
Unfortunately dokku-alt doesn't currently support the admin_console command, unlike, for example, https://github.com/jeffutter/dokku-postgresql-plugin.
Does anyone know of a way to get into the postgres console using the root or postgres user given that Docker is being used?
Yeah, you can do it like so:
docker ps
That should give you a list of containers and their IDs; find the one that is running your Postgres instance (there could be one shared by all apps, or one per app). Note that docker run would start a brand-new container from an image; to get a psql console inside the already-running container, use docker exec:
docker exec -it <container_id> psql -U postgres
If you're using anything close to the latest version of dokku-alt, there is an admin console command.
I recently ran into a problem where I had to grant super user access to one of our apps.
What I did was
dokku postgresql:console:admin <<EOF
ALTER USER dbusername WITH SUPERUSER;
EOF
Running dokku postgresql:console:admin should give you direct access into the main psql console.
I configured my Rails 3 production app about 6 months ago on Ubuntu running nginx/passenger, using git and Capistrano for deployment.
Fast forward to last week - The data center I was using (DigitalOcean NYC) actually had a complete power failure (and the battery backup didn't work) - resulting in my server shutting completely down.
I did not set passenger or mysql to start on reboot, so when the hardware server restarted, my app was still down.
I really did not know much about what I was doing at the time when I launched it (since it was my first production server that I have worked with), and I followed a guide to get it up and running.
When I attempted to get the app running again, I managed to start mysqld no problem - but for the life of me couldn't remember how to get nginx/passenger running again.
Since time was of the essence (my client needed the app up and running ASAP), I ended up getting the app back up and running by navigating to my app directory (/current) and using the command:
passenger start -p 80 -e production
This did the trick but actually started Passenger Standalone. It seems to work fine (it is not a big or complicated app at all, maybe a few users at a time). I can navigate back to my directory and start and stop it using the above command (and passenger stop -p 80).
However, now my Capistrano deploy (cap deploy) no longer restarts the server on a deploy (it tries to run touch tmp/restart.txt), which does nothing even if I run it manually, since the server is running Passenger Standalone.
I can't remember how I got the server up and running in the first place because it was so long ago. I'm not using RVM - just the version of Ruby running directly on the server.
Does anyone know the correct command to start nginx/passenger (not standalone) on Ubuntu?
And even a step further - how I can get mysqld and nginx/passenger to automatically load on a hard server restart?
Capistrano does not restart the server because it actually creates a new app directory (/u/apps/.../releases/xxx), while Passenger Standalone is still running in the old app directory (/u/apps/.../releases/yyy). Therefore touching restart.txt doesn't work. Instead, you have to restart Passenger Standalone like this:
cd /path-to-previous-release && passenger stop -p 80
cd /path-to-current-release && passenger start -p 80 -e production
You mentioned you want to start nginx/passenger. I assume that you mean the Nginx mode. Here's what you need to do:
Install Phusion Passenger using the official Passenger APT repository.
There is no step 2. If you did step 1, then the Ubuntu package will automatically configure Nginx to start at system boot, which will automatically start Passenger as well.
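In case it helps, step 1 looks roughly like this on Ubuntu (a sketch based on Passenger's APT instructions; double-check the current repository key and substitute your release's codename for trusty):

```shell
# Add Phusion's signing key and the Passenger APT repository.
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 561F9B9CAC40B2F7
sudo apt-get install -y apt-transport-https ca-certificates
echo 'deb https://oss-binaries.phusionpassenger.com/apt/passenger trusty main' \
  | sudo tee /etc/apt/sources.list.d/passenger.list
sudo apt-get update
sudo apt-get install -y nginx-extras passenger

# Uncomment the passenger_* lines in /etc/nginx/nginx.conf, then:
sudo service nginx restart
```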
I don't understand why you ask how you can get mysqld to automatically start on a hard server restart. Mysqld is always started during system boot. You don't have to do anything.