I need to increase the allocated base memory in VirtualBox to boost performance. I can change it just fine with the machine turned off, but as soon as I start the machine, it reverts to the old memory. This happens whether I try to increase or decrease the allocated memory; it always goes back to the original value (2048) as soon as I start the machine. It is as if the memory setting is being overwritten. Can anyone help?
Ahhhaa..
I don't know if this is a general solution, but in my case, because this machine is managed by Vagrant, the solution to my problem turned out to be in the Vagrantfile containing the settings for this machine.
In my Vagrantfile I found this:
config.vm.provider "virtualbox" do |vb|
  vb.gui = true
  vb.memory = "2048"
end
Once the memory line was removed, I was able to change the memory from the settings as desired.
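Alternatively, instead of deleting the line, you can set the value you actually want directly in the Vagrantfile, since Vagrant reapplies this setting to the VM on every vagrant up (4096 below is just an example value):

config.vm.provider "virtualbox" do |vb|
  vb.gui = true
  vb.memory = "4096"  # example value; Vagrant pushes this to VirtualBox on each boot
end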
A while back I created a MariaDB instance inside a Docker container on a machine running Ubuntu. I've since learned that I'll need to update some settings to keep things running smoothly, but when I created the image, I did not specify any .cnf volumes. How do I update/create a .cnf file for this image? I'm a complete newb when it comes to Docker, so please spoon-feed me.
I've tried accessing the file from within the image, but there are no text editors.
The defaults of MariaDB work pretty much out of the box (container) for small instances. You should only need to change settings when problems occur.
If you have spare memory you can increase your innodb_buffer_pool_size.
With the mariadb container, you don't need to edit the .cnf files; you can just add a few options on the command line per the docs (which you should definitely read).
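For example (a sketch; the container name and password are placeholders, and the exact option names are in the MariaDB docs), options placed after the image name are passed straight through to the server process:

docker run --name some-mariadb -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mariadb --innodb-buffer-pool-size=1G

Since a container's command line is fixed when it is created, applying this to an existing container means recreating it with the same data volume.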
I recommend using the defaults for a while, and if you encounter problems, post a new question on dba.stackexchange.com that includes your show global status output and specifics on the queries that are slow (show create table TBLNAME / explain QUERY).
We are running Docker on a VMware host running Windows 10.
Based on the first comment, I wonder if the issue isn't specifically the read portion; maybe something happening at the end of the load is preventing this exe from being cached in memory.
I am trying to change the static IP address of USB0 port of BeagleBone Black.
I know this question was asked previously (Changing the static IP of Beagle Bone Black USB0), but no answer was found, so I am asking again now to see if anyone has a solution.
I also found a solution by Eric Wong (http://ewong.me/changing-usb0-ip-address-on-the-beaglebone-black/), but it appears to apply only to older Debian images. The latest images have different file contents from what's described in that solution, so it does not work.
Steps I did:
step 1: I tried to change /etc/network/interfaces so that the default address is 192.168.8.2 instead of 192.168.7.2, like this:
iface usb0 inet static
    address 192.168.8.2
    netmask 255.255.255.0
    network 192.168.8.0
    gateway 192.168.8.1
step 2: Then I changed the contents of /etc/udhcpd.conf, replacing "192.168.7.1" with "192.168.8.1" in the two places it appears.
step 3: reboot
Bingo: I lost connectivity, and now I have to rewrite the Debian image onto the BeagleBone Black again, as I can no longer access it. So basically I am unable to reach it through either 192.168.7.2 or 192.168.8.2.
If anyone knows how to do this, it would be really helpful if you could share your thoughts.
Coming into this late; my network uses 192.168.6/24 and 192.168.7/24 internally, so the latest BB images didn't work for me at all.
First: there's no real substitute for a real serial connection via the J1 connector; a 3.3v USB serial doodad is cheap, and being able to watch the whole boot (and image flashing!) process from the very start is super helpful. Adafruit sells one that works great with BeagleBone: https://www.adafruit.com/product/954
Anyway, on Debian GNU/Linux 10 (Apr 2020 image), /etc/default/bb-boot contains:
...
USB_CONFIGURATION=enable
#It's assumed usb0 is always enabled, usb1 can be disabled...
USB0_SUBNET=192.168.7
USB0_ADDRESS=192.168.7.2
USB0_NETMASK=255.255.255.0
USB1_ENABLE=enable
USB1_SUBNET=192.168.6
USB1_ADDRESS=192.168.6.2
USB1_NETMASK=255.255.255.0
DNS_NAMESERVER=8.8.8.8
Adjust this to taste and reboot. In my case, I changed USB0 to the 192.168.70 subnet, and #commented out all the USB1 lines altogether.
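For illustration, my edited /etc/default/bb-boot ended up looking roughly like this (192.168.70 is just the subnet I picked; use anything that doesn't collide with your LAN):

USB_CONFIGURATION=enable
#It's assumed usb0 is always enabled, usb1 can be disabled...
USB0_SUBNET=192.168.70
USB0_ADDRESS=192.168.70.2
USB0_NETMASK=255.255.255.0
#USB1_ENABLE=enable
#USB1_SUBNET=192.168.6
#USB1_ADDRESS=192.168.6.2
#USB1_NETMASK=255.255.255.0
DNS_NAMESERVER=8.8.8.8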
Because I have to re-flash a bunch of boards periodically, I burned this into the SD card image itself, which saved me a lot of time later.
I'm trying to add Neo4j 3.0 to my tests for the neo4j gem and I'm having trouble with the server getting killed in a Travis CI container. Pre-3.0 works just fine, but when I use 3.0 it seems to get killed. There seems to be plenty of memory (when I run Neo4j locally it uses 300-400 MB). I get a warning from Neo4j saying:
WARNING: Max 30000 open files allowed, minimum of 40000 recommended. See the Neo4j manual.
That makes me think it's getting killed because of too many open files. I'm not sure if there's a way to raise the open-file limit in a Travis container, and I have a number of jobs, so I don't want to slow things down by running sudo: true. Did Neo4j 3.0 change to require more open files (the documentation doesn't seem to imply that it did)?
EDIT:
My .travis.yml file:
This is how I do it, and it works fine for me for 2.3 and 3.0, including a push to Docker Hub.
https://github.com/maxdemarzi/neo_travis
https://travis-ci.org/maxdemarzi/neo_travis
I think our memory allocation is messing things up. One thing that is unusual about your (Travis's) setup is that there is twice as much swap as RAM, and that the amount of memory reported as available is very large.
Try specifying the amount of memory in your config file. See http://neo4j.com/docs/operations-manual/current/#performance-tuning for more details, but essentially add these to your config.
In neo4j.conf:
dbms.memory.pagecache.size=1G
and in neo4j-wrapper.conf (the heap values here are in MB):
dbms.memory.heap.max_size=1000
dbms.memory.heap.initial_size=1000
The memory limits are set quite low to guarantee that Travis doesn't kill the process, and I suspect that the tests don't need much in terms of memory.
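If you're installing Neo4j inside the Travis build itself, one way to apply these settings (a sketch; the neo4j/ paths are an assumption about where your script unpacks the server, so adjust them to your layout) is to append them in .travis.yml before starting the server:

before_script:
  - echo "dbms.memory.pagecache.size=1G" >> neo4j/conf/neo4j.conf
  - echo "dbms.memory.heap.max_size=1000" >> neo4j/conf/neo4j-wrapper.conf
  - echo "dbms.memory.heap.initial_size=1000" >> neo4j/conf/neo4j-wrapper.conf
  - neo4j/bin/neo4j start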
I have the following setup:
Code on my local machine (OS X) shared as a Samba share
A Ubuntu VM running within Parallels, mounts the share
Running Rails 2.1 (via Mongrel, WEBrick, or Passenger) in development mode, if I make changes to my views they don't update without me having to kick the server. I've tried switching to an NFS share instead, but I get the same problem. I would assume it was some sort of Samba cache issue, but autotest picks up changes to files instantly.
Note:
This is not render caching or template caching and config.action_view.cache_template_loading is not defined in the development config.
Checking out the codebase directly on the VM doesn't exhibit the same issue (but I'd prefer not to do this)
Editing the view file direct on the VM does not resolve this issue.
Touching the view file after alterations does cause the changes to appear in the browser.
I also noticed that the clock in the VM was an hour fast, changing that to the correct time made no difference.
I had the exact same problem while developing on andLinux.
My andLinux's clock was about three hours ahead of the host Windows, and setting the correct time (actually, a minute or so behind) has solved the problem.
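This fits with how Rails decides whether to recompile a template in development: it compares the view file's mtime against the time the template was last compiled. Since the host is the one writing files onto the share, their mtimes come from the host clock; if the VM clock runs ahead of it, a freshly saved view can still carry an mtime older than the recorded compile time, so Rails never reloads it. A quick way to keep the guest clock honest (a sketch, assuming ntpdate is installed in the VM):

# Sync the guest clock against a public NTP pool so mtimes written by the
# host over Samba/NFS never look "in the past" relative to this machine
sudo ntpdate pool.ntp.org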
Actually, setting the correct date & time in the VM does seem to have solved the problem (after I restarted mongrel) -- going to do a little more digging.