Laptop to desktop RAM adapter reliability

I recently came across an adapter that would allow me to use laptop memory in my desktop. See the item below:
http://www.amazon.co.uk/Laptop-Desktop-Adapter-Connector-Converter/dp/B009N7XX4Q/ref=sr_1_1?ie=UTF8&qid=1382361582&sr=8-1&keywords=Laptop+to+desktop+memory
Both the desktop and the laptop use DDR3.
My question is, are these adapters reliable?
I have 8 GB available and I was wondering if they could be put to use in my gaming rig.
The desktop is an i7 machine generally used for gaming and some basic development.

Judging by how it looks, the adapter should be reliable. There is not much to it: it simply extends the "mini" (SO-DIMM) module to the full-size DIMM form factor. You can draw an analogy with A-to-B USB adapter cables.
You should also check whether both RAM modules use the same frequency, and consider possible heat issues: you will have to cool the laptop memory more than if it were desktop-sized, because the same current flows through a smaller package than on desktop RAM modules. Then again, the extension board will absorb and disperse some of the heat, so unless you are running some really intensive RAM workloads you should be fine.
Do check the working frequency of both, though. If the laptop module is faster than the maximum your system supports, you won't get that faster performance; the module will work at the frequency of the system bus. If it is slower, the system bus will work at the module's frequency. For example, a DDR3-1600 SO-DIMM in a board that tops out at DDR3-1333 will simply run at 1333 MT/s.

To check whether the adapter will fit, use standard features of the module as a reference to calculate its width. Measure them in the product image, scale against a reference feature, and compare with your system. Use the contacts or the locking notches for the scaling, since their dimensions are standard across all modules. Or the module length...
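The scaling arithmetic is simple. A minimal sketch, assuming the standard 133.35 mm DDR3 DIMM length as the reference; the pixel values are made-up examples to be replaced with your own measurements from the photo:

```python
# Estimate a physical dimension from a product photo by scaling
# against a feature of known size. A standard DDR3 DIMM is
# 133.35 mm long (JEDEC); the pixel values below are made-up
# examples - replace them with your own measurements.
DIMM_LENGTH_MM = 133.35

dimm_length_px = 980.0    # measured module length in the photo
adapter_width_px = 64.0   # measured width of the adapter board

mm_per_px = DIMM_LENGTH_MM / dimm_length_px
print(f"Scale: {mm_per_px:.4f} mm per pixel")
print(f"Estimated adapter width: {adapter_width_px * mm_per_px:.1f} mm")
```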

Related

Comparison between USB and Mini PCIe Interfaces

I'm deciding between the MiniPCIe and USB accelerators for a home Linux CCTV project. The host has both USB3 and a MiniPCIe socket. The host's physical environment will range from an ambient 20C up to a potential 35C (during the summer).
I'm struggling to determine the pros and cons for each. I have gotten this far, although many are guesses:
USB:
Supports Windows and macOS as well as Linux
Appears to have greater mindshare/use/community support on the Internet
External so can be placed to optimise heat dissipation
Heatsink
Two manual performance modes, highest requires ambient temp of max 25C
Can use up to 4.5 W (900 mA @ 5 V)
Mini PCIe:
Cheaper (25%)
Lower power consumption (1.4W for 416 fps)
Automatic thermal throttling via driver
Relies on host system for active cooling
Will maintain max operation at 85C
There are probably many I've missed. In particular, I can't determine if there are any limitations on throughput/capacity using USB vs PCIe. If there is no difference, then I suspect the USB form factor is the better option, if only for the mindshare, although the power usage/heat generated may be a concern.
To whittle this down to an actual question: in what cases would the Mini PCIe interface be a preferred option to the USB one?
If you are looking for a plug-and-play solution, then I definitely suggest the USB Accelerator. Overall, as long as your host meets the system requirements, it will just work (maybe with some modifications to the standard Linux configs, like adding your user to the plugdev group, ...). The CCTV software side is then all up to you :)
The PCIe version sometimes needs extra work, like adding kernel arguments and modules to keep the PCIe device happy. If you are looking to launch a big product where volume is expected, then it is worth investigating, since it is cheaper and more compact. However, power usage is a must for consideration: the USB Accelerator can draw up to 900 mA, so that could play a factor.
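Whichever interface you pick, the application code on top is identical. As a quick sanity check that the device is detected: a minimal sketch, assuming these are Coral Edge TPU accelerators (which the description suggests) and that the pycoral library is installed:

```python
# List attached Edge TPUs; both USB- and PCIe-attached devices
# should show up here if the driver/runtime setup is correct.
from pycoral.utils.edgetpu import list_edge_tpus

devices = list_edge_tpus()
if not devices:
    print("No Edge TPU found: check cabling, drivers, udev/plugdev setup.")
for dev in devices:
    # Each entry is a dict with a 'type' (e.g. 'usb' or 'pci') and a 'path'.
    print(f"Found Edge TPU: type={dev['type']}, path={dev['path']}")
```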
May I ask which host you are trying to attach the accelerators to?

How to increase Google Cloud AI Notebooks GPU memory

I have a pretty big model I'm trying to run (30 GB of RAM minimum), but every time I start a new instance, I can adjust the CPU RAM but not the GPU's. Is there a way on Google's AI notebook service to increase the RAM for a GPU?
Thanks for the help.
In short: you can't. You might consider switching to Colab Pro, which offers, among other things, better GPUs:

With Colab Pro you get priority access to our fastest GPUs. For example, you may get access to T4 and P100 GPUs at times when non-subscribers get K80s. You also get priority access to TPUs. There are still usage limits in Colab Pro, though, and the types of GPUs and TPUs available in Colab Pro may vary over time.

In the free version of Colab there is very limited access to faster GPUs, and usage limits are much lower than they are in Colab Pro.
That being said, don't count on getting a best-in-class GPU all to yourself for ~10 USD/month. If you need a high-memory dedicated GPU, you will likely have to resort to a dedicated service. You should easily find services with 24 GB cards for less than 1 USD/hour.
Yes, you can create a personalized AI Notebook and also edit its hardware after creating it. If you still cannot change these settings, check carefully whether you are hitting the GPU quota limit.
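Either way, you can confirm how much memory the attached GPU actually has from inside the notebook. A minimal sketch, assuming nvidia-smi is on the PATH (it ships with the NVIDIA driver on these images):

```python
# Print each attached GPU's name and total memory via nvidia-smi.
import subprocess

result = subprocess.run(
    ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
for line in result.stdout.strip().splitlines():
    print(line)  # one line per GPU: "<name>, <total> MiB"
```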

Why would one choose many smaller machine types instead of fewer big machine types?

In a clustered high-performance computing framework such as Google Cloud Dataflow (or, for that matter, Apache Spark or Kubernetes clusters etc.), I would think that it's far more performant to have fewer really BIG machine types rather than many small machine types, right? As in, it's more performant to have 10 n1-highcpu-96 rather than, say, 120 n1-highcpu-8 machine types, because
the CPUs can use shared memory, which is way faster than network communication
if a single thread needs access to lots of memory for a single-threaded operation (e.g. sort), it has access to that greater memory on a BIG machine rather than a smaller one
And since the price is the same (eg 10 n1-highcpu-96 costs the same as 120 n1-highcpu-8 machine types), why would anyone opt for the smaller machine types?
As well, I have a hunch that with the n1-highcpu-96 machine type we'd occupy the whole host, so we wouldn't need to worry about competing demands on the host from another Google Cloud customer's VM (e.g. contention in the CPU caches or motherboard bandwidth etc.), right?
Finally, although I don't think the Google Compute Engine VMs report the "true" CPU topology of the host system, if we do choose the n1-highcpu-96 machine type, the reported topology is presumably a touch closer to the truth because the VM uses up the whole host, so any program that tries to take advantage of the topology (e.g. the NUMA-aware option in Java?) has a better chance of making the "right decisions".
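The price-parity premise is easy to sanity-check, since n1 pricing scales roughly linearly with vCPU count. A minimal sketch; the hourly rate is a made-up placeholder, not a real GCP price:

```python
# Both fleet shapes expose the same total vCPU count, so under
# linear per-vCPU pricing their hourly cost is identical.
PRICE_PER_VCPU_HOUR = 0.03  # hypothetical USD per vCPU-hour, not a real price

fleets = {"10 x n1-highcpu-96": (10, 96), "120 x n1-highcpu-8": (120, 8)}
for name, (nodes, vcpus) in fleets.items():
    total = nodes * vcpus
    print(f"{name}: {total} vCPUs, ~{total * PRICE_PER_VCPU_HOUR:.2f} USD/hour")
```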
Whether to choose many instances of a smaller machine type or a few instances of a big machine type depends on many factors.
The VM sizes differ not only in the number of cores and RAM, but also in network I/O performance.
Instances with small machine types are limited in CPU and I/O power and are inadequate for heavy workloads.
Also, if you are planning to grow and scale, it is better to design and develop your application to run across several instances. Having small VMs gives you a better chance of having them distributed across the physical servers in the datacenter that have the best resource situation at the time the machines are provisioned.
Having a large number of small instances also helps to isolate fault domains: if one of your small nodes crashes, that only affects a small number of processes, whereas if a large node crashes, multiple processes go down.
It also depends on the application you are running on your cluster and on the workload. I would also recommend going through this link to see the sizing recommendations for an instance.
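To put a number on the fault-domain point, a quick sketch of how much capacity a single node failure costs under each fleet shape:

```python
# Fraction of total cluster capacity lost when one node fails,
# assuming work is spread evenly across the nodes.
for name, nodes in {"10 x n1-highcpu-96": 10, "120 x n1-highcpu-8": 120}.items():
    print(f"{name}: one node failure removes {100.0 / nodes:.1f}% of capacity")
```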

Is it worth installing a Windows CE Compact Framework 3.5 application on a storage card?

Since our application keeps growing, we need more space on our Windows CE devices.
If I install the CF app in RAM on a Windows CE device, the app vanishes after a cold restart.
So I have used the simplest option: installing on the flash card. As I mentioned, running applications from the SD card is slow, and there are some serious demand-paging issues when apps run from persistent paths, aren't there? Is it worth installing it there? Will we get performance problems?
Should I use another solution, such as installing to RAM from the flash disk after a cold restart or fresh start (if that is possible)? And where can or should I store settings and log files? On the flash/SD card?
There's no "one size fits all" answer for this.
If you move the app from memory to storage you'll gain RAM. Maybe that boost in RAM will give the EE more heap space and thereby prevent GC thrashing. That would give you better perceived performance. But maybe it won't and it will just increase demand-paging for your app and hurt performance. Maybe you'll get a little of both and it's a wash.
How would you handle persistence to RAM? That depends on what your device supports for auto-running apps.
Where should you store settings and logs? Again, that depends on the device, the storage, the size, the frequency of access and loads of other things.
Basically, the answer to all of these will only be found by testing your actual app on your actual hardware. Try the different scenarios and collect metrics to see which performs better. That's the only "correct" answer.

DataSet size best practices - are there any general rules?

I'm working on a desktop application that will produce several in-memory datasets as an intermediary before being committed to a database.
Obviously I'm going to try to keep the size of these to a minimum, but are there any guidelines on thresholds I shouldn't cross for good functionality on an 'average' machine?
Thanks for any help.
There is no "average" machine. There is a wide range of still-in-use computers, including those that run DOS/Win3.1/Win9x and have less than 64MB of installed RAM.
If you don't set any minimum hardware requirements for your application, at least consider the oldest OS you're planning to support, and use that OS's official minimum hardware requirements to obtain a lower-bound assessment.
Generally, if your application is going to consume a considerable amount of RAM, you may want to let the user configure the upper bounds of the application's memory management mechanism.
That said, if you decide to dynamically manage the upper bounds based on realtime data, there are quite a few things you can do.
If you're developing a windows application, you can use WMI to get the system's total memory amount, and base your limitations on that value (say, use up to 5% of the total memory).
In .NET, if your data structures are complex and you find it hard to assess the amount of memory you consume, you can query the Garbage Collector for the amount of allocated memory using GC.GetTotalMemory(false), or use a System.Diagnostics.Process object.
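The same idea sketched in Python rather than the WMI/.NET calls named above (using the psutil package; the 5% cap mirrors the example figure):

```python
# Read total physical memory and derive an in-memory dataset
# budget from it (here 5% of total RAM, as in the example above).
import psutil

total = psutil.virtual_memory().total
budget = int(total * 0.05)

print(f"Total RAM:      {total / 2**20:.0f} MiB")
print(f"Dataset budget: {budget / 2**20:.0f} MiB")
```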
