I am trying to create a thinpool for my Docker containers. I am following Docker's guide here:
https://docs.docker.com/engine/userguide/storagedriver/device-mapper-driver/#configure-direct-lvm-mode-for-production
It says "It assumes that you have a spare block device at /dev/xvdf with enough free space to complete the task."
I don't have a device at /dev/xvdf. How can I create one?
Basically, a block device can be one of the following:
Hard drive
Flash drive
DVD drive
Blu-ray drive
etc.
In this case, you need to attach a second hard drive to your server.
Or, if you are using Vagrant/VirtualBox for development, you can add a new hard disk in the Oracle VM VirtualBox Manager:
Open the Settings page of the box you are working with
Select Storage from the left menu
Click the icon to add a new hard disk
Click Add Hard Disk
Click Create new disk
Select VMDK (Virtual Machine Disk)
Select Dynamically allocated
Give the disk a name and specify its size
Finally, click Create
Restart the box
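If you prefer the command line over the GUI, the steps above can be sketched with VBoxManage (run on the host with the VM powered off; the VM name "mybox", the controller name "SATA", and the 10 GB size are assumptions, so check yours with VBoxManage showvminfo):

```shell
# Create a 10 GB dynamically allocated VMDK disk
VBoxManage createmedium disk --filename extra-disk.vmdk --size 10240 --format VMDK

# Attach it to the VM's SATA controller on a free port
# ("mybox" and "SATA" are placeholders for your VM and controller names)
VBoxManage storageattach mybox --storagectl "SATA" \
  --port 1 --device 0 --type hdd --medium extra-disk.vmdk
```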
You now have a block device to work with. To list the available block devices, run lsblk.
In my case, I added two hard disks, and they show up as /dev/sdb and /dev/sdc.
You can now use one of these disks to create a physical volume.
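From there, the direct-lvm steps in the Docker guide can be pointed at the new disk instead of /dev/xvdf. A sketch of the guide's commands, assuming the new disk is /dev/sdb as in my case (these require root and will destroy any data on the disk):

```shell
# Turn the new disk into an LVM physical volume, then a volume group named "docker"
sudo pvcreate /dev/sdb
sudo vgcreate docker /dev/sdb

# Create the data and metadata logical volumes, then convert them to a thin pool
sudo lvcreate --wipesignatures y -n thinpool docker -l 95%VG
sudo lvcreate --wipesignatures y -n thinpoolmeta docker -l 1%VG
sudo lvconvert -y --zero n -c 512K \
  --thinpool docker/thinpool --poolmetadata docker/thinpoolmeta
```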
Related
I'm building a custom Linux distribution for an embedded target. I want to be able to build exactly the same image again 20 years in the future, and I want to build without internet access. Thus, I need to store all the downloaded source code.
How can I achieve that? Is it enough to build once on a host with internet access and then store the resulting build/downloads directory?
I guess I also need to store the build host (e.g. store, or be able to re-create, the Docker container). Maybe the most practical way is to store the contents of the Docker image after building? Perhaps excluding, e.g., everything under build/tmp and build/cache?
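A sketch of the archiving I have in mind (the build/downloads path follows the Yocto convention, and the Docker image name is made up):

```shell
# Stand-in for the directory the build system downloads sources into
mkdir -p build/downloads
echo "source tarball" > build/downloads/pkg-1.0.tar.gz

# Archive the downloads and record a checksum so the archive can be verified later
tar -czf downloads-archive.tar.gz build/downloads
sha256sum downloads-archive.tar.gz > downloads-archive.tar.gz.sha256
sha256sum -c downloads-archive.tar.gz.sha256

# For the build host itself, exporting the container image is one option, e.g.:
#   docker save my-build-image | gzip > build-image.tar.gz
```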
I accidentally ran rm -rf in my terminal on my Mac and lost everything, including an app project I was working on. Is it possible to somehow transfer the app project that I had deployed to my iPhone back to my computer? Just so I can see what I can salvage?
You can't get the Xcode app project back from the app that was installed on your phone.
If you're using source control (such as git), you may be able to go back to a prior version. To check, go to the command line and run
% git log
See if it shows any history.
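If the history is there, the working copy can be restored from the last commit. A minimal illustration in a throwaway repo (the file name and commit message are made up):

```shell
# Throwaway repo showing how git history lets you restore a deleted file
cd "$(mktemp -d)" && git init -q demo && cd demo
git config user.email "you@example.com" && git config user.name "You"

echo "important code" > App.swift
git add App.swift && git commit -qm "initial commit"

rm App.swift                # simulate the accidental deletion
git checkout -- App.swift   # restore the file from the last commit
git log --oneline           # the history is still intact
```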
Otherwise, if you stored your project in your Documents or Desktop folder and have those folders synchronized with iCloud, you might be able to get it back by going to iCloud.com and checking whether you can download it. Because these folders are synchronized, do this right away: the deletion will eventually synchronize too. This is why a cloud-synced folder isn't considered a backup.
If you have Time Machine configured on your Mac, check whether a version is available there. Open Time Machine from Launchpad -> Other -> Time Machine, or click the Time Machine icon in the menu bar at the upper right of your screen (it looks like a clock with an arrow circling it), then click "Enter Time Machine". From there you can go to the folder and browse previous versions by time.
Lastly, check any other backup solution you may have, such as Backblaze, which is a cloud backup service; its menu-bar icon on your Mac looks like a flame.
If none of these work, you're out of luck (sorry!). For the future: learn git and use it with Bitbucket or GitHub, set up Time Machine, and also set up a cloud backup such as Backblaze. I use all of those. Hopefully one of these works for you!
To debug remotely with Delphi, now that PAServer is normally used, one usually follows these preliminary steps, as outlined in the online DocWiki:
Create a remote profile inside your IDE; in my case I'm calling it win7vm. This contains the IP address of the remote system, a few options, and the PAServer connection password, but as far as I can see it doesn't contain any context information like "I want to use the following remote folders".
The remote folder that contains my target executable is usually a subdirectory underneath the scratch directory. If I configure the main scratch directory to be c:\scratch, my remote profile is named win7vm, and my project is named project1, then the IDE and PAServer will compile and deliver my executable to c:\scratch\win7vm\project1 and run it from there.
What I actually want is to have my executable delivered to and run directly in c:\scratch. I have a gigabyte of supporting files that all need to be in the same directory as my main executable, and I don't want eight copies of those supporting files. So, can this be done? Can I make PAServer NOT create a sub-folder with the name of my project and another for the name of my session? If so, is this done inside the paserver.config file, on my client side, or somewhere else?
Update: I tried to change the Remote Path in Project -> Deployment, but a relative path does not work here. For example, changing it to ..\..\ (and the working directory in Project Options to ..\..\) still delivers my executable to c:\scratch\win7vm\project1 instead of directly into c:\scratch\.
Update 2: It appears you can't do what I want, which is to have the scratch directory be the main folder with no per-profile and per-project sub-folders. If you don't like working in a scratch sub-folder, you should instead set absolute remote paths in Deployment and an absolute working folder in your project's Run options, and turn off restricted mode. However, that means the entire VM or machine you are remotely debugging can be manipulated by PAServer, which is hardly ideal. I believe a fixed root scratch folder would be safer and more flexible, so I'm leaving this question open in the hope of finding an effective way to achieve safe but flexible remote debugging.
I am writing a Delphi program that monitors a shared folder on my computer to which other people on the network have read-write access. I can log changes and info about the changed files, but how can I find out which computer made the changes?
Is it possible to find the computer name or IP?
Note: I am using ReadDirectoryChangesW.
If you poll the files opened remotely on your PC and match them against the monitored directory, you might catch which computer opened a file. To avoid missing short remote accesses to your files, hooking might be a better idea. An example utility that shows remotely opened files is psfile from Sysinternals PsTools.
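Windows itself can also report which users have files open and which computers have sessions to your shares, which your program could poll. A sketch using built-in commands (run on the sharing machine with administrator rights; note that openfiles reports the user, while net session reports the remote computer name / IP):

```shell
:: List files opened over the network, with the "Accessed By" user for each
openfiles /query /v

:: List incoming sessions, including the remote computer name / IP
net session
```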
I have a Mac and iOS application that is sharing data using iCloud, via a single "shoebox" file.
Most of the time, changes are properly synchronized in an efficient and prompt manner. However, every once in a while (particularly, right now) changes that I make on one device simply sit there.
I have made changes on my Mac to the shared data file, and the data has been saved to disk. However, I don't know whether the system is failing to upload the data to iCloud or the iOS device is failing to check for new data; either way, I'm twiddling my thumbs.
The 5KB file below is the one that should be changing. No matter how many changes I make in the Mac app, every once in a while during testing, iCloud will just stop syncing changes. If I walk away for 20 minutes and come back, it might start up again.
Further: If I run the Mac application in Xcode and keep an eye on the same file, even though I make changes to the file and can confirm that the file on disk (in the Finder/Terminal) is actually changing, the iCloud panel in Xcode does not pick up these changes very quickly either:
Note the same 5KB file has changed on my local filesystem on my Mac (at 9:01), but iCloud just isn't picking it up. There are actual content changes in this file, not just a modification date change.
So, I would like to find a way to either:
Trigger the sync programmatically, or even using Xcode. I know that the iCloud sync can be triggered using the simulator, but this only works when testing on iOS, and I much prefer to do my testing on the actual hardware anyway. Or,
Determine who (or what) is "at fault" for the data not being shared. I have followed the iCloud documentation from the beginning, ensuring that I'm using coordinated writes to save changes, etc. It's just a very intermittent thing where iCloud will doze off, which makes testing very frustrating.
I don't think you can trigger synchronization of selected files, but you can use the brctl tool (on OS X) to see what is going on under the hood and diagnose the problem.
The command-line tool:
$ brctl
For example, use it like this:
brctl log --wait --shorten
or
brctl diagnose
I realized that when you migrate any store into your existing iCloud store, iCloud adds that data and forces a sync with the version in iCloud. So I migrated an empty store into my existing iCloud store, and that forced the sync!