Is there any way to lock a windows workstation, using a kernel level driver?
I need to do it in the kernel in order to get the strongest possible protection and make it hard to bypass.
Thanks
I understand that you think doing this in the kernel is better, but really, your best bet is simply to call the function designed for this from user mode: LockWorkStation.
What attack vector do you envision that running in the kernel will protect you against?
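For completeness, you don't even need to write code to try it out. From a PowerShell prompt, the well-known rundll32 shortcut calls straight into the user32.dll export (note this is a convenience trick, not a formally supported interface; in a real program you would call LockWorkStation from C/C++ via windows.h):

```shell
# Locks the current interactive session, same effect as pressing Win+L.
# rundll32 invokes the LockWorkStation export of user32.dll directly.
rundll32.exe user32.dll,LockWorkStation
```

If the session is locked this way, nothing running in the kernel was needed; the lock screen itself is enforced by winlogon on its own secure desktop.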
Related
I just updated my server and I am running a web service using Deeplearning4j. I wanted to check out different configurations for performance, to decide where to put future efforts. From what I know, the options are:
1. Native (CPU), which is my current default
2. CUDA (GPU): set the environment in code to use CUDA
3. GTX GPU specific
For #2 I am thinking I need to remove the ND4J native jars from the classpath, in addition to setting the environment in my Java code.
For #3 I am not sure whether that applies when I use my DNN model, or only during training?
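For #2, since the ND4J backend is picked up from the classpath, the Maven change I have in mind looks roughly like this (the group and artifact IDs are ND4J's published coordinates; the CUDA suffix and the version property are placeholders that have to match the installed CUDA toolkit and DL4J release):

```xml
<!-- Remove or comment out the CPU backend... -->
<!--
<dependency>
  <groupId>org.nd4j</groupId>
  <artifactId>nd4j-native-platform</artifactId>
  <version>${dl4j.version}</version>
</dependency>
-->
<!-- ...and put the CUDA backend on the classpath instead. -->
<dependency>
  <groupId>org.nd4j</groupId>
  <!-- the suffix names the CUDA toolkit version this backend was built for -->
  <artifactId>nd4j-cuda-10.2-platform</artifactId>
  <version>${dl4j.version}</version>
</dependency>
```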
Many Thanks for feedback!
After running some tests it seems there is no difference. However, my server upgrade, going from a 3rd-generation i7 to a 10th-generation one and from 800 MT/s to 2400 MT/s memory, seemed to have the biggest gain overall. PCIe is unchanged (still 3.0) and the GPU is still a GTX 1080. I have not tried overclocking the memory or the GTX 1080 yet. That would indicate the critical path is the memory or the GPU; I suspect memory is where the biggest gain is for my users.
With virtualization using VMs, we know that newer CPUs add extensions to support that kind of virtualization. I am wondering whether such extensions exist for containers, or whether there is any study of potential hardware support for containerization, even if by default it happens at the OS level.
Thank you.
Containers are not using hardware virtualization. Instead, they leverage built-in Linux functionality: namespaces and control groups.
Namespaces are provided by the Linux kernel to separate resources such as the process ID (PID) space,
and control groups allow fine-grained resource control, e.g. limiting CPU or memory usage for each container.
So there is little need for special hardware support to run containers in 64-bit Linux.
(Windows and Mac are a different story)
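You can see both mechanisms on any Linux box without any container tooling installed; a plain shell sketch:

```shell
# Each symlink under /proc/self/ns names one namespace this shell
# belongs to (pid, net, mnt, uts, ipc, user, ...). A container simply
# gets fresh copies of these instead of hardware-assisted isolation.
ls -l /proc/self/ns

# The control-group membership that enforces CPU/memory limits:
cat /proc/self/cgroup
```

Container runtimes like Docker just create new entries in both places for each container, which is why no CPU extension comparable to VT-x/AMD-V is required.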
I'm going to start the development of an application on a Zynq board. My task is basically to port an existing application running on a Microblaze on the dual core ARM.
What I'm wondering about is which OS to use on the new system, because I have no experience at all in this field.
It seems to me that there are four main approaches:
1) Petalinux (uses both cores)
2) Petalinux + FreeRTOS (uses both cores)
3) FreeRTOS (uses only one core)
4) Bare metal (uses only one core)
What my application has to do is to move a big amount of data between Ethernet and multiple custom links, so it has to serve a lot of interrupts and command a lot of DMA operations.
How much overhead does Petalinux introduce in interrupt servicing, compared to bare metal or FreeRTOS? Do you think that, for this kind of work, a single-core application running without any OS would be faster than, for example, a Petalinux application that carries the overhead of the OS (and of synchronization mechanisms like semaphores or mutexes)?
I know the question is imprecise and quite vague, but having no experience in the field I really need some initial hints.
Thank you.
As you say, this is too vague to give a considered answer because it really depends on your application (when does it not). If you need all the 'stuff' that is available for Linux and boot time is not an issue then go with that. If you need actual real time behaviour, fast boot time, simplicity, and don't need anything Linux specific, then FreeRTOS might be your best choice. There is a Zynq FreeRTOS TCP project that uses the BSD style sockets interface (like Linux) here: http://www.freertos.org/FreeRTOS-Plus/FreeRTOS_Plus_TCP/TCPIP_FAT_Examples_Xilinx_Zynq.html
Usually the performance should not differ a lot.
If you compile your Linux with a well-optimizing compiler, there is a good chance it will be faster than bare metal.
But if you need hard real time, Linux is not suitable for you.
There is a good whitepaper from Altera, but it should apply to Xilinx too:
whitepaper on real time jitter
I want to build a system on top of seL4 and I do not want to write the drivers from scratch. I know that L4Linux managed to run an entire Linux kernel, drivers included, on top of Fiasco.OC.
Ideally I want a driver wrapper that would allow me to run Linux drivers as standalone tasks on top of seL4.
I am willing to do a lot of coding, but I want to avoid reading hardware spec sheets and rewriting drivers.
I last looked at L4 in depth many years ago.
Based on my understanding, the answer to your question is, in general, no, for two main reasons: first, a full-blown Linux driver integrates with too many kernel subsystems to be lifted out cleanly; second, the two kernels are simply different.
If the specific driver you are looking at does not integrate heavily with the kernel subsystems, developing a wrapper may not be a huge task for you.
Is there another way of monitoring system threshold values (RAM, CPU) besides SNMP?
There should be a way as simple as a client-server interaction, since defining a TRAP in SNMP is not easy at the beginning.
Thanks in advance.
Well, if you're querying a Windows machine you can use WMI. It is really powerful, and there is also a Linux port if you are querying from a Linux machine. For example, if you want to monitor RAM usage you can execute the following query:
select FreePhysicalMemory from Win32_OperatingSystem
Now if you want more information, I need to know your platform and what language you will use.
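As a quick sketch, that query can be run straight from a Windows prompt without writing a client at all (Get-CimInstance is the current PowerShell CIM/WMI cmdlet; the older wmic tool still works on many builds but is deprecated):

```shell
# PowerShell: free physical memory in kilobytes, using the same
# Win32_OperatingSystem class as the WQL query above.
Get-CimInstance Win32_OperatingSystem | Select-Object FreePhysicalMemory

# Classic command-prompt equivalent (deprecated on recent Windows):
# wmic OS get FreePhysicalMemory
```

From code, the same class can be queried over WMI/CIM in most languages, which is why knowing your platform and language matters.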