Just bought a Razer Mamba 2012. However, the mouse is far too sensitive under Ubuntu. I tried many solutions online, but this is the only one I found that works.
As you know, there is no official driver for Razer mice, so we have to use a third-party driver.

1. Download the drivers (see https://terrycain.github.io/razer-drivers/):
sudo add-apt-repository ppa:terrz/razerutils
sudo apt update
sudo apt install python3-razer razer-kernel-modules-dkms razer-daemon razer-doc
2. Install RazerGenie: https://github.com/z3ntu/RazerGenie
3. Lower the DPI in RazerGenie to reduce the mouse speed.

Naturally, an HDF5 file is just a bunch of numbers in some order; it does not contain anything that gives them a geometric meaning. XDMF is a very useful tool that, combined with HDF5, provides powerful visualization results. Below is my XDMF script (an .xmf file) that describes an .h5 file so it can be shown in VisIt. The only tricky part of writing the .xmf is the grid.
For my 3D .h5 file, although the grid is actually a regular rectangular one, the script I use so far declares a curvilinear mesh, which is the most wasteful (yet the most general) choice: it needs a whole 3D scalar field for each coordinate direction. However, it is also the most powerful option for structured grids.

## The XMF file is shown below

<?xml version="1.0" ?>
<!DOCTYPE Xdmf SYSTEM "Xdmf.dtd" []>
<Xdmf Version="2.0">
  <Domain>
    <Grid Name="mesh1" GridType="Uniform">
      <Topology TopologyType="3DSMesh" NumberOfElements="512 512 512"/>
      <Geometry GeometryType="X_Y_Z">
        <DataItem Dimensions="512 512 512" NumberType="Float" Precision="4" Format="HDF">
          grid.h5:/X
        </DataItem>
        <DataItem Dimensions="512 512 512" NumberType="Float" Precision="4" Format="HDF">
          grid.h5:/Y
        </DataItem>
        <DataItem Dimensions="512 512 512" NumberType="Float" Precision="4" Format="HDF">
          grid.h5:/Z
        </DataItem>
      </Geometry>
      <Attribute Name="pre" AttributeType="Scalar" Center="Node">
        <DataItem Dimensions="512 512 512" NumberType="Float" Precision="4" Format="HDF">
          outputhdf000.h5:/pre
        </DataItem>
      </Attribute>
      <Attribute Name="xve" AttributeType="Scalar" Center="Node">
        <DataItem Dimensions="512 512 512" NumberType="Float" Precision="4" Format="HDF">
          outputhdf000.h5:/xve
        </DataItem>
      </Attribute>
    </Grid>
  </Domain>
</Xdmf>

The corresponding HDF5 files are my default turbulence output file and a grid file. Be advised that XDMF can read from multiple HDF5 files.

A chunk in HDF5 can be seen as a re-arrangement of the data in storage: there is still only one .h5 file, but the data is laid out on disk in chunks, in a sort of "hypercubic" style.
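A grid.h5 like the one referenced in the XMF can be generated with, e.g., h5py. This is a minimal sketch, not my production script: the dataset names /X, /Y, /Z and single-precision floats match the XMF above, but I use a small n here (the XMF expects 512) so it runs quickly, and the chunks keyword shows the chunked "hypercubic" layout discussed above.

```python
import numpy as np
import h5py

n = 16  # use 512 to match the XMF above; kept small so the sketch runs fast

# Coordinates of a regular rectangular grid, stored as full 3D fields,
# because the 3DSMesh (curvilinear) topology wants one scalar field
# per coordinate direction.
x = np.linspace(0.0, 1.0, n)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")

with h5py.File("grid.h5", "w") as f:
    # chunks=(8, 8, 8) stores the data on disk in small "hypercubic" blocks,
    # which speeds up partial reads (e.g. slicing one plane out of the volume).
    for name, arr in (("X", X), ("Y", Y), ("Z", Z)):
        f.create_dataset(name, data=arr.astype(np.float32), chunks=(8, 8, 8))

with h5py.File("grid.h5", "r") as f:
    print(f["X"].shape, f["X"].dtype, f["X"].chunks)
```

The chunk shape is a tuning knob: HDF5 reads whole chunks, so pick a chunk size that matches your typical access pattern (planes, pencils, or small cubes).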
Here is a great article about this: http://geology.beer/2015/02/10/hdf-for-large-arrays/

DEFINITION OF COMPUTING UNITS

CPU: The "brains" of the computer, responsible for all computation, loading of data, etc.

Core: An individual CPU unit. For instance, a dual-core CPU can be thought of as two separate CPUs in a single package, each with its own dedicated registers and cache. [This isn't entirely correct, but it's a simplified explanation.]

Process: A process can be thought of as a program. When you start a program, it kicks off a process, and all memory is allocated at the process level.

Thread: A thread is a unit of execution within a process. One process can have many threads. For instance, you could have a thread to handle user input, a thread for program control, a few threads for AI management, a thread for audio, etc. All these threads exist within a single process. In most modern OSes, the thread is the smallest unit of execution: the OS schedules threads [usually based on priority], and the CPU spends some time operating on one thread before swapping in a new one [giving the illusion that multiple things happen at the same time]. On a multi-CPU or multi-core system, multiple threads can truly run at the same time, which is why there is an increasing focus on software parallelization.

WHEN WILL THIS BECOME IMPORTANT?

In OpenMP, KMP_AFFINITY controls how the software threads are distributed across the physical hardware threads; with hyper-threading, you usually get twice as many hardware threads as cores. For example, running lscpu gives:
-------------------------------------------------------------------
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                8
On-line CPU(s) list:   0-7
Thread(s) per core:    2
Core(s) per socket:    4
Socket(s):             1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 60
Model name:            Intel(R) Core(TM) i7-4710HQ CPU @ 2.50GHz
Stepping:              3
CPU MHz:               2492.382
CPU max MHz:           3500.0000
CPU min MHz:           800.0000
BogoMIPS:              4988.57
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              6144K
NUMA node0 CPU(s):     0-7
-------------------------------------------------------------------

Here we can see there is 1 socket with 4 cores, 2 threads per core, and 8 logical CPUs in total. However, there are different ways to distribute your threads over these CPUs.

2.1. Compact Scheduling

Option #1 is often referred to as "compact" scheduling. It keeps all of your threads running on a single physical processor if possible, and this is what you would want if all of the threads in your application need to repeatedly access different parts of a large array. This is because all of the cores on the same physical processor can access the memory banks associated with (or "owned by") that processor at the same speed. However, cores cannot access memory stored on memory banks owned by a different processor as quickly; this phenomenon is called NUMA (non-uniform memory access). If your threads all need to access data stored in the memory owned by one processor, it is often best to put all of your threads on the processor that owns that memory.

2.2. Round-Robin Scheduling

Option #2 is called "scatter" or "round-robin" scheduling and is ideal if your threads are largely independent of each other and don't need to access a lot of memory that other threads need.
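For the topology above (1 socket, 4 cores, 2 hardware threads per core, logical CPUs 0-7), the two placements can be sketched as a small mapping exercise. This assumes the common Linux enumeration in which logical CPUs 0-3 are the first hardware thread of each core and 4-7 the second; the function names are mine, not part of any OpenMP API.

```python
# Sketch of how "compact" vs. "scatter" affinity would map OpenMP threads
# onto 4 cores x 2 hardware threads (logical CPUs 0..7).
# Assumed enumeration: core c owns logical CPUs c and c + N_CORES.

N_CORES = 4
HT_PER_CORE = 2

def compact(n_threads):
    """Fill both hardware threads of a core before moving to the next core."""
    cpus = []
    for core in range(N_CORES):
        for ht in range(HT_PER_CORE):
            cpus.append(core + ht * N_CORES)
    return cpus[:n_threads]

def scatter(n_threads):
    """Round-robin over cores; reuse a core only after every core has a thread."""
    cpus = []
    for ht in range(HT_PER_CORE):
        for core in range(N_CORES):
            cpus.append(core + ht * N_CORES)
    return cpus[:n_threads]

print("compact:", compact(4))  # 4 threads packed onto cores 0 and 1
print("scatter:", scatter(4))  # one thread per core
```

In practice you do not compute this by hand: with the Intel OpenMP runtime you set KMP_AFFINITY=compact or KMP_AFFINITY=scatter and let the runtime do the pinning.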
The benefit of round-robin thread scheduling is that not all threads have to share the same memory channel and cache, effectively doubling the memory bandwidth and cache available to your application. The tradeoff is that memory latency becomes higher, since threads may have to access memory owned by another processor.

Reference: http://www.glennklockwood.com/hpc-howtos/process-affinity.html

Two steps:
1. Download Intel Parallel Studio XE from the website.
2. The default installation folder is /opt/intel. Use the find command to search for 'compilervars.sh', then add the environment variables to your shell with:
source compilervars.sh intel64
Problem solved.

CMake is an open-source, cross-platform family of tools designed to build, test and package software. CMake controls the software compilation process using simple, platform- and compiler-independent configuration files, and generates native makefiles and workspaces that can be used in the compiler environment of your choice.
Note: when CMake is used to compile a library, the CMakeCache.txt in the build folder is your old friend "configure". You need to make sure CMakeCache.txt has the configuration you actually want: which compiler is used? Which optimization flags? Otherwise you might find that your library is not as efficient as it should be.

Q & A:
1) How do I check a variable in CMake?
grep Var CMakeCache.txt
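Entries in CMakeCache.txt have the form NAME:TYPE=VALUE, with // and # lines as comments, which is why a plain grep works. A small sketch of the same lookup in Python, using a made-up cache fragment (the variable values below are illustrative, not from a real build):

```python
# Parse CMakeCache.txt-style entries of the form NAME:TYPE=VALUE.
SAMPLE_CACHE = """\
// Comment lines start with // or #
CMAKE_BUILD_TYPE:STRING=Release
CMAKE_CXX_COMPILER:FILEPATH=/usr/bin/g++
CMAKE_CXX_FLAGS_RELEASE:STRING=-O3 -DNDEBUG
"""

def parse_cache(text):
    variables = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith(("//", "#")):
            continue  # skip comments and blank lines
        name_type, _, value = line.partition("=")
        name, _, _vartype = name_type.partition(":")
        variables[name] = value
    return variables

cache = parse_cache(SAMPLE_CACHE)
print(cache["CMAKE_CXX_FLAGS_RELEASE"])  # -O3 -DNDEBUG
```

Checking CMAKE_CXX_FLAGS_RELEASE (or CMAKE_BUILD_TYPE) this way is exactly how you catch the "library built without optimization" problem mentioned above.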
Author: Shaowu Pan
December 2017