I think the SAN we have reaches some 3,000+ MB/s. What kind of read/write speeds does a "fast SAN and DPX" setup actually achieve?

And yes, I know that DR has always been a GPU-centred system. If that is all the CPU/GPU utilization that occurs, is it safe to say there is room for further improvement in DR? Edius has always been fully scalable and, depending on which delivery format I choose, I can see CPU utilization of up to 80% on the desktop system with very fast render speeds.

The mobile GPU is the new Quadro M5000M with 1536 CUDA cores and 8GB of memory. My CPU utilization was around 15-20% and GPU around 20-30%. The mobile machine has an M.2 PCIe NVMe system drive with 1500/1000 MB/s read/write, writing to a video drive made of two M.2 PCIe NVMe drives in RAID 0 with 2500/1500 MB/s read/write. I just tried out my mobile station for rendering (I don't normally render to a delivery format on that machine) and I'm getting the same 3 fps as with the desktop. The timeline files are XAVC-I 4K 300 Mb/s from the Sony FS7 and the render files are Grass Valley HQX 4K for Edius 8. My desktop station has an SSD system drive with 500/500 MB/s read/write, and the video drive is a three-drive RAID 0 with the same read/write (some rough data-rate arithmetic is sketched further down).

It also depends on which file format you handle and which disk it is stored on: with a fast SAN and DPX files, you will see the two (or more) Titans boosting performance. Perhaps there is room for improvement in how DR scales across computer resources? It just seems like a software-based limitation. When I render on the Deliver page I utilize 7% of the CPU, only 20-30% of Titan X #1 and 0-10% of Titan X #2. The only people who would know this for sure (probably) are the R&D team (for DR) in Singapore.

For example, on my desktop video workstation with dual 10-core Xeons and two Titan X's, when I play back a 4K timeline with two nodes (one a LUT, the other minor colour plus low/mid/high tone corrections) I am utilizing 17% of the CPU, about 40% of Titan X #1 (which also drives the GUI) and about 10% of Titan X #2. It may be that the software is not capable of taking advantage of higher clock speeds beyond a certain point. Just like I thought that having two Titan X's would render (to the final delivery format) faster than one alone, and it didn't pan out (yes, both were set for compute, with one handling GUI plus compute and the other compute only).

I understand; I guess the only way to know for sure is to test (with DR) a Titan X against the 1080 and see how the software behaves in the real world.

The GTX 1080 has a base clock speed of 1607 MHz vs the Titan X's base clock of 1000 MHz, so although it has fewer cores, in theory those cores will be running a lot faster (a rough back-of-the-envelope comparison is sketched further down). It also uses traditional GDDR5X memory, whose bandwidth is lower than HBM's.

Adam Simmons wrote: "You also have to take into account the memory speed and the base clock speed of the GPU."

The GTX 1000 series, I believe, uses the GP104 chip, which doesn't have many FP64 cores (FP64 isn't used by graphics rendering, games, grading software etc., so performance is not affected). Remember that this is memory bandwidth between the HBM memory and the SM (streaming multiprocessor) units, all housed on the graphics card, not between the graphics card and the host motherboard. It also has FP64 (64-bit floating point) cores equal to half the number of FP32 CUDA cores. The P100 architecture uses HBM gen 2, which gives 1 TB/s of bandwidth.
The GTX 1000 series IS using the Pascal architecture, which features the new 16 nm process (down from 28 nm, I believe), giving a smaller die and more performance per watt. Do you know of any motherboards that have PCI Express ports with a bandwidth of 1 TB/s? I seriously think we must wait for "NVIDIA PASCAL": the announcement says "NVIDIA Unveils Pascal GPU, 16GB of memory, 1TB/s Bandwidth."
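To put some rough numbers on the core-count vs clock-speed debate above, here is a minimal back-of-the-envelope sketch (Python, for illustration only). The spec figures are approximate published numbers plus the clocks quoted in this thread, and the results are theoretical ceilings, not what DR will actually deliver:

```python
# Back-of-the-envelope peak numbers for the cards discussed above.
# These are theoretical ceilings; real grading/render throughput in DR
# depends on the software, drivers and the rest of the system.

def fp32_tflops(cuda_cores, clock_mhz):
    """Peak FP32 throughput: cores x 2 ops per clock (FMA) x clock."""
    return cuda_cores * 2 * clock_mhz * 1e6 / 1e12

def mem_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Peak memory bandwidth: bus width (bits -> bytes) x per-pin data rate."""
    return bus_width_bits / 8 * data_rate_gbps

# name: (CUDA cores, base clock MHz, memory bus width in bits, data rate Gb/s per pin)
gpus = {
    "Titan X (Maxwell)":  (3072, 1000, 384, 7.0),   # GDDR5
    "GTX 1080 (GP104)":   (2560, 1607, 256, 10.0),  # GDDR5X
    "Tesla P100 (GP100)": (3584, 1328, 4096, 1.4),  # HBM2
}

for name, (cores, clock, bus, rate) in gpus.items():
    print(f"{name:20} ~{fp32_tflops(cores, clock):.1f} TFLOPS FP32, "
          f"~{mem_bandwidth_gbs(bus, rate):.0f} GB/s memory bandwidth")
```

This works out to roughly 6 TFLOPS / 336 GB/s for the Titan X, 8 TFLOPS / 320 GB/s for the GTX 1080 and 9-10 TFLOPS / ~720 GB/s for the P100, so on paper the 1080's higher clock more than offsets its lower core count. Note too that the 1 TB/s figure from the Pascal announcement (and the ~720 GB/s of the shipping Tesla P100) is on-card HBM2 bandwidth; a PCIe 3.0 x16 slot tops out around 16 GB/s, which is why you won't find a 1 TB/s port on any motherboard.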
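And to circle back to the SAN/DPX question at the top of the thread, here is a similar minimal sketch comparing the codec data rates mentioned here against the drive speeds quoted. The DPX frame size is an approximate uncompressed 10-bit estimate, not a measured figure:

```python
# Compare the data rates quoted in this thread with the quoted drive speeds,
# to see where storage actually becomes the bottleneck.

XAVC_I_MBIT = 300                      # Sony FS7 XAVC-I 4K, Mb/s, as quoted above
xavc_mbs = XAVC_I_MBIT / 8             # megabits -> megabytes: ~37.5 MB/s
print(f"XAVC-I 4K stream: ~{xavc_mbs:.1f} MB/s")

# 10-bit RGB DPX packs 3 x 10-bit samples into 32 bits, i.e. ~4 bytes per pixel
dpx_frame_mb = 4096 * 2160 * 4 / 1e6   # ~35 MB per 4K frame
for fps in (24, 60):
    print(f"Uncompressed 4K DPX @ {fps} fps: ~{dpx_frame_mb * fps:.0f} MB/s per stream")

# Sequential read speeds quoted in the thread (MB/s)
drives = {"Desktop SSD RAID 0": 500, "Mobile NVMe RAID 0": 2500, "SAN": 3000}
for name, mbs in drives.items():
    print(f"{name}: ~{mbs / xavc_mbs:.0f}x headroom over one XAVC-I stream")
```

If that arithmetic is roughly right, a 300 Mb/s XAVC-I stream only needs ~37 MB/s, so even the 500 MB/s desktop SSD has plenty of headroom, which would explain why the NVMe RAID 0 on the mobile machine renders at the same 3 fps: the bottleneck is compute, not storage. Uncompressed 4K DPX, by contrast, needs many hundreds of MB/s per stream, which is where a multi-GB/s SAN actually gets exercised.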