GSWITCH: a pattern-based algorithmic autotuner for graph processing on GPUs. View the Project on GitHub: PAA-NCIC/GSWITCH

Why GSWITCH? Many recent works have explored the potential of using GPUs for data-intensive graph processing. Since GPUs provide higher parallelism and memory bandwidth than traditional CPUs, they have become promising hardware for accelerating graph algorithms. Although the primary optimizations of these works are diverse, we notice that most of them try to find a 'one size fits all' solution. This leads to mismatch and complication issues.

What is GSWITCH? GSWITCH is a pattern-based algorithmic autotuning system that dynamically switches to the suitable optimization variants with negligible overhead. Specifically, it is a CUDA library targeting GPU-based graph applications, and it supports both vertex-centric and edge-centric abstractions. So far, GSWITCH can automatically determine the suitable optimization variants in direction (push, pull), data structure (Bitmap, Sorted Queue, Unsorted Queue), load balance (TWC, WM, CM, STRICT, 2D-partition), stepping (Increase, Decrease, Remain), and kernel fusion (Standalone, Fused). The fast optimization transition of GSWITCH is based on a machine-learning model trained on 600+ real graphs from the Network Repository; the model can be reused by new applications, or retrained to adapt to new architectures. In addition, GSWITCH provides a succinct programming interface which hides all low-level tuning details, so developers can implement high-performance graph applications in just ~100 lines of code.