NVIDIA’s GPU Technology Conference (GTC) is right around the corner, promising to be one of the premier conferences of 2021. Starting April 12th with the keynote from Jensen Huang, NVIDIA founder and CEO, this year’s digital GTC is pulling out all the stops to spotlight the latest and greatest innovations, provide technical deep dives, and deliver critical training across a wide range of topics from accelerated computing to deep learning.
Take a sneak peek at some of the technical talks, panel discussions, and training offered at GTC21. The best part? Registration is FREE!
Getting to the Heart of GPU Programming
Evolving from their beginning as fixed function processors for computer graphics, GPUs have become increasingly important for numerous applications across multiple disciplines due in large part to the parallelism and increased performance they provide.
GTC21 has hundreds of sessions for GPU programming, giving attendees the opportunity to gain insights from experts on the latest tools, libraries, and frameworks available to the developer and research community. From learning how GPU computing works to understanding the latest developments in specific languages and programming models, there is something for every skill level.
OpenACC at GTC
The OpenACC specification continues to provide researchers and computational scientists with an easy onramp for porting and accelerating their scientific applications on GPUs. At this year’s GTC, OpenACC is represented by a variety of talks, a panel discussion, a “Connect with Experts” Q&A session, and a tutorial. Learn how OpenACC provided the optimal combination of ease of porting and performance for the atomic-scale materials modeling application VASP, or how it contributed to aerodynamic flow control simulations running on many GPUs of the Summit supercomputer. Also, join the panel discussion led by Jeff Larkin, OpenACC Technical Committee Chair, as experts debate the ultimate accelerated computing programming approach for the future parallel programmer:
- Present and Future of Accelerated Computing Programming Approaches
With endless choices of programming environments in the parallel computing universe at a time of exascale computers, which programming model should you choose? Are base languages, like C++ and Fortran, a solid choice for today's codes? Will OpenACC and OpenMP directives stay around long enough to warrant investing time and effort in them now for application acceleration on GPUs? Or should a researcher go back to the low-level programming models, like CUDA, to extract maximum performance of the code?
Additionally, there are two sessions that feature work from researchers that participated in previous GPU Hackathons:
- Unraveling the Universe with Petascale Graph Networks
Presented by Christina Kreisch and Miles Cranmer from Princeton University, this session focuses on using graph networks to find interpretable representations of physical laws in the universe from petabytes of data. They will present an overview of how they constructed and trained their graph network in an interpretable way with symbolic regression, in addition to providing new constraints on cosmological parameters. By leveraging NVIDIA’s toolkits and improving GPU utilization, they achieved over an 8,000x speed-up in pre-processing.
- Scaling Graph Generative Models for Fast Detector Simulations in High-Energy Physics
Presented by Ali Hariri from the American University of Beirut, this session introduces a graph neural network-based autoencoder model that provides effective reconstruction of detector simulations for Large Hadron Collider (LHC) collisions. He will describe the graph neural network model they developed, discuss the pre-processing of raw data into graphs, and walk through the spatial convolution and graph pooling layers, elaborating on how node embedding and graph down-sampling are performed in the encoding and decoding stages. Finally, he will provide a detailed review of model training, porting to multi-GPU platforms, and multi-GPU scaling of the model.
See the full list of OpenACC and Hackathon sessions at GTC21.
Getting Your Arms Around Arm
From IoT sensors to supercomputers to autonomous vehicles, Arm processor IP offers a wide range of cores to address the performance, power, and cost requirements of a broad ecosystem of partners. To date, Arm partners have shipped more than 180 billion Arm-based chips. NVIDIA and Arm are working together to open new opportunities for partners, users, and developers: take a look at the top Arm sessions offered at GTC21.
Don’t miss your chance to learn from top researchers, programming experts and industry pundits. Register today for free and start building your GTC21 schedule.