12.4.6 : Simulation



  • Tuesday, (4:30 PM - 4:55 PM CET) A New Era of AI-Driven Electronic Design Automation on Accelerated Computing [S62846]
    • Paul Cunningham : Senior Vice President and General Manager, Cadence Design Systems, Inc.
    • Slides
    • Cadence : design and simulate electronic and physical systems
    • Computer Chips are powering everything
    • Generative AI based execution and optimization
    • Simulate chips before they are manufactured
    • Simulate fluid flow in data centers
    • 2.8x to 6.7x acceleration of matrix inversion on GPU (Grace Hopper)
    • Complexity is increasing but there are not enough engineers
    • From about a day to debug a Chip problem to 10 minutes with AI assistant
    • NeMo Retriever and Cadence Copilot
    • Without chips there is no software
    • 16 months to produce a chip, of which only 3 months to actually print and test it
  • Tuesday, (6:00 PM - 6:50 PM CET) Simulating Advanced Lunar Exploration Robotic Systems [S61635]
    • Lutz Richter : Space Projects Expert, SoftServe
    • Slides
    • the title says it all
  • Wednesday, (1:00 PM - 1:25 PM CET) Accelerating Simulations of Multiscale Chemical Reactors using NVIDIA Modulus [S62060]
    • Slides
    • PINNs (Physics-Informed Neural Networks) : predict physics
    • Gas-to-Liquid chain
    • Multiscale / multiphysics modelling of chemical reactors
    • Iterative simulation
    • SPINN : good speed-up with results close to the reference
    • Full scale model of the reactor
    • Less than 2% error on prediction
  • Wednesday, (4:00 PM - 4:25 PM CET) Simulating Solar Eruptions on GPUs using Fortran Standard Parallelism [S61193]
    • Ron Caplan : Computational Scientist, Predictive Science Inc.
    • Slides
    • Multiple levels of physical scales and physical processes
    • CORHEL CME : read the sun
    • Magnetohydrodynamic (MHD) simulation of the sun's atmosphere
    • 20 min on a 4-GPU system (iterate until the result matches the observed Sun)
    • Finite difference on a logically rectangular non-uniform spherical grid
    • Fortran parallelised with MPI
    • Highly memory Bound
    • do concurrent : tells the compiler the loop iterations may be parallelised
    • Much simpler than CUDA
    • Standard will stay
    • Also parallelism on CPU
    • No performance drop on GH200
    • No do concurrent reduce because gfortran did not support it
    • Less code with standard Fortran
    • Only a single multi GPU node for the simulation
    • Next total solar eclipse : April 8 2024
    • Jacobi and domain decomposition on GPU
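    The `do concurrent` bullet above is the heart of the stdpar approach: marking a loop's iterations as order-independent lets the compiler run them on the GPU (e.g. with `nvfortran -stdpar=gpu`) or across CPU cores. As a rough illustration in Python/NumPy rather than the talk's Fortran, here is the kind of Jacobi relaxation such a loop typically expresses, with every interior point updated independently of the others; grid size, boundary values and tolerance are illustrative, not taken from the talk:

    ```python
    import numpy as np

    def jacobi_step(u):
        """One Jacobi sweep for the 2-D Laplace equation.
        Each interior point is updated independently of the others,
        which is exactly the property `do concurrent` asserts in Fortran."""
        v = u.copy()
        v[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                + u[1:-1, :-2] + u[1:-1, 2:])
        return v

    def solve_laplace(u0, tol=1e-5, max_iter=10_000):
        """Iterate Jacobi sweeps until the largest update falls below tol."""
        u = u0.copy()
        for it in range(max_iter):
            v = jacobi_step(u)
            if np.max(np.abs(v - u)) < tol:
                return v, it
            u = v
        return u, max_iter

    # Toy problem: hot top edge, cold everywhere else.
    u0 = np.zeros((32, 32))
    u0[0, :] = 1.0
    u, iters = solve_laplace(u0)
    ```

    In Fortran, the stencil update would be a single `do concurrent (i = 2:n-1, j = 2:n-1)` loop over the interior points, which is also valid serial CPU code when no accelerator is available.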
  • Wednesday, (5:00 PM - 5:25 PM CET) Spatiotemporal-Graph Neural Networks for Gravitational Wave Discovery in the Big Data Era [S62178]
    • no record
  • Wednesday, (5:00 PM - 5:25 PM CET) A Fully-Differentiable Lattice-Boltzmann Solver for Integrated Machine-Learning Simulation Workflows [S62237]
    • Josef Winter : Researcher, Technical University of Munich
    • Thomas Indinger : TUM
    • Slides
    • Simulation of fluid flow => use classic Lattice-Boltzmann
    • But how about Hybrid approaches
    • ML can give a better first iteration for the classic method
    • Pytorch based
    • ML Advance module updates the incoming flow
    • Tested with NVIDIA Modulus (also based on PyTorch)
    • Slide 7 Backends for all aspects of the simulation
    • Fourier Neural Operator (FNO) : captures the dynamics of partial differential equations => train on low-resolution data and use at higher resolution
    • Simulate a von Kármán vortex street
    • Tested on an NVIDIA RTX A6000
    • Slide 14 iteration of Hybrid workflow faster to converge
    • Slide 15 from 407s to 204s with hybrid method
    • From FNO to FNO + LBM
    • Small memory overhead of the model size
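    The hybrid speed-up noted above (slides 14-15) can be sketched without the actual FNO or LBM code: any cheap approximate field used as the exact solver's initial condition cuts the iterations the solver needs. In this illustrative Python/NumPy sketch, a coarse-grid pre-solve stands in for the learned surrogate and plain Jacobi relaxation stands in for the Lattice-Boltzmann solver; grids, tolerances and iteration counts are assumptions, not figures from the talk:

    ```python
    import numpy as np

    def jacobi_solve(u, tol=1e-5, max_iter=50_000):
        """Plain Jacobi relaxation for a 2-D Laplace problem, standing in
        for the classic solver (the talk used a Lattice-Boltzmann code)."""
        u = u.copy()
        for it in range(max_iter):
            v = u.copy()
            v[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                    + u[1:-1, :-2] + u[1:-1, 2:])
            if np.max(np.abs(v - u)) < tol:
                return v, it
            u = v
        return u, max_iter

    def warm_start(n):
        """Stand-in for the learned surrogate (the talk's FNO): solve cheaply
        on a 4x coarser grid, then upsample to the fine grid."""
        c = np.zeros((n // 4, n // 4))
        c[0, :] = 1.0                       # hot top edge, as on the fine grid
        c, _ = jacobi_solve(c, tol=1e-4)    # cheap coarse solve
        g = np.kron(c, np.ones((4, 4)))     # nearest-neighbour upsampling
        g[:, 0] = 0.0; g[:, -1] = 0.0; g[-1, :] = 0.0
        g[0, :] = 1.0                       # restore fine boundary conditions
        return g

    n = 64
    cold = np.zeros((n, n))
    cold[0, :] = 1.0                            # hot top edge
    u_cold, it_cold = jacobi_solve(cold)        # solver alone, cold start
    u_warm, it_warm = jacobi_solve(warm_start(n))  # hybrid: surrogate guess + solver
    ```

    Printing `it_cold` and `it_warm` shows the warm-started solve converging in markedly fewer iterations, the same effect as the talk's 407 s to 204 s comparison for the hybrid workflow.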
  • Wednesday, (11:00 PM - 11:25 PM CET) Strategies for the GPU Implementation of the OVERFLOW CFD Code [S61501]
    • Chip Jackson : Research Scientist, NASA Langley Research Center
    • Slides
    • no record