HPCwire Readers' Choice Awards 2024

Welcome! Below you will find the official 2024 Readers' Choice Award ballot. While there are several categories, the entire voting process should take only a few minutes of your time. All information is kept confidential and is not shared with any third parties. Polling for this ballot closes at 11:59 PM PDT on October 4, 2024, followed by winner notifications. Be sure to subscribe to our weekly newsletter and follow @HPCwire on X for information on the final presentation of these highly anticipated and prestigious awards. Thank you to all of our HPCwire readers for your continued support of our community! For any questions related to the Readers' Choice Awards, or to report any technical difficulties with voting, please contact danica@taborcommunications.com.

*Please use Chrome to access the survey. If you're experiencing technical issues, please clear your browsing history, cookies and cache.*

1. Best Use of HPC in Life Sciences

University College London researchers use HPE supercomputers with Nvidia GPUs to develop generative AI models of the human brain. This approach moves beyond simple statistical models to create comprehensive brain models, potentially enabling personalized neurological care and transforming brain diagnostics.

Researchers at NCSA applied geospatial mapping techniques to analyze cancer samples, overcoming the limitations of small sample sizes. Using HPC resources like Delta, they mapped tumors like cities, enabling reliable molecular analysis. This novel methodology was published, potentially benefiting other healthcare research with limited samples.

University of California researchers used Expanse at the San Diego Supercomputer Center, and Bridges and Bridges-2 at the Pittsburgh Supercomputing Center, to map the genome of "Gwen," a Hass avocado descendant. Their analysis of 34 avocado accessions revealed three independent domestication events for the Lowland, Mexican, and Guatemalan groups. They also found Hass to be of Guatemalan origin, contrary to previous beliefs, and identified genomic regions potentially contributing to hetero-dichogamy.

Researchers at KAUST developed HPC-GVCW, an open-source parallel implementation for processing 20,000 rice genomes on the Shaheen-III HPE Cray EX supercomputer. This breakthrough accelerates the discovery of genetic diversity in Asian rice, supporting global food sustainability efforts and paving the way for creating the world's first "digital gene bank" for a major food crop.

Researchers at UC Riverside used the Anton 2 supercomputer at PSC to reveal the mechanism behind CRISPR-Cas12a's DNA-cutting pause. This insight into the protein's unexpected movement could enable fine-tuning of Cas12a for improved genetic therapies and diagnostics. The study leveraged Anton's specialized molecular dynamics capabilities to advance CRISPR applications in medicine and biotechnology.

Nvidia's BioNeMo platform and HPC systems accelerate drug discovery by enabling AI-powered analysis of massive biological datasets. These tools allow pharmaceutical companies to rapidly generate and optimize drug candidates, predict protein structures, and deploy large biomolecular models, potentially reducing development timelines and costs for new medicines.

Researchers at UC San Diego used SDSC's Expanse supercomputer to analyze placental RNA data from preeclampsia patients. They identified genes like IL3RA associated with cardiovascular complications in preeclampsia, which is more common in women of African ancestry. The team continues using ACCESS resources to study immune-related genes linked to severe preeclampsia.

Researchers used the University of Birmingham's BlueBEAR cluster (Lenovo, with Intel CPUs and IBM Spectrum Scale storage) and Nanopore sequencing of plasma samples from pediatric cancer patients to accurately determine cancer type and recurrence without invasive biopsies. High-performance computing resources enabled efficient processing of multi-petabyte datasets, potentially revolutionizing cancer detection and monitoring in both children and adults.

The Vdura Data Platform enhances life sciences research by providing scalable, secure data management, enabling faster analysis of large datasets from techniques like NGS and cryo-EM. It offers seamless integration, advanced data protection, and intelligent discovery, significantly improving bandwidth and data transfer rates for institutions like the University of Wollongong.

Scientists created the world's first nanoscale electromotor using DNA origami, simulated on supercomputers at TACC, SDSC, and Purdue. The DNA turbine, powered by hydrodynamic flow in a nanopore, could lead to molecular factories or medical probes for bloodstream molecules.

Dr. Patrick Boyle's University of Washington Cardiac Systems Simulation lab uses the HYAK supercomputer to run computational models studying heart disease, particularly atrial fibrillation. The lab's research employs physiological simulations, machine learning, and AI to investigate causes and treatments, with studies consuming up to 4 million CPU hours.

2. Best Use of HPC in Physical Sciences

Nvidia's Holoscan platform enables real-time AI processing of massive radio telescope data streams, revolutionizing the search for fast radio bursts and other cosmic phenomena. Deployed at the Allen Telescope Array, it achieves significantly faster and more accurate detection than previous methods, potentially advancing our understanding of the universe through enhanced data processing capabilities.

A collaboration between Argonne National Laboratory, UIC, UIUC, and the University of Chicago created NanoCarbon. NanoCarbon combines LAMMPS with ReaxFF force fields to simulate complex atomic systems, providing insights into detonation nanodiamonds. Using HPC and GPU acceleration, it captured multi-layer graphitization growth on nanodiamond surfaces, validating experimental findings and opening new avenues for exploring applications in energy storage and nanotechnology.

Researchers at ORNL used the HPE Frontier supercomputer to simulate a material harder and tougher than diamond. This exascale system enabled quantum-accurate simulations of a billion atoms using machine learning, allowing the synthesis of the world's hardest known substance.

The Destination Earth program uses HPE supercomputing to create high-resolution climate simulations. This digital twin project models Earth systems to anticipate disasters and provide insights for the energy, food, and health sectors, enabling informed decision-making and proactive climate change mitigation across governments, businesses, and communities.

TACC's Frontera supercomputer enabled the development of PRIYA, the largest suite of hydrodynamic simulations of large-scale cosmic structure. PRIYA extends the ASTRID simulations by incorporating galaxy and black hole formation rules while altering initial conditions, consuming over 600,000 node hours to simulate 29 billion particles in a 3.91 million light-year box.

Predictive Science Inc. used a continuously running MHD model to predict the 2024 solar eclipse corona in near real-time. This data-assimilative approach incorporated the latest solar surface magnetic field measurements, enabling more accurate forecasts of the dynamic solar atmosphere compared to static models.

Penn State researchers used PSC's Bridges-2 and machines at the U.S. Department of Energy to simulate neutron star mergers, finding that higher specific heat leads to colder, denser mergers. These effects should be detectable in gravitational waves by next-generation observatories, potentially providing insights into nuclear physics and condensed matter behavior.

Researchers used the San Diego Supercomputer Center's Expanse supercomputer to study carbon dioxide electrochemical reduction (CO2RR), which converts CO2 into fuel using electricity. Their calculations provided insights into mechanisms and catalyst design for CO2RR, advancing efforts to create clean energy with net-zero carbon impact.

Scientists at ANL, UIUC, NCSA, and UChicago developed a physics-informed transformer model to predict gravitational wave evolution for spinning binary black hole mergers, including higher-order modes. This AI approach dramatically reduces simulation time from days to seconds, handling terabyte-scale datasets with high accuracy. The model could enable real-time, high-precision simulations and advance understanding of complex astrophysical systems.

The Astaroth team developed a high-performance GPU solver for turbulent astrophysical simulations on LUMI's HPE Cray/AMD hardware. Using C++, CUDA/HIP, and MPI, they achieved significant speedups, reducing simulation times from months to days and enabling more realistic modeling of solar magnetic activity.

Cerebras and researchers at Sandia, Lawrence Livermore, and Los Alamos national laboratories and the National Nuclear Security Administration used the WSE-2 chip to perform molecular dynamics simulations 179x faster than the Frontier supercomputer. By mapping individual atoms to WSE-2 cores in a 2D grid, they simulated 800,000 atoms for milliseconds, a major advance over previous microsecond-scale simulations. This breakthrough enables new materials science research.

3. Best HPC Response to Societal Plight

Institut Pasteur is using AWS to index the DNA of all living organisms and identify over 1 million viruses. This project, utilizing AWS Graviton, Batch, and S3, aims to improve pandemic preparedness, discover new therapeutics, and create a DNA search engine.

Nvidia Earth-2 is an open, full-stack platform that accelerates climate and weather predictions using AI-augmented, high-resolution simulations. It combines physical simulations, machine learning models, and visualization tools to provide unprecedented speed and scale in global atmospheric modeling. The platform is accessible to users, enabling faster and more detailed climate and weather forecasting.

The West Midlands Combined Authority developed an Air Quality Framework using advanced street-scale air quality modeling on the University of Birmingham's BlueBEAR cluster. This innovative approach, utilizing task farming to process sub-regions concurrently, informed strategies to improve air quality for 2.9 million residents and reduce an estimated 2,300 annual premature deaths attributed to air pollution in the region.

The Open Science Platform OSPREY aims to enhance pandemic response by enabling health officials to utilize HPC resources and data-driven decision-making. Using Globus, Parsl, and EMEWS (Extreme-scale Model Exploration with Swift), it integrates automated workflows, data curation, and model management to facilitate rapid collaboration and development during health crises.

Italy is implementing a new weather prediction system to address recent devastating floods. CINECA and the Italian Met Office used Lenovo ThinkSystem servers with liquid cooling and Nvidia InfiniBand. The system will provide nowcasting alerts for extreme weather and flooding, and it will also contribute to the SKA deep space observatory project, potentially improving space exploration capabilities.

An international research team is using AI/ML to accelerate analysis of permafrost thaw data, crucial for climate strategies. NCSA's Delta and DeltaAI GPUs will support automated workflows and AI models processing massive datasets. This research will help policymakers better understand and mitigate the global impacts of permafrost melt, which affects a quarter of the northern hemisphere's landmass.

Carnegie Mellon researchers used PSC's Bridges-2 system to develop SCoFT, a method for retraining AI image generators to produce more culturally sensitive and accurate images. Using a small but diverse dataset, they improved the portrayal of underrepresented cultures, reducing stereotypes and potentially mitigating negative psychological impacts on viewers from those cultures.

4. Best Use of HPC in Energy

Atomic Canyon employs generative AI and HPE Cray EX supercomputers to modernize the nuclear energy sector, simplifying regulatory processes and enhancing document analysis. This approach reduces operational costs, improves workflow efficiency and compliance for existing plants, and accelerates new plant development, supporting a cleaner energy future.

Simulations on TACC's Frontera and Lonestar6 supercomputers revealed unexpected vortex structures in polarons, which enhance electricity generation from sunlight. This discovery could lead to advancements in solar cells and eco-friendly LED lighting, with TACC experts playing a key role in optimizing code performance for the research.

Penguin Solutions, AMD, and Shell have upgraded Shell's Houston data center with 864 dual-socket systems using 4th Gen AMD EPYC processors, achieving 165,888 cores while implementing immersion cooling technology. This innovative approach enhances performance and energy efficiency, resulting in up to 50% energy savings and reduced emissions, promoting sustainable data center operations.

Realta Fusion is developing compact fusion systems to decarbonize industrial heat and power. Using AWS cloud resources, they simulated a novel plasma instability, advancing fusion science and potentially accelerating innovation in micro-reactors for fusion energy.

Encored Technologies leverages AWS HPC services like EC2 Hpc6a/Hpc7g and ParallelCluster to run its weather forecasting system, processing 100TB of data for renewable energy prediction. Using AWS reduced development time from six months to three days, compared to on-premises infrastructure.

A team of Florida Agricultural and Mechanical University (FAMU) researchers used ACCESS allocations on Stampede2 at the Texas Advanced Computing Center (TACC) and Expanse at the San Diego Supercomputer Center (SDSC) to model lithium-ion battery vent gas ignition and flame behavior. This validated computational approach can help battery manufacturers design safer systems and develop fire mitigation strategies more efficiently than physical prototyping alone.

University of Illinois researchers used HPC resources to study the financial and environmental benefits of different biofuel crops for sustainable aviation fuel production. They aimed to determine optimal crops for each region and identify effective policies, potentially reducing petroleum dependence and mitigating climate change impacts from aviation.

TotalCAE's HPC cluster, featuring 5th Gen Intel Xeon processors and Mellanox InfiniBand, reduced CAE simulation times by up to 80%, enabling Hyliion to develop KARNO. This additive-manufacturing-enabled generator technology uses heat to produce electricity, supporting electric truck infrastructure and converting flare gases from oil fields and landfills for a sustainable future.

Nvidia collaborated with Shell Oil to develop Fourier Neural Operator-based surrogate models using Nvidia Modulus for real-time simulation of CO2 plume migration in carbon capture and storage. This work scales scientific machine learning to realistic 3D subsurface systems, advancing digital twin technology for CCS applications.

Carnegie Clean Energy uses supercomputing and AI to optimize ocean wave energy capture and conversion efficiency. With successful deployments in Europe and Australia, their technology offers a viable clean energy source, contributing to the diversification of renewable energy and combating climate change.

GENCI researchers used large eddy simulation on a supercomputer to model lithium-ion battery vent gas ignition and flame behavior. This validated computational approach can help battery manufacturers design safer systems and develop fire mitigation strategies more efficiently than physical prototyping alone.

ExxonMobil uses an HPE Cray EX supercomputer with direct liquid cooling to enhance subsurface imaging for oil and gas exploration. This advanced technology speeds up data processing, reduces risk, and doubles the company's discovery success rate, contributing to more efficient energy resource development.

BP, a multinational energy company, operates across the oil and gas industry. They've pioneered liquid cooling in HPC, using Lenovo ThinkSystem SD650 V3 servers with Neptune technology and 4th Gen Intel Xeon processors. This approach has enabled BP to achieve an 8x improvement in application performance.

5. Best Use of AI Methods for Augmenting HPC Applications

MetadataHub uses AI to analyze and categorize scientific datasets, accelerating data discovery and preprocessing for HPC workflows. It transforms unstructured data into AI-ready assets, enhancing efficiency in fields like genomics and climate modeling. By providing higher-quality data to AI tools, MetadataHub improves accuracy and efficiency in large-scale scientific computing.

Researchers from the startup Qubit Pharma developed GPU-accelerated machine learning strategies for high-resolution molecular dynamics simulations. These include enhanced sampling techniques and neural network potentials, implemented in software like Tinker-HP and FeNNol. The approach enabled large-scale simulations of complex biological systems on GENCI and AWS facilities, including a detailed study of the SARS-CoV-2 Spike protein, advancing potential therapeutic strategies against COVID-19.

University of Washington researchers developed QLoRA, a memory-efficient finetuning approach that enables training a 65B-parameter model on a single 48GB GPU. Their Guanaco model family achieves 99.3% of ChatGPT's performance on the Vicuna benchmark after just 24 hours of finetuning, potentially leveling the playing field between large corporations and smaller teams in AI development.

Google DeepMind's GNoME, a deep learning model using graph neural networks, has accelerated materials discovery by predicting the stability of new compounds. It identified 380,000 stable crystals with potential applications in green technologies, from improved batteries to efficient superconductors.

Researchers from NCSA, UIUC, and Ansys developed novel Deep Operator Network architectures (DeepONets) to predict multiphysics solutions in manufacturing processes. Trained on NCSA's Delta supercomputer using Nvidia GPUs, these models can generate accurate thermal and mechanical field predictions up to 100,000 times faster than traditional methods, potentially revolutionizing design optimization and digital twin applications in engineering.

DigiFarm uses deep neural networks and supercomputing to automate field boundary detection from high-resolution satellite data. This AI-powered approach enhances farming efficiency, optimizes resource allocation, and can reduce costs by up to 10% while boosting crop yields for farmers and agricultural enterprises globally.

ByteLake's CFD Suite uses AI and GPU acceleration to dramatically reduce CFD simulation times, cutting chemical mixing simulations from 4-8 hours to 10-20 minutes while maintaining over 93% accuracy. The system utilizes Intel and Nvidia GPUs on Lenovo servers for training and deployment of AI solvers.

University College London researchers are developing generative AI models to predict personalized neurological treatments. Using advanced HPE and Nvidia hardware, they aim to create comprehensive brain models that go beyond traditional statistical approaches, potentially enabling truly individualized patient care.

Researchers from Carnegie Mellon and the University of British Columbia used AI and HPC to identify potential inhibitors for LRRK2, the primary genetic cause of familial Parkinson's disease. Their method narrowed 4.5 billion molecules to 14 promising candidates, potentially accelerating drug discovery for a disease affecting over 500,000 Americans.

A multi-institutional team from UC Berkeley, PSC, and ACCESS used machine learning on PSC's Bridges-2 system to identify 50 gene regulatory elements in the brains of humans, bats, whales, and seals associated with vocal learning across mammals. This research revealed parallel genetic evolutionary paths for vocalization and a potential link to autism spectrum disorder in humans.

6. Best Use of HPC in Financial Services

Citadel Securities partnered with Google Cloud to enhance its quantitative research platform, leveraging high-performance computing for deeper data insights. This collaboration enabled massive scalability, tailored hardware solutions, and significantly reduced cost per research hour through improved resource utilization and flexible consumption models.

DBS Bank transformed its quant pricing and risk engines using in-house capabilities and AWS/Intel technologies. The cloud-native solution efficiently handles Monte Carlo simulations for derivative trading, integrating advanced models and APIs. Using AWS services and Intel technologies, it processes millions of risk pricings daily, improving customer response, risk management, and product delivery.

Balyasny Asset Management (BAM) built a cloud-native Kubernetes platform on AWS for securities trading analysis. Using AWS Batch, EKS, and EC2, it incorporates logging, metrics, alerting, and diverse storage options. This platform enables BAM to run tens of thousands of concurrent jobs, greatly improving efficiency and decision-making.

7. Best Use of HPC in Industry (Automotive, Aerospace, Manufacturing, Chemical, etc.)

Aramco Americas, ANL, and Convergent Science worked on HPC-enabled simulations to co-optimize aftertreatment systems and engines for high efficiency and low emissions. This approach integrates detailed chemical kinetics and modeling techniques, revealing opportunities for improved catalyst design to meet emissions regulations and enhance human health.

Allegro Microsystems partnered with Google Cloud, utilizing AMD EPYC CPUs to accelerate EDA processes and product development. This collaboration leverages high-performance computing infrastructure optimized for EDA workloads, enabling faster time to market and cost savings as Allegro modernizes its operations.

Vitesco leveraged AWS HPC to accelerate automotive parts design for clean mobility. They reduced simulation time from one week to one day, scaled to 5x more cores, and cut costs by 55%. This cloud-based approach improved global collaboration and product development speed.

GM Motorsports uses Rescale AI Physics to reduce engineering simulation time from days to seconds with over 99% accuracy, revolutionizing vehicle design. This collaboration accelerates R&D cycles, optimizing vehicle aerodynamics and performance while democratizing access to advanced AI technologies.

Carnegie Mellon researchers used PSC's Bridges-2 system to create a virtual water treatment plant, enabling AI to learn from veteran engineers' responses to simulated breakdowns. This human-AI knowledge sharing aims to train new engineers and address potential staff shortages in water treatment facilities.

University of Birmingham researchers developed a machine learning tool using evolutionary algorithms and simulations to optimize industrial equipment design. Using the BlueBEAR cluster, they achieved a 40% energy efficiency improvement in a food sector mill and a 90% reduction in an FMCG mixer's energy use, potentially transforming the path of manufacturing to Net Zero.

Argonne National Lab and RTX Technology Research Center used HPC and CFD modeling to simulate gas turbine film cooling with surface roughness defects. They employed Argonne's GPU-accelerated NekRS solver on DOE supercomputers, providing high-fidelity data for developing surrogate models to optimize thermal management in next-gen aircraft engines.

Bombardier is leveraging Google Cloud's HPC capabilities to design its next-generation EcoJet aircraft, using a hybrid approach that combines on-premises resources with cloud scalability. This innovative solution, utilizing Google Cloud's compute-optimized VMs, enables Bombardier to optimize resources, reduce costs, and accelerate time-to-market for its advanced aerodynamic designs.

8. Best Use of High Performance Data Analytics & Artificial Intelligence

Researchers at the University of Birmingham developed a high-resolution, GPU-accelerated flood forecasting model using SYCL and Intel oneAPI, enabling simulations on multi-vendor GPUs and exascale supercomputers. This model is being used to create a new probabilistic flood forecasting system for the UK, providing meter-scale resolution predictions to improve localized flood warnings and emergency responses.

Saul is the first open-source family of AI models for law, developed using GENCI's Adastra supercomputer. This collaboration between Equall and academic labs resulted in models that match GPT-4's performance, showcasing the potential of specialized AI in the legal domain.

NCSA's Center for Artificial Intelligence Innovation created UIUC.Chat, an AI chatbot that integrates with academic content to provide tailored assistance. Utilizing high-performance computing and cloud resources, it enhances student engagement by offering 24/7 support and flexible interactions in the learning environment.

Researchers developed OpenFold, an open-source AI tool for protein structure prediction, using GPUs on TACC's Frontera and Lonestar6 supercomputers. This advancement could aid in developing new medicines and understanding protein-related neurodegenerative diseases like Parkinson's and Alzheimer's.

Argonne National Laboratory and Dow Inc. developed a framework combining computational fluid dynamics with an active machine learning optimizer (ActivO) for efficient turbulent jet mixer design. This novel approach optimized jet-mixing technology, potentially reducing reliability issues and costs associated with traditional agitators, with estimated savings of up to $6.1 million per year per plant.

Researchers developed an AI-driven workflow that integrates the Frontier supercomputer with the Oak Ridge National Laboratory Spallation Neutron Source for real-time decision-making in neutron scattering experiments. Using Redis, Kubernetes, and a temporal fusion transformer, the system demonstrated a potential 30% reduction in neutron beamline time, accelerating scientific discovery and setting a new standard for future HPC ecosystems.

SciCode, a collaborative effort by 11 institutions, is a benchmark of 338 subproblems across 80 main scientific challenges, designed to assess AI's ability to solve real-world scientific problems through code generation. Despite testing with top AI models, the best performance achieved only 4.6% success in the most realistic setting, highlighting the potential for SciCode to drive innovation in scientific AI and accelerate breakthroughs across diverse fields.

NASA's Center for Climate Simulation implemented Vdura storage, dramatically improving system reliability and performance. This upgrade reduced storage administration time from 80% to 5% and increased Windows write speeds up to 78 times compared to the previous filesystem. Vdura's solution has enhanced data management efficiency, enabling faster insights and setting a new standard in high-performance computing for climate research.

Carnegie Mellon researchers used AI on PSC's Bridges-2 supercomputer to analyze damaged typeface in Shakespeare's Fourth Folio, identifying three 17th-century English printers. This GPU-accelerated method, accessible to non-technical scholars, reveals shifts in the status of Shakespeare, publishers, and printers during a period of expanding individual expression in print.

The San Diego Supercomputer Center and the University of Utah are launching a $6 million NSF-funded National Data Platform to create an equitable data ecosystem. This initiative aims to enhance access to scientific data, fostering innovation and collaboration while addressing global challenges like climate change through AI-integrated solutions.

9. Best HPC Storage Product or Technology

MetadataHub
Amazon FSx for NetApp ONTAP
HPE Cray Storage Systems C500
Globus Connect Server
Vdura Data Platform
Arcitecta Mediaflux® Universal Data System
The Vast Data Platform
WEKApod
Google Cloud Hyperdisk ML
BeeGFS
Starfish Storage: Holistic Unstructured Data Management
A3I Storage Platform
DDN Exascaler
Parallelstore, Google's new first-party parallel filesystem (PFS)

10. Best HPC Cloud Platform

HPC Cloud On-Demand Data Center by Adaptive Computing
Massachusetts Open Cloud (MOC)
Google Cloud
The Rescale Platform
Parallel Works ACTIVATE
Shakti Cloud
Amazon Web Services

11. Best AI Product or Technology

Adaptive AI as a Service
Amazon Bedrock
Ansys TwinAI
Cerebras Systems CS-3 AI System
G-Assist
Hammerspace Global Data Platform
HPE Cray XD670 with NVIDIA H200 or NVIDIA H100 Tensor Core GPUs
Intel Gaudi AI Accelerators
MetadataHub
National Science Foundation-funded ACES (Accelerating Computing for Emerging Sciences) high-performance testbed at Texas A&M University
NVIDIA ACE
NVIDIA GB200 NVL72
NVIDIA GH200
NVIDIA NeMo
NVIDIA NIM
NVIDIA Project GR00T
Penguin Solutions' OriginAI®

12. Best Use of HPC in the Cloud (Use Case)

Adaptive Computing's HPC Cloud On-Demand Data Center (ODDC) solution enables rapid deployment of temporary or persistent HPC resources. It includes Moab HPC Suite, Adaptive Cluster Manager, Moab Accounting Manager, and the Viewpoint job portal. ODDC allows workload bursting to the cloud during peak demand, provides disaster recovery, and offers a reliable backup, mirroring on-premises environments for seamless operations.

Exscientia leverages AWS to create an AI-driven drug discovery platform, integrating generative AI design with robotic automation. This approach accelerates drug design by 70% and reduces costs by 80%, with six AI-designed molecules already in clinical trials.

Globus enabled near real-time data analysis at Argonne National Laboratory by connecting APS instruments with ALCF supercomputers. This automated pipeline allows scientists to adjust experiments on the fly, potentially accelerating scientific breakthroughs by delivering rapid results while researchers still have facility access.

CoreWeave partnered with Vast Data to build an Nvidia-powered cloud for HPC and AI workloads. Using Nvidia technology, they're developing a new data platform for large-scale AI pipelines. This collaboration provides CoreWeave with a fast, cost-effective, multi-tenant environment for accelerated computing, outperforming legacy cloud providers by up to 35 times in speed and 80% in cost.

Vast Data and Lambda partnered to create a hybrid cloud platform for AI workloads. Lambda chose Vast's Data Platform for its GPU Cloud, offering optimized LLM training and inference across clouds. Customers gain access to Vast DataSpace for high-performance data management, aiming to accelerate AI development and global collaboration.

G42 Cloud and Vast Data partnered to create a scalable AI computing platform. Vast's solution powers G42's multi-architecture HPC system, including Condor Galaxy, supporting exaFLOP-level AI capability. The platform offers scalability, multi-tenancy, and security for various industries across the Middle East.

The Institut Pasteur's "IndexThePlanet" project aims to analyze and map the DNA of all living organisms to identify potential pandemic threats. Partnering with AWS, they processed 20 petabytes of data using a cluster of 2.18 million vCPUs, expanding on their previous Serratus project to catalog viruses and prepare for future pandemics.

13. Best HPC Programming Tool or Technology

Nvidia RAPIDS cuDF pandas
CUDA-Q
European Environment for Scientific Software Installations (EESSI)
Unified Acceleration Foundation (UXL), an open industry evolution of the oneAPI initiative
Google Cloud's Cluster Toolkit
Lenovo Intelligent Computing Orchestration (LiCO) 7.2.1
IRIS-SDK: Intelligent Runtime System for Extremely Heterogeneous Computer Architectures
Globus Compute
Spack package management tool

14. Best HPC Server Product or Technology

Lenovo ThinkSystem SR780a with Neptune Liquid Cooling Technology
Gridware Cluster Scheduler
Intel Xeon 6 processors
Project Ceiba (GB200)
Micron MRDIMM memory
HPE Cray EX supercomputing solution
NVIDIA DGX SuperPOD
NVIDIA Grace CPU Superchip

15. Best HPC Interconnect Product or Technology

Ayar Labs In-Package Optical I/O
Aries 6 Smart DSP Retimers for PCIe 6.x/CXL 3.x
Photonic Fabric™
Elastic Fabric Adapter (EFA)
NVIDIA Quantum-X800
Aries Smart Cable Modules (SCMs) for PCIe 5.0/CXL 2.0
Leo CXL Smart Memory Controllers
HPE Slingshot

16. Best HPC Collaboration (Academia/Government/Industry)

Georgia Tech's College of Engineering, in partnership with Nvidia and Penguin Solutions, has launched the AI Makerspace, a supercomputer hub providing students with access to advanced AI resources. This initiative democratizes access to high-performance computing, preparing students for AI careers through hands-on experience with cutting-edge technology typically reserved for researchers and tech companies.

The ARF supercomputer, named after mathematician Cahit Arf, ranks 313th on the TOP500 list. Located at METU in Ankara, Turkey, it is a Lenovo system with Intel Xeon processors and Nvidia InfiniBand. This collaborative project involving TUBITAK, METU, the Turkish government, GEANT, and Lenovo will support multidisciplinary research in fields like manufacturing, pharmaceuticals, bioinformatics, and earth sciences.

The Delta HPC system enabled a Carnegie Mellon-Honda collaboration to develop competitive AI models rivaling those of big tech companies, while maintaining transparency. Using technologies like deep learning and PyTorch, the project aims to enhance understanding of Conversational AI's learning processes and expand its language capabilities. Contributors from academia and industry collaborate on the open-source espnet platform.

HLRS and Seedbox are collaborating to develop secure and cost-efficient AI solutions using HLRS's supercomputing systems, including HPE Apollo with 192 Nvidia A100 GPUs. Their partnership aims to support German companies, particularly SMEs, in utilizing advanced AI models and supercomputing infrastructures, contributing to a competitive European AI ecosystem.

Spack-stack is a collaborative software infrastructure project among major weather research organizations like NOAA EMC, JCSDA, EPIC, and NRL. Using the Spack package manager, GitHub, and Jenkins CI, it provides consistent software deployments for large-scale weather forecasting workflows across HPC and cloud platforms. The project supports key applications like JEDI and UFS, enabling robust deployments for major forecasting models on diverse platforms including AWS, GCP, and Azure.

USGS collaborated with Globus to modernize ScienceBase, a data platform, collaboration workspace, data store, and machine-accessible location for data and information resources used by researchers.

UC Riverside's collaboration with Google Cloud has revolutionized research capabilities, utilizing tools like the HPC Toolkit and workload-optimized VMs. According to Director Chuck Forsyth, this partnership has enabled researchers to achieve previously impossible goals in record time, transforming UCR's research landscape.

The Stanford Doerr School of Sustainability leverages Google Cloud's HPC Toolkit to rapidly deploy scalable, customizable computing environments for researchers. The school utilizes technologies like Chrome Remote Desktop and Vertex AI, enabling seamless integration of on-premises and cloud resources to accelerate scientific discovery while optimizing performance and cost-efficiency.

Cineca is expanding its Galileo100 system with a 50PB Vast Data Platform to support diverse computing needs and explore data-centric analytics and AI. This multi-protocol data lake will connect to the Leonardo supercomputer and other infrastructures, addressing new data challenges in HPC, AI, and big data analytics. The upgrade aims to consolidate infrastructure, improve automation, and enhance data isolation for users in a zero-trust environment.

The NSF has opened access to SDSC AI research resources at multiple universities and supercomputing centers, including contributions from major tech companies. These resources encompass advanced computing systems, cloud platforms, foundation models, software tools, and educational platforms. This initiative aims to expand partnerships and investments to realize the full potential of the National AI Research Resource (NAIRR) for societal benefit.

DigiFarm uses AI and supercomputing to revolutionize agriculture globally. Utilizing the LUMI supercomputer with HPE Cray EX 4000 and ClusterStor E1000 systems, they process high-resolution satellite data to accurately detect field boundaries and seeded areas. This technology helps farmers and agribusinesses optimize resource allocation, potentially reducing costs by 10% while increasing crop yields and efficiency.

Carnegie Clean Energy uses AI and supercomputing to efficiently extract ocean wave energy, advancing renewable energy adoption. Their CETO technology, optimized with HPE Cray EX supercomputers, has been successfully deployed in Europe and Australia. This approach accelerates the diversification of clean energy sources and improves conversion efficiency to combat climate change.

17. Top HPC-Enabled Scientific Achievement

The Destination Earth Climate Digital Twin is the first global climate simulation performed at this fine spatial resolution (5 km) over multiple decades. The simulations enable evaluating impacts of climate change on local and regional scales and can support decision-making on climate change adaptation and mitigation. The simulations were completed in 2024.

A multi-institutional team used machine learning on PSC's Bridges-2 system to identify 50 gene regulatory elements in the brains of humans, bats, whales, and seals associated with vocal learning across mammals. This research revealed parallel genetic evolutionary paths for vocalization and a potential link to autism spectrum disorder in humans.

Floating-point non-associativity (FPNA) in parallel programs causes run-by-run variability, affecting reproducibility in iterative algorithms and deep learning pipelines. This ORNL study investigated FPNA's impact on GPU reductions and DL processes, analyzed alternatives, and evaluated deterministic hardware solutions to improve reproducibility and correctness in scientific computing and DL applications.

Microsoft and PNNL used AI and cloud computing to rapidly identify a promising new battery material. Their Azure Quantum Elements platform screened 32 million candidates in just 80 hours, leading to a solid-state electrolyte that could use 70% less lithium. While still in early stages, this AI-driven approach demonstrates potential to dramatically accelerate materials discovery for more sustainable and safer batteries. Researchers believe this method could compress centuries of chemistry research into decades, addressing urgent energy and environmental challenges.

Researchers led by Prof. Dr. Volker Springel at the Max Planck Institute for Astrophysics created a large-scale simulation of dark matter in a 10 billion light-year cube. By incorporating neutrinos, they modeled galaxy formation to extrapolate universal effects, aiming to better understand the universe's origins and dark matter's properties.

18. Top Energy-Efficient HPC Achievements

The Massachusetts Green High Performance Computing Center (MGHPCC), which achieved LEED Platinum certification for sustainability, houses multiple TOP500 systems for leading research institutions. The facility uses predominantly carbon-free energy sources, including local hydroelectric and solar power, and employs liquid cooling to further reduce its environmental impact.

By migrating their HPC workloads to AWS, Toyota Gazoo Racing Europe achieved a 95% reduction in their HPC carbon footprint while also seeing a 10% increase in processing speed.

Crusoe Energy powers AI, blockchain and cryptocurrency, and HPC research with an industrial waste product, reducing the oil and gas industry's carbon footprint from methane flaring.

Shearwater Geoservices migrated 60% of its global HPC infrastructure from the UK to atNorth's ICE02 data center in Iceland. This move resulted in a 92% reduction in CO2 emissions and 85% cost savings. Despite initial concerns about latency, comprehensive testing and collaboration ensured a successful transition. The migration was driven by rising energy costs in the UK and atNorth's environmentally responsible approach, with ICE02's PUE of 1.2 compared to 1.8 at the previous UK provider.

19. Top Supercomputing Achievement

Cerebras and G42 have built the third cluster of their constellation of nine AI supercomputers, Condor Galaxy 3. Featuring 64 Cerebras CS-3 AI systems, CG-3 delivers eight exaFLOPs of AI compute with 58 million AI-optimized cores, bringing the total of the Condor Galaxy network to 16 exaFLOPs that can fundamentally transform the worldwide inventory of AI compute.

The Aurora supercomputer at Argonne National Laboratory showcases AI's growing impact in supercomputing. With its massive GPU cluster and advanced interconnect, Aurora enables AI-driven research across fields like neuroscience, particle physics, and drug discovery. This system demonstrates how AI can be effectively implemented for high-performance computing, accelerating scientific breakthroughs.

Texascale Days on TACC's Frontera supercomputer gives scientists full access to over 8,000 nodes on the most powerful supercomputer at any U.S. university. Domain science performed during these runs ranged from stellar convection to cosmological and 2D material simulations. Experience gained from Texascale Days helps improve preparations for future AI-based systems.

ORBIT, the ORNL Base Foundation Model for Earth System Predictability, is a 113 billion-parameter AI vision transformer model that leverages the Frontier supercomputer's 49,152 AMD GPUs to achieve 1.6 exaFLOP performance. Using novel parallelism techniques, it sets new benchmarks in AI-driven climate modeling, improving prediction accuracy and enabling more informed decision-making for climate change challenges.

20. Top 5 New Products or Technologies to Watch (Please pick 5)

Lenovo ThinkSystem SC750 V4
Nvidia Blackwell GPU
Open OnDemand
Micron Multiplexed Rank Dual Inline Memory Module (MRDIMM)
Gryf: The First-ever Suitcase-sized Edge AI System
AWS Parallel Computing Service
Hammerspace
Nvidia Spectrum-X
AMD MI300 GPU
DDN Infinia
MetadataHub
Nvidia Quantum-X800
Adaptive AI as a Service
CUDA-Q
BullSequana AI 1200H
Vdura Data Platform

21. Top 5 Vendors to Watch (Please pick 5)

Adaptive Computing
Amazon Web Services (AWS)
AMD
Ansys
Arcitecta
Ayar Labs
BOXX Technologies
Broadcom
CoolIT Systems
Cornelis Networks
DDN
Dell Technologies
Enfabrica
Eviden
Google Cloud
Hammerspace
Lenovo
Micron
NextSilicon
Nvidia
Penguin Solutions
SourceCode
Supermicro
ThinkParQ GmbH
Vdura
Weka

22. Workforce Diversity & Inclusion Leadership Award

KAUST received the 'Champion for Diversity, Equity, and Inclusion in Content' award at the 2023 ContentEd Conference. The university boasts a diverse community from over 120 nationalities, with supercomputing playing a crucial role in research. Notably, 70% of faculty use the Shaheen supercomputer, and 40% of students are women.

AJ Lauer, owner of Thriving Ibis Leadership Solutions, is a prominent expert in HPC workforce development and DEI. With nearly a decade of experience, she has led initiatives at NCAR, SC conferences, and Women in HPC. Her recent work includes research, consulting, podcasts, workshops, and coaching on inclusivity in HPC.

Welcoming the unknowns that come with being early adopters of HPC technology, and taking on the task of bringing up and managing highly complex systems, are hallmarks of the BP HPC team. BP itself has been consistently recognized for its leadership in diversity and inclusion, earning multiple awards and recognition over the last decade as a leader in LGBTQ employee rights.

The female-led Student Training and Engagement Program (STEP) offers year-long internships in research computing, introducing diverse students to HPC resources. Interns work on-site at computing centers, using ACCESS resources. In its second year, the program has already placed graduates in full-time CI operations roles and received record applications, aiming to grow and diversify the field.

The San Diego Supercomputer Center has developed a new training program called COMPrehensive Learning for end-users to Effectively utilize Cyberinfrastructure (COMPLECS), with a goal of recruiting participants from underrepresented groups and domains, to assist domain scientists and applied mathematicians with learning the ins and outs of supercomputing.

STEM-Trek Nonprofit is a charity supporting STEM workforce development through travel, mentoring, and education.
It encourages beneficiaries to pay it forward and bridges the gap between STEM scholars and low-tech jobseekers. The organization hosts pre-conference workshops annually, with ART@SC24 supporting 30 participants from eight countries.

Vdura has intensified efforts to promote diversity and inclusion in HPC through inclusive hiring practices, expanded Employee Resource Groups, mentorship programs, and increased diversity in leadership. These initiatives aim to foster innovation, support employee growth, and reflect the diverse world Vdura serves, recognizing that a diverse workforce is crucial for success in the competitive HPC industry.

The Technology Training for Non-Traditionals (TNTs) program, sponsored by the Texas Advanced Computing Center (TACC), is a year-long initiative to diversify the cyberinfrastructure workforce. It offers mentoring, hands-on experience, and networking opportunities, including participation in the Supercomputing Conference's SCinet. The NSF-supported program provides technical and soft skills training in advanced technologies and career development.

23. Outstanding Leadership in HPC (Please select top choice and add a 2nd choice in the comment field below)

Christian Trott's leadership of the Kokkos performance portability team has provided a critical tool for HPC applications to effectively use Top 10 computing systems.

Winona Snapp-Childs, a research scientist at Indiana University's Biology Department and COO of its Pervasive Technology Institute, leads the ACCESS STEP program. This initiative encourages diverse students to pursue careers in cyberinfrastructure operations. The program has already placed graduates in full-time CI operations roles, showcasing Winona's commitment to fostering inclusion and growth in HPC.

Bryan Johnston's leadership at the Centre for High-Performance Computing has been instrumental in advancing HPC across Africa. He has developed crucial infrastructure, fostered pan-African collaboration, implemented training programs, and promoted HPC awareness. His efforts have positioned Africa as a significant player in the global HPC community.

Sergi Girona has been a driving force behind BSC's MareNostrum systems and a visionary leader in the European HPC community for nearly two decades. He is a sought-after speaker at EuroHPC events, advocating for advancements in both technology and community collaboration.

Thomas Schulthess got the Alps system assembled, fully deployed, and made available to run full-scale Gordon Bell submissions in just eight months.

Elizabeth L'Heureux, a successful engineer and biophysicist at BP, leads a strong HPC team known for embracing cutting-edge technology and managing complex systems. BP has received numerous awards for its leadership in diversity and inclusion, particularly in LGBTQ employee rights, over the past decade.

Frank Würthwein is the Director of the San Diego Supercomputer Center and Executive Director of the Open Science Grid. A UC San Diego physics professor, he focuses on experimental particle physics at the Large Hadron Collider, researching dark matter and supersymmetry while developing global high-throughput computing systems.

Since 2022, David Keyes has been a finalist for the ACM Gordon Bell Prize, utilizing leadership-scale supercomputers such as Fugaku, Frontier, and Shaheen and partnering with industry leaders like Cerebras, Nvidia, and HPE. These achievements were demonstrated across various real scientific applications, including geostatistics, seismic analysis, climate modeling, and biology.

Dr. Kimmo Koski has led Finland's CSC - IT Center for Science for 20 years, transforming it into a leading HPC center with 700 employees and Europe's fastest supercomputer. He has also been instrumental in developing European HPC collaboration, contributing to initiatives like the EuroHPC Joint Undertaking and EUDAT.

24. Thanks for your participation!

PLEASE NOTE: We require a name, organization, and email address in order to help prevent ballot box stuffing. Your responses will remain anonymous.

Full Name
Organization
Email Address