RAPIDS 24.12 Introduces cuDF on PyPI, CUDA Unified Memory for Polars, and Faster GNNs

Summary: The latest release of RAPIDS, version 24.12, brings significant enhancements to data processing capabilities. Key highlights include the introduction of cuDF packages on PyPI, improved performance for groupby aggregations and AWS S3 file reading, and the integration of CUDA Unified Memory for the Polars GPU engine. This update enables larger-than-GPU-memory queries and faster graph neural network (GNN) training on real-world graphs....
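With cuDF wheels now on PyPI, installation can be sketched as below. This is a minimal sketch, assuming a CUDA 12 driver; the package names (`cudf-cu12`, the `gpu` extra for Polars) and index URL are the ones documented for recent RAPIDS releases and may differ for your CUDA version.

```shell
# Install the CUDA 12 build of cuDF directly from PyPI (assumed package name: cudf-cu12).
pip install cudf-cu12

# The Polars GPU engine ships as an optional extra; queries then run on the GPU via
# .collect(engine="gpu"), and CUDA Unified Memory lets them spill past GPU memory.
pip install "polars[gpu]" --extra-index-url=https://pypi.nvidia.com
```

Check the RAPIDS installation guide for the exact package name and index matching your CUDA version.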

May 22, 2023 · Emmy Wolf

QHack Results Highlight Quantum Computing Applications and Tools on GPUs

Summary: The QHack 2023 event, the world’s largest quantum machine learning hackathon, brought together 2,850 participants from 105 countries to develop innovative quantum computing applications using NVIDIA’s quantum technology. This article explores the key outcomes and projects from the event, highlighting the potential of quantum computing and the role of NVIDIA’s CUDA-Q platform in accelerating quantum-classical computing applications. Quantum computing has the potential to revolutionize various fields, from chemistry to finance, by solving complex problems that are beyond the capabilities of classical computers....

May 18, 2023 · Tony Redgrave

Accelerating into AI with VDI

Summary: This article explores how Virtual Desktop Infrastructure (VDI) can be leveraged to accelerate AI development. It highlights the key elements needed to build an AI virtual workstation from a data center, including Tensor Core GPUs, NVIDIA vGPU software, and AI vWS Toolkits. It also discusses the benefits of using VDI for AI development, such as cost-effective implementation, flexibility and manageability, data security, and a smooth transition to production....

May 17, 2023 · Pablo Escobar

Accelerating Medical Image Segmentation with Tensor Cores and TensorFlow 2

Summary: Medical image segmentation is a critical process in healthcare that partitions patient scan images into regions based on different properties. This process is essential for diagnosis, treatment planning, and understanding individual anatomy. With advances in deep learning, medical image segmentation can be accelerated using NVIDIA Tensor Cores and TensorFlow 2, providing high accuracy and performance. This article explores how these technologies enhance medical image segmentation, focusing on UNet, a convolutional neural network designed for biomedical image segmentation....
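Segmentation output from a network like UNet is typically scored with an overlap metric; the Dice coefficient is the standard choice in medical imaging. A minimal pure-Python sketch (the metric is illustrative, not quoted from the article):

```python
def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A∩B| / (|A| + |B|) over binary masks given as flat 0/1 lists.

    eps keeps the ratio defined (and equal to 1.0) when both masks are empty.
    """
    intersection = sum(p and t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return (2.0 * intersection + eps) / (total + eps)

# Toy 2x2 masks, flattened: the prediction overlaps the target in one pixel
# but also labels one extra pixel, so Dice = 2*1 / (2+1) ≈ 0.667.
pred = [1, 1, 0, 0]
target = [1, 0, 0, 0]
print(round(dice_coefficient(pred, target), 3))  # 0.667
```

In practice the same formula is applied per-class to the model’s thresholded output masks.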

May 15, 2023 · Tony Redgrave

AI Helps Classify Lung Cancer at Pathologist Level

Summary: Lung cancer is one of the most common and deadly forms of cancer worldwide. Early detection and accurate diagnosis are crucial for effective treatment and improved patient outcomes. However, distinguishing between different types of lung cancer can be challenging, even for experienced pathologists. This is where artificial intelligence (AI) comes into play, offering a promising solution for improving lung cancer diagnosis. Lung cancer is typically diagnosed through the examination of tissue samples under a microscope....

May 15, 2023 · Tony Redgrave

SC20 Demos: New Nsight Systems and Nsight Compute Demos

Summary: NVIDIA Nsight Compute and Nsight Systems are powerful tools designed to help developers analyze and optimize CUDA applications. This article explores the latest features and capabilities of these tools, including advanced profiling techniques, performance analysis methods, and optimization strategies. By leveraging Nsight Compute and Nsight Systems, developers can unlock the full potential of their CUDA applications and achieve significant performance gains....
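The usual workflow pairs the two tools: a system-wide timeline first, then per-kernel detail. A sketch with an assumed application binary `./my_cuda_app` (the flags shown are the common ones; see each tool’s `--help` for the full set):

```shell
# Nsight Systems: capture a timeline of CPU threads, GPU kernels, and memory
# copies to find where the application actually spends its time.
nsys profile -o timeline ./my_cuda_app

# Nsight Compute: re-run and collect detailed hardware counters for individual
# kernels identified in the timeline (--set full gathers all metric sections).
ncu -o kernel_report --set full ./my_cuda_app
```

The resulting `timeline` and `kernel_report` files open in the tools’ respective GUIs for analysis.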

May 15, 2023 · Carl Corey

Unlocking New Opportunities with AI Cloud Infrastructure for 5G vRAN

Summary: The convergence of AI, cloud computing, and 5G technology is transforming the telecommunications industry. By leveraging AI cloud infrastructure for 5G virtual Radio Access Networks (vRAN), telecom operators can improve operational efficiency, reduce costs, and unlock new revenue streams. This article explores the key concepts and benefits of integrating AI cloud infrastructure with 5G vRAN. Traditional 5G vRAN deployments often leave dedicated hardware and infrastructure underutilized....

May 12, 2023 · Tony Redgrave

RTX Backstage: 10 Questions for Nixxes About the Awesome PC Tech in Shadow of the Tomb Raider

Summary: Shadow of the Tomb Raider is a game that has pushed the boundaries of PC gaming technology, particularly with its use of NVIDIA’s RTX features. In this article, we delve into how Nixxes, the studio behind the game’s PC version, worked with NVIDIA to integrate these cutting-edge technologies. From real-time ray tracing to DLSS (Deep Learning Super Sampling), we explore how these features enhance the gaming experience and what they mean for the future of game development....

May 1, 2023 · Emmy Wolf

Making GPU I/O Scream on Platforms of Today and Tomorrow

Summary: In high-performance computing, the ability to move data efficiently between storage and GPU memory is crucial. NVIDIA’s Magnum IO GPUDirect Storage (GDS) technology optimizes this process by enabling direct data transfers between storage and GPU memory, bypassing the bounce buffer in CPU system memory. This article explores how GDS works, its performance optimizations, and how it can be used to boost application performance....
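Before relying on GDS, it is worth confirming that the platform actually supports direct transfers; the GDS package ships a checker tool for this. A sketch assuming the default CUDA install location:

```shell
# gdscheck -p reports whether GPUDirect Storage is supported and properly
# configured on this machine (driver, filesystem, and NVMe/NIC support).
/usr/local/cuda/gds/tools/gdscheck -p
```

If the platform lacks support, GDS transparently falls back to a compatibility mode that stages transfers through CPU memory.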

May 1, 2023 · Pablo Escobar

High-Performance Storage on NVIDIA DGX Cloud with Oracle Cloud Infrastructure

Summary: NVIDIA and Oracle have partnered to offer high-performance storage on NVIDIA DGX Cloud using Oracle Cloud Infrastructure (OCI). This collaboration enables enterprises to access scalable, durable, and secure storage services for demanding AI and machine learning workloads. The solution leverages NVIDIA’s NVMesh software and Oracle’s bare-metal infrastructure to provide high-performance storage that matches on-premises environments. NVIDIA DGX Cloud is a multi-node AI-training-as-a-service that allows enterprises to train AI models without the need for procuring and installing a supercomputer....

April 18, 2023 · Emmy Wolf