NVIDIA A100 PCIe / GPU computing processor / PCIe 4.0 | 900-21001-0000-000

900-21001-0000-000
11,542.73
12,546.45 (13,735.85 incl. VAT)
Not Available
Product Code: L382370  |  Manufacturer: Nvidia


Description

The NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration at every scale for AI, data analytics, and HPC to tackle the world's toughest computing challenges. As the engine of the NVIDIA data center platform, A100 can efficiently scale up to thousands of GPUs or, using Multi-Instance GPU (MIG) technology, can be partitioned into seven isolated GPU instances to accelerate workloads of all sizes. A100's third-generation Tensor Core technology now accelerates more levels of precision for diverse workloads, speeding time to insight as well as time to market.


Special Features

  • NVIDIA Ampere architecture
  • HBM2
  • Multi-instance GPU (MIG)
  • Structural sparsity

Product features

  • NVIDIA Ampere architecture
    A100 accelerates workloads big and small. Whether using MIG to partition an A100 GPU into smaller instances, or NVLink to connect multiple GPUs to accelerate large-scale workloads, A100 can readily handle different-sized acceleration needs, from the smallest job to the biggest multi-node workload. A100's versatility means IT managers can maximize the utility of every GPU in their data center around the clock.
  • HBM2
    With 40 gigabytes (GB) of high-bandwidth memory (HBM2), A100 delivers improved raw bandwidth of 1.6TB/sec, as well as higher dynamic random-access memory (DRAM) utilization efficiency at 95 percent. A100 delivers 1.7X higher memory bandwidth over the previous generation. (A rough bandwidth-measurement sketch follows this list.)
  • Multi-instance GPU (MIG)
    An A100 GPU can be partitioned into as many as seven GPU instances, fully isolated at the hardware level with their own high-bandwidth memory, cache, and compute cores. MIG gives developers access to breakthrough acceleration for all their applications, and IT administrators can offer right-sized GPU acceleration for every job, optimizing utilization and expanding access to every user and application. (A short MIG-query sketch follows this list.)
  • Structural sparsity
    AI networks are big, with millions to billions of parameters. Not all of these parameters are needed for accurate predictions, and some can be converted to zeros to make the models "sparse" without compromising accuracy. Tensor Cores in A100 can provide up to 2X higher performance for sparse models. While the sparsity feature more readily benefits AI inference, it can also improve the performance of model training. (A small illustration of the sparsity pattern follows this list.)
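
A rough way to sanity-check the memory-bandwidth figure quoted above is to time a large device-to-device copy. This is a minimal sketch, assuming PyTorch with CUDA support is installed (neither is part of this listing); the buffer size and iteration count are arbitrary choices, and the measured number will land somewhat below the 1.6TB/sec peak.

# Estimate effective device-memory bandwidth with a timed device-to-device copy.
# Assumes PyTorch built with CUDA; results vary with clocks, ECC, and buffer size.
import torch

def estimate_bandwidth_gbs(num_floats: int = 256 * 1024 * 1024, iters: int = 20) -> float:
    src = torch.empty(num_floats, dtype=torch.float32, device="cuda")
    dst = torch.empty_like(src)

    # Warm up so allocation and launch overhead stay out of the timed region.
    for _ in range(3):
        dst.copy_(src)
    torch.cuda.synchronize()

    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(iters):
        dst.copy_(src)
    end.record()
    torch.cuda.synchronize()

    seconds = start.elapsed_time(end) / 1000.0  # elapsed_time() reports milliseconds
    # Each copy reads the source buffer and writes the destination buffer once.
    bytes_moved = 2 * src.numel() * src.element_size() * iters
    return bytes_moved / seconds / 1e9

if __name__ == "__main__":
    print(f"Effective bandwidth: {estimate_bandwidth_gbs():.0f} GB/s")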
 
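For MIG, the instances themselves are created administratively (for example with nvidia-smi); the sketch below only queries what already exists. It assumes the NVML Python bindings (the nvidia-ml-py package, imported as pynvml) are installed, which this listing does not mention.

# List MIG instances on GPU 0 via NVML. A sketch only: creating or destroying
# instances is an administrative step performed outside this script.
import pynvml

pynvml.nvmlInit()
try:
    gpu = pynvml.nvmlDeviceGetHandleByIndex(0)
    current, _pending = pynvml.nvmlDeviceGetMigMode(gpu)
    print("MIG enabled:", current == pynvml.NVML_DEVICE_MIG_ENABLE)

    # An A100 exposes at most seven instances; empty slots raise an NVML error.
    for i in range(pynvml.nvmlDeviceGetMaxMigDeviceCount(gpu)):
        try:
            mig = pynvml.nvmlDeviceGetMigDeviceHandleByIndex(gpu, i)
        except pynvml.NVMLError:
            continue
        mem = pynvml.nvmlDeviceGetMemoryInfo(mig)
        print(f"MIG instance {i}: {mem.total / 2**30:.1f} GiB dedicated memory")
finally:
    pynvml.nvmlShutdown()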

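The pattern the A100's sparse Tensor Cores accelerate is fine-grained 2:4 structured sparsity, i.e. at most two non-zero values in every group of four. The NumPy snippet below only illustrates that pruning pattern for intuition; it does not touch the Tensor Cores, and a real workflow would use NVIDIA's tooling to prune and then fine-tune the model.

# Illustrate 2:4 fine-grained sparsity: zero the two smallest-magnitude values
# in every contiguous group of four weights, leaving exactly half non-zero.
import numpy as np

def prune_2_of_4(weights: np.ndarray) -> np.ndarray:
    """Return a copy pruned to the 2:4 pattern. Assumes the element count is a multiple of four."""
    flat = weights.reshape(-1, 4).copy()
    # Column indices of the two smallest |w| in each group of four.
    drop = np.argsort(np.abs(flat), axis=1)[:, :2]
    np.put_along_axis(flat, drop, 0.0, axis=1)
    return flat.reshape(weights.shape)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.standard_normal((4, 8)).astype(np.float32)
    sparse = prune_2_of_4(w)
    print("non-zero fraction:", np.count_nonzero(sparse) / sparse.size)  # 0.5
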
Product Specification

Device Type: GPU computing processor
Bus Type: PCI Express 4.0

Characteristics
Product Code: L382370
EAN Code: -
Manufacturer: Nvidia