Payment Terms | L/C, D/A, D/P, T/T |
Supply Ability | 20pcs |
Delivery Time | 15-30 work days |
Packaging Details | 4.4” H x 7.9” L Single Slot |
NAME | Professional Computing Nvidia Ampere A30 GPU Data Center Solution |
Keyword | Professional Computing Nvidia Ampere A30 GPU Data Center Solution |
Model | NVIDIA A30 |
FP64 | 5.2 teraFLOPS |
FP64 Tensor Core | 10.3 teraFLOPS |
FP32 | 10.3 teraFLOPS |
GPU memory | 24GB HBM2 |
GPU memory bandwidth | 933GB/s |
Form factors | Dual-slot, full-height, full-length (FHFL) |
Max thermal design power (TDP) | 165W |
FP16 Tensor Core | 165 teraFLOPS (330 teraFLOPS with structural sparsity) |
INT8 Tensor Core | 330 TOPS (661 TOPS with structural sparsity) |
Interconnect | PCIe Gen4: 64GB/s |
Brand Name | NVIDIA |
Model Number | NVIDIA A30 |
Place of Origin | China |
Explore similar products
Leadtek WinFast GS2040T Nvidia GPU Server Dual Port 10GbE
1 VGA 3d Rendering Server Leadtek WinFast GS2040T
48G GDDR6 3xDP 384 Bit Nvidia GPU Server Tesla A40 For Computing Graphics Card
Datacenter GDDR6 Nvidia Tesla T4 16GB Scientific Card Deep Learning Edge
Product Specification
Highlight | Computing NVIDIA Ampere A30, NVIDIA Ampere A30 GPU, Data Center NVIDIA A30 GPU |
Professional Computing Nvidia Ampere A30 GPU Data Center Solution
GPU Data Center Solution For Scientific Computing
NVIDIA A30 Data Center Solution
The Data Center Solution for Modern IT
The NVIDIA Ampere architecture is part of the unified NVIDIA EGX™ platform, incorporating building blocks across hardware, networking, software, libraries, and optimized AI models and applications from the NVIDIA NGC™ catalog. Representing the most powerful end-to-end AI and HPC platform for data centers, it allows researchers to rapidly deliver real-world results and deploy solutions into production at scale.
The NVIDIA A30 leverages groundbreaking features to optimize inference workloads. It accelerates a full range of precisions, from FP64 to TF32 and INT4. Supporting up to four MIG instances per GPU, the A30 lets multiple networks operate simultaneously in secure hardware partitions with guaranteed quality of service (QoS). Structural sparsity support delivers up to 2X more performance on top of the A30's other inference performance gains.
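As a concrete illustration of the reduced-precision paths mentioned above, here is a minimal PyTorch sketch; the ResNet-50 model, batch size, and input shape are placeholder assumptions for illustration, not details from this listing.

```python
# Minimal sketch, assuming PyTorch + torchvision on a CUDA-capable GPU such as the A30.
# Model, batch size, and input shape are illustrative placeholders.
import torch
import torchvision.models as models

torch.backends.cuda.matmul.allow_tf32 = True   # route FP32 matmuls through TF32 Tensor Cores
torch.backends.cudnn.allow_tf32 = True         # same for cuDNN convolutions

model = models.resnet50(weights=None).cuda().eval()
x = torch.randn(8, 3, 224, 224, device="cuda")

with torch.inference_mode(), torch.autocast(device_type="cuda", dtype=torch.float16):
    y = model(x)                               # FP16 Tensor Core inference path
print(y.shape)                                 # torch.Size([8, 1000])
```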
NVIDIA’s market-leading AI performance was demonstrated in MLPerf Inference. Combined with NVIDIA Triton™ Inference Server, which deploys AI at scale with ease, the NVIDIA A30 brings this groundbreaking performance to every enterprise.
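To give a sense of how a model served by Triton is queried, the sketch below uses the tritonclient HTTP API; the server address, the model name "resnet50", and the tensor names "input"/"output" are assumptions for illustration, not details from this listing.

```python
# Hedged sketch: assumes a Triton Inference Server at localhost:8000 serving a
# model named "resnet50" with an FP32 input tensor "input" and an output "output".
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

batch = np.random.rand(1, 3, 224, 224).astype(np.float32)
infer_input = httpclient.InferInput("input", list(batch.shape), "FP32")
infer_input.set_data_from_numpy(batch)

result = client.infer(model_name="resnet50", inputs=[infer_input])
print(result.as_numpy("output").shape)
```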
NVIDIA A30 Data Center Solution
NVIDIA Ampere GPU architecture
24GB HBM2 Memory
Max. Power Consumption: 165W
Interconnect Bus:
PCIe Gen. 4: 64GB/s
Third-generation NVLink: 200GB/s
Thermal Solution: Passive
Multi-instance GPU (MIG):
4 GPU instances @ 6GB each
2 GPU instances @ 12GB each
1 GPU instance @ 24GB
Virtual GPU (vGPU) software support
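The MIG layouts listed above can be inspected at runtime. Below is a minimal sketch, assuming the nvidia-ml-py (pynvml) package and an A30 with MIG mode already enabled on GPU 0; it lists the populated MIG instances and their memory.

```python
# Minimal sketch, assuming nvidia-ml-py is installed and MIG mode is enabled
# on GPU 0; prints each populated MIG instance and its total memory, which
# should match one of the 4 x 6GB / 2 x 12GB / 1 x 24GB layouts above.
import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)
max_migs = pynvml.nvmlDeviceGetMaxMigDeviceCount(gpu)

for i in range(max_migs):
    try:
        mig = pynvml.nvmlDeviceGetMigDeviceHandleByIndex(gpu, i)
    except pynvml.NVMLError:
        continue                                  # this MIG slot is not populated
    mem = pynvml.nvmlDeviceGetMemoryInfo(mig)
    print(f"MIG instance {i}: {mem.total / 1024**3:.1f} GB total memory")

pynvml.nvmlShutdown()
```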
NVIDIA A30 Data Center Solution Technical Specifications
GPU Architecture | NVIDIA Ampere |
FP64 | 5.2 teraFLOPS |
FP64 Tensor Core | 10.3 teraFLOPS |
FP32 | 10.3 teraFLOPS |
TF32 Tensor Core | 82 teraFLOPS | 165 teraFLOPS* |
BFLOAT16 Tensor Core | 165 teraFLOPS | 330 teraFLOPS* |
FP16 Tensor Core | 165 teraFLOPS | 330 teraFLOPS* |
INT8 Tensor Core | 330 TOPS | 661 TOPS* |
INT4 Tensor Core | 661 TOPS | 1321 TOPS* |
Media engines | 1 optical flow accelerator (OFA) |
GPU memory | 24GB HBM2 |
GPU Memory Bandwidth | 933 GB/s |
Interconnect | PCIe Gen4: 64GB/s |
Max thermal design power (TDP) | 165W |
Form Factor | Dual-slot, full-height, full-length (FHFL) |
Multi-Instance GPU (MIG) | 4 GPU instances @ 6GB each, 2 GPU instances @ 12GB each, or 1 GPU instance @ 24GB |
Virtual GPU (vGPU) software support | NVIDIA AI Enterprise for VMware |
* With structural sparsity
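As a quick back-of-the-envelope reading of the table above, the ratio of the dense FP16 Tensor Core rate to the HBM2 bandwidth gives the arithmetic intensity a kernel needs before it becomes compute-bound rather than memory-bound on this card; this is a simplified roofline estimate, not a figure from the listing.

```python
# Simplified roofline estimate from the table's peak numbers (dense FP16
# Tensor Core rate and HBM2 bandwidth); real kernels will differ.
PEAK_FP16_TC_FLOPS = 165e12   # 165 teraFLOPS, dense
HBM2_BANDWIDTH_BPS = 933e9    # 933 GB/s

ridge_point = PEAK_FP16_TC_FLOPS / HBM2_BANDWIDTH_BPS
print(f"Compute-bound above roughly {ridge_point:.0f} FLOPs per byte moved")  # ~177
```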
Company Details
Business Type:
Manufacturer, Agent, Importer, Exporter, Trading Company
Year Established:
2009
Total Annual Revenue:
1,500,000 - 2,000,000
Employee Number:
10~30
Ecer Certification:
Active Member
Beijing Plink-AI is a high-tech company integrating research and development, production, and sales, and a complete solution supplier of industrial and civil intelligent devices. Founded in 2009, Beijing Plink-AI has always adhered to its core philosophy of ‘creation + efficiency + intelligence’, focusing ...