
Beijing Plink AI Technology Co., Ltd

  • Beijing, China
  • Active Member


Professional Computing Nvidia Ampere A30 GPU Data Center Solution

  1. MOQ: 1 pc
  2. Price: $490-$520
Payment Terms L/C, D/A, D/P, T/T
Supply Ability 20 pcs
Delivery Time 15-30 work days
Packaging Details 4.4” H x 7.9” L Single Slot
NAME Professional Computing Nvidia Ampere A30 GPU Data Center Solution
Keyword Professional Computing Nvidia Ampere A30 GPU Data Center Solution
Model NVIDIA A30
FP64 5.2 teraFLOPS
FP64 Tensor Core 10.3 teraFLOPS
FP32 10.3 teraFLOPS
GPU memory 24GB HBM2
GPU memory bandwidth 933GB/s
Form factors Dual-slot, full-height, full-length (FHFL)
Max thermal design power (TDP) 165W
FP16 Tensor Core 165 teraFLOPS | 330 teraFLOPS*
INT8 Tensor Core 330 TOPS | 661 TOPS*
Interconnect PCIe Gen4: 64GB/s
Brand Name NVIDIA
Model Number NVIDIA A30
Place of Origin China



Professional Computing Nvidia Ampere A30 GPU Data Center Solution

GPU Data Center Solution for Scientific Computing
NVIDIA A30 Data Center Solution

The Data Center Solution for Modern IT
The NVIDIA Ampere architecture is part of the unified NVIDIA EGX™ platform, incorporating building blocks across hardware, networking, software, libraries, and optimized AI models and applications from the NVIDIA NGC™ catalog. Representing the most powerful end-to-end AI and HPC platform for data centers, it allows researchers to rapidly deliver real-world results and deploy solutions into production at scale.
 

Deep Learning Inference

 
NVIDIA A30 leverages groundbreaking features to optimize inference workloads. It accelerates a full range of precisions, from FP64 to TF32 and INT4. Supporting up to four MIG instances per GPU, the A30 lets multiple networks operate simultaneously in secure hardware partitions with guaranteed quality of service (QoS). Structural sparsity support delivers up to 2X more performance on top of the A30's other inference gains.
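The listing names no software stack, so purely as an illustration of the precision range described above, the sketch below (assuming PyTorch with CUDA; the layer and tensor sizes are placeholders, not details from this listing) enables TF32 for FP32 matrix math and runs an FP16 inference pass, both of which map onto the A30's Tensor Cores.

    # Hedged sketch: exercising TF32 and FP16 on an Ampere-class GPU such as the A30.
    # Assumes PyTorch with a CUDA build; the model below is a stand-in, not a real workload.
    import torch

    model = torch.nn.Linear(1024, 1024).cuda().eval()
    x = torch.randn(32, 1024, device="cuda")

    # Allow TF32 so FP32 matmuls and convolutions use the Ampere Tensor Cores.
    torch.backends.cuda.matmul.allow_tf32 = True
    torch.backends.cudnn.allow_tf32 = True

    # Run the same layer in FP16 via autocast for Tensor Core inference.
    with torch.no_grad(), torch.autocast(device_type="cuda", dtype=torch.float16):
        y = model(x)
    print(y.dtype)  # torch.float16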
 
NVIDIA’s market-leading AI inference performance was demonstrated in MLPerf Inference. Combined with NVIDIA Triton™ Inference Server, which deploys AI at scale with ease, the NVIDIA A30 brings this groundbreaking performance to every enterprise.
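As a rough companion to the Triton mention above, the following sketch sends one HTTP inference request with the tritonclient Python package (pip install tritonclient[http]); the server address, model name, and tensor names are assumptions for illustration, not details from this listing.

    # Hedged sketch: querying a Triton Inference Server that is assumed to be running
    # at localhost:8000 and serving a hypothetical model "resnet50" with input "input__0"
    # and output "output__0".
    import numpy as np
    import tritonclient.http as httpclient

    client = httpclient.InferenceServerClient(url="localhost:8000")

    batch = np.random.rand(1, 3, 224, 224).astype(np.float32)
    inp = httpclient.InferInput("input__0", list(batch.shape), "FP32")
    inp.set_data_from_numpy(batch)

    result = client.infer(model_name="resnet50", inputs=[inp])
    print(result.as_numpy("output__0").shape)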
 
 

 

NVIDIA A30 Data Center Solution

NVIDIA Ampere GPU architecture

24GB HBM2 Memory

Max. Power Consumption: 165W

Interconnect Bus:
PCIe Gen. 4: 64GB/s
3rd generation NVLink: 200GB/s

Thermal Solution: Passive

Multi-instance GPU (MIG):
4 GPU instances @ 6GB each
2 GPU instances @ 12GB each
1 GPU instance @ 24GB

Virtual GPU (vGPU) software support
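
A minimal sketch of carving the card into the four 6GB MIG instances listed above, assuming root privileges, a MIG-capable driver, an idle A30 at GPU index 0, and the standard nvidia-smi tool; the profile names follow NVIDIA's MIG naming and should be confirmed against the installed driver.

    # Hedged sketch: enable MIG mode and create 4 x 1g.6gb instances on GPU 0.
    # Requires root; enabling MIG mode may need a GPU reset and an idle device.
    import subprocess

    def run(cmd):
        print("$", " ".join(cmd))
        subprocess.run(cmd, check=True)

    run(["nvidia-smi", "-i", "0", "-mig", "1"])                 # enable MIG mode on GPU 0
    run(["nvidia-smi", "mig", "-i", "0",
         "-cgi", "1g.6gb,1g.6gb,1g.6gb,1g.6gb", "-C"])          # 4 GPU instances @ 6GB, each with a compute instance
    run(["nvidia-smi", "mig", "-lgi"])                          # list the resulting GPU instances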

 
 

 

NVIDIA A30 Data Center Solution Technical Specifications
 

GPU Architecture: NVIDIA Ampere
FP64: 5.2 teraFLOPS
FP64 Tensor Core: 10.3 teraFLOPS
FP32: 10.3 teraFLOPS
TF32 Tensor Core: 82 teraFLOPS | 165 teraFLOPS*
BFLOAT16 Tensor Core: 165 teraFLOPS | 330 teraFLOPS*
FP16 Tensor Core: 165 teraFLOPS | 330 teraFLOPS*
INT8 Tensor Core: 330 TOPS | 661 TOPS*
INT4 Tensor Core: 661 TOPS | 1321 TOPS*
Media engines: 1 optical flow accelerator (OFA), 1 JPEG decoder (NVJPEG), 4 video decoders (NVDEC)
GPU memory: 24GB HBM2
GPU memory bandwidth: 933GB/s
Interconnect: PCIe Gen4: 64GB/s
Max thermal design power (TDP): 165W
Form factor: Dual-slot, full-height, full-length (FHFL)
Multi-Instance GPU (MIG): 4 GPU instances @ 6GB each, 2 GPU instances @ 12GB each, or 1 GPU instance @ 24GB
Virtual GPU (vGPU) software support: NVIDIA AI Enterprise for VMware, NVIDIA Virtual Compute Server

* With structural sparsity
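
To sanity-check a delivered card against the table above, a short query with the NVML Python bindings (pip install nvidia-ml-py) can read back the device name, memory size, and power limit; this is an illustrative sketch, assuming the A30 is device index 0.

    # Hedged sketch: read back a few of the specifications above via NVML.
    import pynvml

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)

    name = pynvml.nvmlDeviceGetName(handle)                    # expected: "NVIDIA A30"
    mem = pynvml.nvmlDeviceGetMemoryInfo(handle)               # total should be roughly 24GB
    power = pynvml.nvmlDeviceGetPowerManagementLimit(handle)   # milliwatts; ~165000 for a 165W TDP

    print(name)
    print("memory: %.1f GiB" % (mem.total / 1024**3))
    print("power limit: %.0f W" % (power / 1000))

    pynvml.nvmlShutdown()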

Company Details

  • Business Type: Manufacturer, Agent, Importer, Exporter, Trading Company
  • Year Established: 2009
  • Total Annual: 1,500,000-2,000,000
  • Employee Number: 10~30
  • Ecer Certification: Active Member

Beijing Plink-AI is a high-tech company integrating research and development, production, and sales, and a complete-solution supplier of industrial and civil intelligent devices. Founded in 2009, Beijing Plink-AI has always adhered to its core philosophy of ‘creation + efficiency + intelligence’, focusing ...


Get in touch with us

  • Reach Us
  • Beijing Plink AI Technology Co., Ltd
  • C1106, Jinyu Jiahua Building, Shangdi 3rd Street, Haidian District, Beijing 100085, P.R. China
  • https://www.ipcembedded.com/
