Top Five Announcements from NVIDIA at GTC 2019 Today

NVIDIA CEO Jensen Huang's keynote wrapped up a few hours ago and is now available to watch. Here are the top five announcements.

1.) AI hardware is finally affordable. NVIDIA unveiled the Jetson Nano, a USD 99 AI computer.

The small CUDA-X AI computer delivers 472 GFLOPS of compute for AI workloads while consuming only 5 watts. The Jetson Nano is available in two versions: a USD 99 devkit for makers and developers, and a USD 129 production-ready module for companies and organizations interested in producing mass-market edge systems. (Image courtesy of NVIDIA.)
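Those two numbers imply a notable efficiency figure. A quick back-of-the-envelope check, using only the GFLOPS and wattage quoted above:

```python
# Back-of-the-envelope efficiency check for the Jetson Nano,
# using the announced figures: 472 GFLOPS at 5 watts.
compute_gflops = 472   # peak AI compute, as announced
power_watts = 5        # power consumption, as announced

gflops_per_watt = compute_gflops / power_watts
print(f"{gflops_per_watt:.1f} GFLOPS per watt")  # 94.4 GFLOPS per watt
```

Roughly 94 GFLOPS per watt is what makes a fanless, battery-friendly edge device plausible at this price point.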

The devkit ships with support for full desktop Linux and is compatible with a wide range of accessories and peripheral hardware. NVIDIA is also providing "ready-to-use projects" as well as tutorials for the uninitiated. If you get stuck, you can go to the NVIDIA Jetson developer forum for help with technical questions.

Availability: Mostly immediately. The NVIDIA Jetson Nano Developer Kit is available now for $99. However, the Jetson Nano module will start shipping in June. Both will be sold through NVIDIA's main global distributors. If you're an interested maker, you can buy developer kits from Seeed Studio and SparkFun.

2.) NVIDIA is launching a GPU-Accelerated Data Science Workstation.

NVIDIA-powered workstations for data science are built from a reference architecture composed of two high-end NVIDIA Quadro RTX GPUs and NVIDIA CUDA-X AI accelerated data science software, such as RAPIDS, TensorFlow, PyTorch and Caffe. CUDA-X AI is a collection of libraries that enable modern computing applications to leverage NVIDIA's GPU-accelerated computing platform. (Image courtesy of NVIDIA.)
Availability: Immediately. 

NVIDIA-powered systems for data scientists are available immediately from global workstation providers such as Dell, HP and Lenovo as well as regional system builders, including AMAX, APY, Azken Muga, BOXX, CADNetwork, Carri, Colfax, Delta, EXXACT, Microway, Scan, Sysgen and Thinkmate.

3.) NVIDIA unveiled new RTX blade servers.

The latest RTX Server configuration unveiled at the GPU Technology Conference today comprises 1,280 Turing GPUs across 32 RTX blade servers, offering a monumental leap in cloud-rendering density, efficiency and scalability. Each RTX blade server packs 40 GPUs into an 8U space and can be shared by multiple users via NVIDIA GRID vGaming or container software. Mellanox technology serves as the backbone storage and networking interconnect, delivering apps and updates instantly to thousands of concurrent users. (Image courtesy of NVIDIA.)
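The density figures in the announcement are easy to sanity-check. A quick sketch using only the numbers quoted above:

```python
# Sanity-check the RTX Server density figures quoted above.
gpus_per_blade = 40        # Turing GPUs per RTX blade server
blades = 32                # blades in the configuration shown at GTC
rack_units_per_blade = 8   # each blade occupies an 8U space

total_gpus = gpus_per_blade * blades
print(total_gpus)                              # 1280, matching the announced figure
print(gpus_per_blade / rack_units_per_blade)   # 5.0 GPUs per rack unit
```

Five GPUs per rack unit is the density claim behind the "monumental leap" language: a single 8U chassis replaces what previously took multiple conventional GPU servers.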

NVIDIA RTX Servers, which include fully optimized software stacks for OptiX RTX rendering, gaming, VR and AR, and professional visualization applications, can now deliver cinematic-quality, ray-traced graphics for less than the cost of electricity alone for a CPU-based rendering cluster with the same performance.

Availability: Mostly immediately. 2U and 4U RTX Servers are available from NVIDIA's OEM partners today. The new 8U RTX blade server will initially be available from NVIDIA in Q3.

4.) NVIDIA's DRIVE Constellation for Autonomous Vehicles is now available to the public.

DRIVE Constellation is an open platform into which ecosystem partners can integrate their environment models, vehicle models, sensor models and traffic scenarios. By incorporating datasets from the broader simulation ecosystem, the platform can generate comprehensive, diverse and complex testing environments. (Image courtesy of NVIDIA.)
First introduced at GTC last year, DRIVE Constellation is a data center solution composed of two side-by-side servers. One server — DRIVE Constellation Simulator — uses NVIDIA GPUs running DRIVE Sim software to generate the sensor output from a virtual car driving in a virtual world. The other server — DRIVE Constellation Vehicle — contains the DRIVE AGX Pegasus AI car computer, which processes the simulated sensor data.
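Conceptually, the two servers form a closed hardware-in-the-loop cycle: the simulator renders sensor data, the in-the-loop car computer turns it into driving commands, and those commands feed back into the simulation. A minimal sketch of that loop follows; every class and function name here is hypothetical, invented for illustration, and not an NVIDIA API:

```python
# Hypothetical sketch of DRIVE Constellation's closed simulation loop.
# None of these names are real NVIDIA APIs; they only illustrate the
# simulator <-> car-computer handshake described above.

class SimulatorServer:
    """Stands in for DRIVE Constellation Simulator: renders sensor data."""
    def __init__(self):
        self.car_position = 0.0

    def render_sensors(self):
        # Real system: camera/lidar/radar frames generated by DRIVE Sim.
        return {"position": self.car_position,
                "obstacle_ahead": self.car_position > 3.0}

    def apply_controls(self, throttle):
        # Advance the virtual car using the commands fed back to us.
        self.car_position += throttle


class VehicleServer:
    """Stands in for DRIVE Constellation Vehicle (the in-the-loop car computer)."""
    def decide(self, sensor_frame):
        # Real system: the full autonomous-driving software stack.
        return 0.0 if sensor_frame["obstacle_ahead"] else 1.0


def run_loop(steps):
    sim, vehicle = SimulatorServer(), VehicleServer()
    for _ in range(steps):
        frame = sim.render_sensors()      # simulator -> car computer
        throttle = vehicle.decide(frame)  # car computer -> simulator
        sim.apply_controls(throttle)
    return sim.car_position


print(run_loop(10))  # the virtual car stops once its stack detects the obstacle
```

The value of the closed loop is that the driving stack is tested against sensor data it influenced, step by step, rather than against a fixed prerecorded log.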

Availability: Immediately.

5.) Amazon Web Services will be integrating NVIDIA T4 GPUs to power a plethora of AI services.

Amazon Web Services today announced that its new Amazon Elastic Compute Cloud (EC2) G4 instances featuring NVIDIA T4 Tensor Core GPUs will be available in the coming weeks. The new G4 instances will provide AWS customers with a versatile platform to cost-efficiently deploy a wide range of AI services. Through AWS Marketplace, customers will be able to pair the G4 instances with NVIDIA GPU acceleration software, including NVIDIA CUDA-X AI libraries for accelerating deep learning, machine learning and data analytics.

Data science is driving decision-making across computer vision, traditional machine learning and deep learning. Observing real-time flows of data and building predictive models that adapt and react to changes as they happen is becoming more critical each year. To support this, NVIDIA has created CUDA-X, a collection of GPU-acceleration libraries for developers, and CUDA-X AI, the subset of those libraries for accelerating AI frameworks such as PyTorch and TensorFlow.

NVIDIA is marketing the T4 as a universal GPU for enterprise AI and graphics workflows. The T4 is coming to servers from major OEMs, including Dell, HP, Cisco, Lenovo and Fujitsu; NVIDIA worked with each company to test and optimize its server stacks. (Image courtesy of NVIDIA.)

Availability: Not immediately, but in a few weeks.