Optimizing Google’s Gemma for All NVIDIA AI Platforms


NVIDIA, in collaboration with Google, today launched optimizations across all NVIDIA AI platforms for Gemma, Google's state-of-the-art new lightweight 2 billion- and 7 billion-parameter open language models that can run anywhere, reducing costs and speeding innovative work for domain-specific use cases.

Teams from the two companies worked closely together to accelerate the performance of Gemma, which is built from the same research and technology used to create the Gemini models, with NVIDIA TensorRT-LLM, an open-source library for optimizing large language model inference, when running on NVIDIA GPUs in the data center, in the cloud and on PCs with NVIDIA RTX GPUs.
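For a sense of what that looks like in practice, recent TensorRT-LLM releases expose a high-level Python entry point. A minimal sketch of running Gemma through it might look like the following; the exact API surface varies by release, and the "google/gemma-2b" identifier (the Hugging Face checkpoint name) is an assumption here:

```python
# Minimal sketch: running Gemma with TensorRT-LLM's high-level Python API.
# Assumes a TensorRT-LLM release that ships the `LLM` entry point and an
# NVIDIA GPU; "google/gemma-2b" refers to the Hugging Face checkpoint.
from tensorrt_llm import LLM, SamplingParams

def main():
    # Builds (or loads a cached) TensorRT engine for the model, then
    # runs optimized inference on the local NVIDIA GPU.
    llm = LLM(model="google/gemma-2b")

    params = SamplingParams(max_tokens=64, temperature=0.8)
    outputs = llm.generate(["Why are lightweight LLMs useful?"], params)

    for out in outputs:
        print(out.outputs[0].text)

if __name__ == "__main__":
    main()
```

The same script runs unchanged on a data center H100, a cloud instance or a local RTX PC, which is the point of optimizing one model across all NVIDIA AI platforms.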

This allows developers to target the installed base of over 100 million NVIDIA RTX GPUs available in high-performance AI PCs globally.

Developers can also run Gemma on NVIDIA GPUs in the cloud, including on Google Cloud's A3 instances based on the H100 Tensor Core GPU and, soon, NVIDIA's H200 Tensor Core GPUs, featuring 141GB of HBM3e memory at 4.8 terabytes per second, which Google will deploy this year.

Enterprise developers can additionally take advantage of NVIDIA's rich ecosystem of tools, including NVIDIA AI Enterprise with the NeMo framework and TensorRT-LLM, to fine-tune Gemma and deploy the optimized model in their production applications.

Learn more about how TensorRT-LLM is revving up inference for Gemma, along with additional information for developers. This includes several model checkpoints of Gemma and the FP8-quantized version of the model, all optimized with TensorRT-LLM.
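Where FP8 fits in: on Hopper-class GPUs, TensorRT-LLM can build engines with FP8 weights and activations to cut memory use and raise throughput. A hedged sketch of requesting that at build time through the same Python API follows; the QuantConfig/QuantAlgo names assume the llmapi helpers in recent TensorRT-LLM releases, and the simpler path is to use the prebuilt FP8 Gemma checkpoints NVIDIA publishes:

```python
# Sketch: asking TensorRT-LLM to build an FP8-quantized Gemma engine.
# Assumes the llmapi QuantConfig/QuantAlgo helpers from recent releases
# and a GPU with FP8 support (e.g. H100).
from tensorrt_llm import LLM
from tensorrt_llm.llmapi import QuantConfig, QuantAlgo

llm = LLM(
    model="google/gemma-7b",
    quant_config=QuantConfig(quant_algo=QuantAlgo.FP8),
)
print(llm.generate(["Summarize FP8 quantization in one line."])[0].outputs[0].text)
```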

Experience Gemma 2B and Gemma 7B directly from your browser on the NVIDIA AI Playground.

Gemma Coming to Chat With RTX

Adding support for Gemma soon is Chat with RTX, an NVIDIA tech demo that uses retrieval-augmented generation and TensorRT-LLM software to give users generative AI capabilities on their local, RTX-powered Windows PCs.

Chat with RTX lets users personalize a chatbot with their own data by simply connecting local files on a PC to a large language model.

Since the model runs locally, it provides results fast, and user data stays on the device. Rather than relying on cloud-based LLM services, Chat with RTX lets users process sensitive data on a local PC without the need to share it with a third party or have an internet connection. A sketch of the retrieval pattern it uses appears below.
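Chat with RTX's internal pipeline isn't public, but the retrieval-augmented generation pattern it describes is straightforward: index local files, retrieve the passages most similar to the user's question, and prepend them to the prompt before the LLM answers. A self-contained, dependency-free sketch of that pattern follows; the overlap scoring and prompt template are illustrative assumptions standing in for the learned embeddings a real system would use:

```python
# Illustrative retrieval-augmented generation (RAG) loop over local files.
# A simple bag-of-words overlap score stands in for the learned embeddings
# a real system such as Chat with RTX would use.
import pathlib
import re
from collections import Counter

def tokenize(text: str) -> Counter:
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def score(query: Counter, doc: Counter) -> float:
    # Cosine-like overlap between token-count vectors.
    overlap = sum((query & doc).values())
    norm = (sum(query.values()) * sum(doc.values())) ** 0.5 or 1.0
    return overlap / norm

def retrieve(question: str, folder: str, k: int = 2) -> list[str]:
    q = tokenize(question)
    chunks = []
    for path in pathlib.Path(folder).glob("*.txt"):
        text = path.read_text(encoding="utf-8", errors="ignore")
        # Split files into paragraphs so retrieval stays fine-grained.
        chunks.extend(p for p in text.split("\n\n") if p.strip())
    return sorted(chunks, key=lambda c: score(q, tokenize(c)), reverse=True)[:k]

def build_prompt(question: str, folder: str) -> str:
    context = "\n---\n".join(retrieve(question, folder))
    # Retrieved context is prepended so the local LLM answers from the
    # user's own files rather than from its weights alone.
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

if __name__ == "__main__":
    print(build_prompt("What GPUs support FP8?", "./my_notes"))
```

Because every step here, indexing, retrieval and generation, runs on the local machine, no file contents ever leave the PC, which is the privacy property the demo highlights.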
