The Importance of Expedited Data Handling for AI Advancement in Every Sector




Across industries, AI is transforming how work gets done through machine-driven computation. In finance, banks employ AI to swiftly identify fraudulent activity and safeguard accounts, while telecommunications companies enhance their networks to deliver top-quality service. Scientists are crafting innovative treatments for rare illnesses, utilities are building cleaner, more reliable energy infrastructure, and automotive manufacturers are making self-driving vehicles safer and more accessible.

The foundation of every prominent AI application is data. Efficient, accurate AI models must be trained on vast datasets, so enterprises aspiring to harness AI's potential must first establish a data pipeline that extracts data from disparate sources, standardizes it, and stores it efficiently.

Data scientists refine datasets through numerous trial runs to tune AI models for peak performance in real-world settings. Applications such as voice assistants and personalized recommendation systems must process large data volumes quickly to function in real time.

As AI models grow in complexity and begin to handle a variety of data formats, including text, audio, images, and video, the need for rapid data processing becomes even more critical. Organizations that stick with outdated CPU-based computing face hindered innovation and performance: data bottlenecks, rising data center costs, and insufficient compute capacity.

Many enterprises are turning to accelerated computing to embed AI in their operations. This approach harnesses GPUs, specialized hardware and software, and parallel computing techniques to boost computing performance by as much as 150x and improve energy efficiency by as much as 42x.

Prominent companies spanning multiple industries are leveraging accelerated data processing to spearhead revolutionary AI undertakings.

Finance Firms Detect Fraud Instantaneously

Financial institutions face a substantial hurdle in identifying fraud patterns: the sheer volume of transactional data that demands rapid analysis. Compounding the problem, labeled data for verified instances of fraud is scarce, which makes training AI models difficult. Traditional data processing pipelines lack the acceleration required to handle the data volumes involved in fraud detection, leading to slow processing times that hamper real-time analysis and fraud identification.

To tackle these obstacles, American Express, handling over 8 billion transactions annually, employs accelerated computing to train and deploy Long Short-Term Memory (LSTM) models. These models excel at sequential analysis and anomaly detection, adapting and learning from fresh data, making them ideal for combating fraud.

By leveraging parallel computing approaches on GPUs, American Express significantly accelerates the training of its LSTM models. GPUs also empower live models to handle extensive transactional data for high-performance computations, enabling real-time fraud detection.
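To make the approach concrete, here is a minimal PyTorch sketch of an LSTM sequence scorer of the kind described, moved onto a GPU when one is available. The feature count, architecture, and data are illustrative assumptions, not American Express's production model.

```python
import torch
import torch.nn as nn

class FraudLSTM(nn.Module):
    """Toy sequence scorer in the spirit of the LSTM approach described above.
    Feature count and layer sizes are illustrative assumptions."""

    def __init__(self, n_features: int = 32, hidden: int = 128):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # fraud-probability logit

    def forward(self, x):  # x: (batch, seq_len, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])  # score the latest transaction given its history

device = "cuda" if torch.cuda.is_available() else "cpu"
model = FraudLSTM().to(device)  # GPU parallelism is what makes training at this scale tractable
batch = torch.randn(256, 50, 32, device=device)  # 256 synthetic sequences of 50 transactions
scores = torch.sigmoid(model(batch))  # per-sequence fraud probabilities
```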

The system operates with a latency of two milliseconds to bolster customer and merchant protection, delivering a 50x enhancement compared to CPU-based setups. By amalgamating the accelerated LSTM deep neural network with its prevailing methods, American Express has boosted fraud detection precision by up to 6% in specific segments.

Financial institutions can also use accelerated computing to reduce data processing costs. PayPal, which runs data-intensive Spark 3 workloads on NVIDIA GPUs, has confirmed the potential to cut cloud costs by up to 70% for big data processing and AI applications.

By processing data more efficiently, financial organizations can spot fraud as it happens, enabling rapid decision-making without disrupting transaction flows while minimizing financial risk.

Telecom Firms Simplify Intricate Routing Operations

Telecommunication providers amass vast data from diverse sources, spanning network devices, client engagements, billing systems, and network performance and maintenance data.

Managing national networks handling hundreds of petabytes daily necessitates advanced technician routing to ensure service provision. To optimize technician dispatch, advanced routing engines execute myriad computations, considering factors like weather, technician competencies, customer requests, and fleet distribution. Success in these operations hinges on meticulous data preparation and ample computing capacity.

Deploying one of the nation’s largest field dispatch teams for customer service, AT&T enhances data-rich routing operations with NVIDIA cuOpt, which relies on heuristics, metaheuristics, and optimizations to solve complex vehicle routing problems.

In initial trials, cuOpt produced routing solutions within 10 seconds, slashing cloud expenses by 90% and enabling technicians to complete more service calls daily. NVIDIA RAPIDS, a suite of software libraries accelerating data science and analytics pipelines, further boosts cuOpt, facilitating the incorporation of local search heuristics and metaheuristics such as Tabu search for continuous route optimization.
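cuOpt's own APIs aren't reproduced here; as a plain-Python illustration of the local-search moves such engines build on, the sketch below applies a simple 2-opt improvement pass to a single route over a made-up distance matrix. Production engines evaluate moves like this massively in parallel on GPUs and guide them with metaheuristics such as Tabu search.

```python
def route_cost(route, dist):
    """Total travel cost of visiting stops in the given order."""
    return sum(dist[route[i]][route[i + 1]] for i in range(len(route) - 1))

def two_opt(route, dist):
    """Minimal 2-opt local search: reverse the segment between two stops
    whenever doing so shortens the route, until no improvement remains."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(route) - 2):
            for j in range(i + 1, len(route) - 1):
                candidate = route[:i] + route[i:j + 1][::-1] + route[j + 1:]
                if route_cost(candidate, dist) < route_cost(route, dist):
                    route, improved = candidate, True
    return route

# Tiny made-up example: 4 stops with symmetric distances.
dist = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 8], [10, 4, 8, 0]]
print(two_opt([0, 2, 1, 3], dist))  # -> [0, 1, 2, 3], cost 16 instead of 19
```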

AT&T has also adopted the NVIDIA RAPIDS Accelerator for Apache Spark to boost the efficiency of its Spark-based AI and data pipelines. This has helped the company improve operations spanning AI model training, network maintenance, customer retention, and fraud prevention. With RAPIDS Accelerator, AT&T is reducing its cloud computing spend for specific workloads while increasing performance and shrinking its carbon footprint.
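As a minimal sketch of how a Spark job is typically pointed at the RAPIDS Accelerator, the configuration below enables the plugin on a GPU-equipped cluster. The application details are illustrative, and cluster-specific setup (deploying the rapids-4-spark jar, GPU discovery scripts) is omitted.

```python
from pyspark.sql import SparkSession

# Assumes the rapids-4-spark jar is on the executors' classpath and the
# cluster exposes GPUs to Spark; values shown are illustrative defaults.
spark = (
    SparkSession.builder
    .appName("gpu-accelerated-etl")
    .config("spark.plugins", "com.nvidia.spark.SQLPlugin")  # enables the RAPIDS Accelerator
    .config("spark.rapids.sql.enabled", "true")             # run supported SQL ops on GPU
    .config("spark.executor.resource.gpu.amount", "1")      # one GPU per executor
    .getOrCreate()
)

# Existing DataFrame code needs no changes: supported operators are
# transparently executed on the GPU, which is the appeal of the plugin.
df = spark.read.parquet("transactions.parquet")
df.groupBy("account_id").sum("amount").show()
```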

Swift data pipelines and processing will be essential as telecom firms aim to enhance operational efficiency while furnishing top-tier service quality.

Medical Researchers Condense Drug Discovery Timelines

With researchers using technology to study the roughly 25,000 genes in the human genome and their connections to disease, an enormous body of medical data and research papers has emerged. Biomedical researchers rely on these papers to narrow the focus of their search for new treatments, but reviewing such an extensive and ever-growing corpus has become a herculean task.

AstraZeneca, a prominent pharmaceutical firm, devised a Biological Insights Knowledge Graph (BIKG) to aid scientists across the drug discovery journey, from literature reviews to hit screening, target identification, and more. This graph amalgamates public and internal databases with data from scientific literature, modeling between 10 million and 1 billion intricate biological relationships.

BIKG has been instrumental in gene ranking, helping scientists hypothesize promising targets for novel disease treatments. At the NVIDIA GTC event, the AstraZeneca team showcased a successful project identifying genes associated with treatment resistance in lung cancer.

To streamline the identification of candidate genes, data scientists and biological researchers collaborated to define the criteria and characteristics of genes worth targeting in treatment development. They then trained a machine learning algorithm to search the BIKG databases for genes with those features that the literature documents as treatable. Leveraging NVIDIA RAPIDS for faster computation, the team narrowed the initial pool of 3,000 genes down to just 40 target genes, a task that once took months but now finishes in seconds.
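At its core, this RAPIDS step is GPU-accelerated dataframe filtering and ranking. Below is a minimal sketch using cuDF, whose API mirrors pandas; the file name, column names, and thresholds are invented stand-ins for AstraZeneca's actual criteria.

```python
import cudf

# Hypothetical per-gene feature table; the columns are illustrative
# stand-ins for the "treatable gene" criteria agreed on by the two teams.
genes = cudf.read_parquet("bikg_gene_features.parquet")

# Filter and rank entirely on the GPU; with cuDF this pandas-like code
# scans thousands of candidate genes in well under a second.
candidates = genes[
    (genes.druggability_score > 0.8)
    & (genes.literature_support >= 5)
    & (genes.resistance_association > 0.7)
]
top40 = candidates.sort_values("composite_score", ascending=False).head(40)
print(top40[["gene_symbol", "composite_score"]])
```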

By integrating accelerated computing and AI into drug development, pharmaceutical firms and researchers can now harness the vast data resources accumulating in the medical realm to expedite the development of innovative drugs efficiently and securely, thereby making a life-saving impact.

Electricity Providers Pave the Way for Sustainable Energy Sources

There has been a substantial drive toward carbon-neutral energy sources within the energy industry. With the cost of harnessing renewables such as solar energy plummeting over the past decade, there is a golden opportunity to make real progress toward a future powered by clean energy.

Nevertheless, the shift to clean energy from wind farms, solar farms, and household batteries has introduced new complexity into grid management. As energy infrastructure diversifies and grids must accommodate bidirectional power flows, grid management has become more data-centric. Modern smart grids are now required to manage high-voltage sections for electric vehicle charging, monitor the availability of distributed energy storage, and adapt to usage fluctuations across the network.

Utilidata, a distinguished grid-edge software firm, has partnered with NVIDIA to craft a distributed AI platform named Karman for the grid edge, utilizing a tailor-made NVIDIA Jetson Orin edge AI module. This custom module and platform, embedded within electricity meters, convert each meter into a data hub and control unit, capable of managing thousands of data points per second.

Karman handles real-time, high-resolution data from meters positioned at the network’s periphery, empowering electricity providers to garner detailed insights into grid conditions, forecast usage, and seamlessly incorporate distributed energy resources in seconds rather than minutes or hours. Furthermore, with inference models on the edge devices, network operators can foresee and promptly identify line disruptions to anticipate potential outages and undertake preventive maintenance to enhance grid reliability.
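Karman's internals are not public; as a generic, much-simplified stand-in for edge-side event detection on a meter's data stream, here is a rolling z-score detector in plain Python. The window size, threshold, and simulated readings are arbitrary assumptions; a platform like Karman would run learned models instead.

```python
from collections import deque
import math

class RollingAnomalyDetector:
    """Flag a reading that deviates strongly from the recent window.
    Window size and threshold are arbitrary illustrative choices."""

    def __init__(self, window: int = 60, threshold: float = 4.0):
        self.readings = deque(maxlen=window)
        self.threshold = threshold

    def update(self, value: float) -> bool:
        """Ingest one reading; return True if it looks anomalous."""
        flagged = False
        if len(self.readings) == self.readings.maxlen:
            mean = sum(self.readings) / len(self.readings)
            var = sum((r - mean) ** 2 for r in self.readings) / len(self.readings)
            std = math.sqrt(var)
            flagged = std > 0 and abs(value - mean) / std > self.threshold
        self.readings.append(value)
        return flagged

detector = RollingAnomalyDetector()
for voltage in [239.8, 240.1, 240.0, 239.9] * 20 + [251.3]:  # simulated meter feed
    if detector.update(voltage):
        print(f"possible line disturbance: {voltage} V")
```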

By fusing AI with accelerated data analysis, Karman helps electricity providers turn existing infrastructure into efficient smart grids. Tailored, localized electricity distribution can then meet fluctuating demand patterns without extensive physical infrastructure upgrades, enabling more economical grid modernization.

Automobile Manufacturers Facilitate Safer, More Accessible Autonomous Vehicles

As automotive enterprises strive for complete self-driving capabilities, vehicles must possess the ability to detect objects and navigate instantly. This necessitates swift data processing operations, including feeding real-time data from cameras, lidar, radar, and GPS into AI models that make navigation decisions to ensure road safety.

The workflow for autonomous driving is intricate, involving multiple AI models alongside requisite preprocessing and postprocessing steps. Traditionally, these steps were conducted on the client side using CPUs. However, this could lead to substantial bottlenecks in processing speeds, which is an unacceptable drawback for an application where rapid processing ensures safety.

To improve the efficiency of its autonomous driving workflows, electric vehicle maker NIO integrated NVIDIA Triton Inference Server into its inference pipeline. NVIDIA Triton is open-source, multi-framework inference-serving software. By centralizing data processing tasks, NIO cut latency by up to 6x in key areas and boosted overall data throughput by up to 5x.
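For a sense of how an application talks to a Triton deployment, here is a minimal sketch using Triton's Python HTTP client. The model name, tensor names, and shapes are placeholders that must match the server's model configuration; this is not NIO's actual pipeline.

```python
import numpy as np
import tritonclient.http as httpclient

# Model and tensor names below are placeholders; they must match the
# deployed model's config.pbtxt on the Triton server.
client = httpclient.InferenceServerClient(url="localhost:8000")

image_batch = np.random.rand(1, 3, 544, 960).astype(np.float32)  # stand-in camera frame
inp = httpclient.InferInput("input", list(image_batch.shape), "FP32")
inp.set_data_from_numpy(image_batch)
out = httpclient.InferRequestedOutput("detections")

result = client.infer(model_name="object_detector", inputs=[inp], outputs=[out])
# Keeping pre/postprocessing server-side avoids CPU round-trips per model.
print(result.as_numpy("detections").shape)
```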

NIO's reliance on GPUs also made it simpler to update and deploy new AI models without modifying anything on the vehicles themselves. Furthermore, the company can run multiple AI models concurrently on the same set of images without shuttling data back and forth over a network, saving on data transfer costs and improving performance.

By employing accelerated data processing, developers of autonomous vehicle software can attain the high performance standard needed to avoid traffic accidents, cut transportation costs, and enhance mobility for users.

Retailers Enhance Demand Prediction Capabilities

In the fast-evolving retail domain, the capacity to swiftly process and analyze data plays a crucial role in adapting inventory levels, tailoring customer interactions, and optimizing pricing strategies on the go. The larger the retailer and the more diverse its product range, the more convoluted and compute-intensive its data operations become.

Walmart, the world’s largest retailer, turned to accelerated computing to dramatically boost the accuracy of forecasting for 500 million item-by-store combinations across 4,500 stores.

As Walmart’s data science unit developed more powerful machine learning algorithms to tackle this substantial forecasting challenge, the existing computing framework started to stumble, with tasks failing to finish or yielding inaccurate outcomes. The company observed that data scientists had to strip features from algorithms just to ensure they reached completion.

To enhance its forecasting operations, Walmart began employing NVIDIA GPUs and RAPIDS. The company now runs a forecasting model with 350 data features to predict sales across all product categories. These features encompass sales data, promotional events, and external factors such as weather conditions and major events like the Super Bowl that influence demand.
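Walmart's model code is not public; as a hedged sketch of what GPU-accelerated training on a wide tabular feature set can look like, here is XGBoost using its CUDA backend. The synthetic data, feature count, and hyperparameters are illustrative only.

```python
import numpy as np
import xgboost as xgb

# Illustrative stand-in for a wide tabular feature matrix: rows are
# item-by-store observations, columns are features such as past sales,
# promotions, weather, and event flags (350 in Walmart's case).
rng = np.random.default_rng(0)
X = rng.random((100_000, 350)).astype(np.float32)
y = rng.random(100_000).astype(np.float32)  # synthetic units-sold target

model = xgb.XGBRegressor(
    n_estimators=500,
    max_depth=8,
    tree_method="hist",
    device="cuda",  # train on the GPU (XGBoost >= 2.0);
                    # older versions use tree_method="gpu_hist" instead
)
model.fit(X, y)
forecast = model.predict(X[:5])  # per-row demand predictions
```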

The deployment of advanced models enabled Walmart to elevate forecast accuracy from 94% to 97%, while averting an estimated $100 million in perishable produce wastage and reducing stockouts and clearance events. GPUs also ran models 100 times faster, with tasks completed within just four hours – an operation that would have spanned several weeks in a CPU environment.

By transitioning data-heavy tasks to GPUs and accelerated computing, retailers can diminish both their expenses and their ecological footprint while delivering the most appropriate choices and competitive pricing to consumers.

Government Sector Enhances Disaster Planning Capabilities

Drones and satellites capture copious aerial image data utilized by public and private sectors to forecast weather patterns, monitor animal migrations, and track environmental alterations. This data is invaluable for research and planning, enabling more informed decisions in domains like agriculture, disaster management, and the fight against climate change. Nevertheless, this imagery’s value may be constrained if it lacks specific geographical metadata.

A governmental entity collaborating with NVIDIA sought a method to automatically pinpoint the location of images lacking geospatial metadata, a capability crucial for missions such as search and rescue, natural disaster response, and environmental monitoring. But pinpointing a small area within a broader region using an aerial image devoid of metadata is immensely challenging, comparable to finding a needle in a haystack. Algorithms designed to assist with geolocation must contend with variations in image illumination and with disparities arising from images captured at different times, dates, and angles.

To geolocate aerial images without location tags, NVIDIA, Booz Allen, and the governmental agency teamed up on a solution that uses computer vision algorithms to extract information from image pixel data, framing the task as an image-similarity search problem.

An NVIDIA solutions architect first prototyped the solution as a Python application running on CPUs, where processing took more than 24 hours. GPUs cut that to mere minutes by executing thousands of data operations in parallel, versus only a handful at a time on a CPU. By porting the application code to CuPy, an open-source GPU-accelerated library, the team achieved a remarkable 1.8-million-fold speedup, delivering results in 67 microseconds.
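Here is a minimal sketch of the GPU-side step of such a similarity search in CuPy's NumPy-like API, assuming feature vectors have already been extracted from the imagery. The array sizes and the cosine-similarity metric are illustrative assumptions, not the agency's actual method.

```python
import cupy as cp

# Assume a computer-vision model has already reduced each aerial image to a
# feature vector; the reference set tiles the broader region being searched.
reference = cp.random.rand(100_000, 256, dtype=cp.float32)  # tiled-region features
query = cp.random.rand(256, dtype=cp.float32)               # untagged image's features

# Cosine similarity against every reference tile in one batched GPU
# operation, the kind of massively parallel workload behind the speedup.
ref_norm = reference / cp.linalg.norm(reference, axis=1, keepdims=True)
q_norm = query / cp.linalg.norm(query)
scores = ref_norm @ q_norm
best = int(cp.argmax(scores))
print(best, float(scores[best]))  # index and score of the closest match
```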

With a solution capable of processing vast image datasets in minutes, organizations gain access to the critical information needed to respond swiftly and effectively to emergencies and to plan proactively, potentially saving lives and safeguarding the environment.

Advance AI Endeavors and Achieve Business Objectives

Enterprises leveraging accelerated computing for data processing are propelling AI initiatives forward and positioning themselves to innovate and excel beyond their competitors.

Accelerated computing handles substantial datasets more efficiently, enables faster model training and the selection of optimal algorithms, and delivers more precise results for live AI solutions.

Businesses employing it can attain superior price-performance ratios compared to conventional CPU-based systems and enhance their ability to deliver exceptional results and experiences to consumers, team members, and associates.

Discover how accelerated computing aids organizations in achieving AI goals and driving innovation.
