Cloudera has announced the expansion of Cloudera AI Inference and Cloudera Data Warehouse with Trino to on-premises environments, along with new AI and analytics capabilities in Cloudera Data Visualization. Cloudera says the updates are aimed at customers who want to run AI and analytics inside their own data centers, with governed access to data across cloud, edge, and data center deployments.
Cloudera AI Inference, powered by NVIDIA technology, is now available on premises. Cloudera says customers can deploy and scale “any AI model,” including NVIDIA Nemotron open models, for workloads such as large language models, fraud detection, computer vision, and voice. According to the company, the offering is accelerated by the NVIDIA AI stack, including NVIDIA Blackwell GPUs, the NVIDIA Dynamo-Triton Inference Server, and NVIDIA NIM microservices, for high-performance, scalable model serving. Cloudera emphasizes secure, governed deployment at enterprise scale and predictable economics compared with cloud-based inference costs.
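For context, NVIDIA NIM microservices expose an OpenAI-compatible HTTP API, so a deployed model is typically queried the way any OpenAI-style endpoint is. The sketch below is illustrative only and is not taken from Cloudera's announcement: the hostname, API key, and model name are placeholder assumptions for a hypothetical on-premises endpoint.

```python
# Minimal sketch of calling an OpenAI-compatible inference endpoint of the kind
# NVIDIA NIM microservices expose. All connection details here are hypothetical.
from openai import OpenAI

client = OpenAI(
    base_url="http://nim-host.example.internal:8000/v1",  # placeholder on-prem endpoint
    api_key="not-needed-on-prem",  # many on-prem deployments ignore the key; check yours
)

response = client.chat.completions.create(
    model="nvidia/llama-3.1-nemotron-70b-instruct",  # example Nemotron model id; verify what is actually deployed
    messages=[{"role": "user", "content": "Summarize the key risk factors in last quarter's fraud alerts."}],
    max_tokens=200,
)
print(response.choices[0].message.content)
```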
Cloudera Data Warehouse with Trino is also now available in data center environments. Cloudera says it provides centralized security, governance, and observability across the “entire data estate” and is designed to accelerate access to insights. The company also points to integrated AI-powered analytics and visualization intended to turn complex data into “actionable outcomes” while maintaining security, compliance, and operational control.
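Because the warehouse is built on Trino, analysts can typically reach it with standard Trino clients. The snippet below is a hedged illustration using the open-source trino Python client; the coordinator host, catalog, schema, and table are assumptions for the sake of example, not details from the announcement.

```python
# Minimal sketch: querying a Trino coordinator with the open-source
# trino Python client (pip install trino). Connection details and the
# example table are hypothetical.
import trino

conn = trino.dbapi.connect(
    host="trino-coordinator.example.internal",  # placeholder warehouse endpoint
    port=443,
    user="analyst",
    http_scheme="https",
    catalog="hive",   # assumed catalog
    schema="sales",   # assumed schema
)

cur = conn.cursor()
cur.execute(
    "SELECT region, sum(amount) AS total_amount "
    "FROM orders GROUP BY region ORDER BY total_amount DESC LIMIT 10"
)
for region, total_amount in cur.fetchall():
    print(region, total_amount)
```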
For Cloudera Data Visualization, the company has added AI annotation, which generates summaries and contextual insights for charts and visuals; “resilient AI features” intended to handle transient issues and provide usage analytics; and AI query logging and traceability that records message ID, timestamp, and question. Cloudera also says it has simplified admin management by enabling admin role assignment through updated configuration parameters, which streamlines single sign-on (SSO)-based setup by removing hard-coded credentials and manual user promotion.
“Our collaboration with Cloudera enables customers to deploy and scale AI inference using NVIDIA Blackwell GPUs, Dynamo-Triton and NIM microservices, delivering control, predictable economics, and data-center efficiency,” said Pat Lee, vice president, strategic enterprise partnerships, NVIDIA.
Source: Cloudera