
Latest Posts

Empower Your Cyber Defenders with Real-Time Analytics
Author: Carolyn Duby, Field CTO

Today, cyber defenders face an unprecedented set of challenges as they work to secure and protect their organizations. In fact, according to the Identity Theft Resource Center (ITRC) Annual Data Breach Report, there were 2,365 cyber attacks in 2023 affecting more than 300 million victims, and data breaches have risen 72% since 2021. The constant barrage of increasingly sophisticated cyberattacks has left many professionals feeling overwhelmed and burned out.

Enable Image Analysis with Cloudera's New Accelerator for Machine Learning Projects Based on Anthropic Claude

Enterprise organizations collect massive volumes of unstructured data, such as images, handwritten text, and documents, and much of it is still captured through manual processes. To turn that data into business insight, it first has to be digitized, and the biggest challenge in digitizing the output of these manual processes is transforming the unstructured data into something that can actually deliver actionable insights.
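
As a rough illustration of what that digitization step can look like in practice, the sketch below (not the accelerator's actual code) sends a scanned document image to Claude through the Anthropic Python SDK and asks for a structured transcription. The file name, prompt, and model identifier are placeholders.

    # Minimal sketch: extract text and key fields from a scanned document with Claude.
    # Assumptions: the "anthropic" package is installed, ANTHROPIC_API_KEY is set in the
    # environment, "invoice.png" is a placeholder file, and the model name is illustrative.
    import base64
    import anthropic

    client = anthropic.Anthropic()

    with open("invoice.png", "rb") as f:
        image_b64 = base64.standard_b64encode(f.read()).decode("utf-8")

    message = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": [
                {"type": "image",
                 "source": {"type": "base64", "media_type": "image/png", "data": image_b64}},
                {"type": "text",
                 "text": "Transcribe this document and return the vendor, date, and total as JSON."},
            ],
        }],
    )
    print(message.content[0].text)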

Octopai Acquisition Enhances Metadata Management to Trust Data Across Entire Data Estate

We are excited to announce the acquisition of Octopai, a leading data lineage and catalog platform that provides data discovery and governance for enterprises to enhance their data-driven decision making. Cloudera’s mission since its inception has been to empower organizations to transform all their data to deliver trusted, valuable, and predictive insights.

Introducing Cloudera Fine Tuning Studio for Training, Evaluating, and Deploying LLMs with Cloudera AI

Large Language Models (LLMs) will be at the core of many groundbreaking AI solutions for enterprise organizations. Here is just one example of the benefits of using LLMs in the enterprise, for both internal and external use cases: cost optimization. LLMs deployed as customer-facing chatbots can respond to frequently asked questions and simple queries.

Unlocking Faster Insights: How Cloudera and Cohere Can Deliver Smarter Document Analysis

Today we are excited to announce the release of a new Cloudera Accelerator for Machine Learning (ML) Projects (AMP) for PDF document analysis, “Document Analysis with Command R and FAISS”, leveraging Cohere’s Command R Large Language Model (LLM), the Cohere Toolkit for retrieval augmented generation (RAG) applications, and Facebook’s AI Similarity Search (FAISS).
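
To make the moving parts concrete, here is a minimal retrieval-augmented generation sketch in the same spirit, though it is not the AMP's code: text chunks are embedded with Cohere, indexed in FAISS, and the retrieved chunks are passed to Command R as grounding documents. The API key, model names, and chunk text are placeholders.

    # Minimal RAG sketch (illustrative, not the AMP's code): embed chunks with Cohere,
    # index them in FAISS, retrieve the closest chunk, and ground Command R on it.
    import numpy as np
    import faiss
    import cohere

    co = cohere.Client("YOUR_API_KEY")  # placeholder API key

    # Pretend these chunks were extracted from a PDF and split during ingestion.
    chunks = [
        "FAISS performs fast nearest-neighbor search over dense vectors.",
        "Command R is tuned for grounded, citation-friendly generation.",
    ]

    # Embed the chunks and build a flat L2 index over the vectors.
    doc_emb = co.embed(texts=chunks, model="embed-english-v3.0",
                       input_type="search_document").embeddings
    vectors = np.array(doc_emb, dtype="float32")
    index = faiss.IndexFlatL2(vectors.shape[1])
    index.add(vectors)

    # Embed the question, retrieve the top match, and pass it as a grounding document.
    question = "How does FAISS help with document search?"
    q_emb = co.embed(texts=[question], model="embed-english-v3.0",
                     input_type="search_query").embeddings
    _, ids = index.search(np.array(q_emb, dtype="float32"), 1)
    docs = [{"title": f"chunk-{i}", "snippet": chunks[i]} for i in ids[0]]

    answer = co.chat(model="command-r", message=question, documents=docs)
    print(answer.text)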

Cloudera and Snowflake Partner to Deliver the Most Comprehensive Open Data Lakehouse

In August, we wrote about how, in a future where distributed data architectures are inevitable, unifying and managing operational and business metadata is critical to maximizing the value of data, analytics, and AI. One of the most important innovations in data management is open table formats, specifically Apache Iceberg, which fundamentally transforms the way data teams manage operational metadata in the data lake.
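
As a rough illustration of what Iceberg looks like from an engine's point of view, here is a minimal PySpark sketch, assuming the Iceberg Spark runtime is on the classpath; the catalog name, warehouse path, and table schema are placeholders. The point is that snapshots, schema, and partition metadata travel with the table, so any engine that speaks Iceberg sees the same state.

    # Minimal Iceberg sketch (assumes the iceberg-spark-runtime jar is available;
    # catalog name, warehouse path, and schema are illustrative).
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("iceberg-demo")
        .config("spark.sql.extensions",
                "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
        .config("spark.sql.catalog.demo", "org.apache.iceberg.spark.SparkCatalog")
        .config("spark.sql.catalog.demo.type", "hadoop")
        .config("spark.sql.catalog.demo.warehouse", "/tmp/iceberg-warehouse")
        .getOrCreate()
    )

    spark.sql("CREATE NAMESPACE IF NOT EXISTS demo.db")
    spark.sql("""CREATE TABLE IF NOT EXISTS demo.db.events
                 (id BIGINT, ts TIMESTAMP, payload STRING) USING iceberg""")
    spark.sql("INSERT INTO demo.db.events VALUES (1, current_timestamp(), 'hello')")

    # Iceberg records every commit as a snapshot in the table's own metadata,
    # which is what makes time travel and multi-engine access possible.
    spark.sql("SELECT snapshot_id, committed_at FROM demo.db.events.snapshots").show()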

The Evolution of LLMOps: Adapting MLOps for GenAI

In recent years, machine learning operations (MLOps) has become the standard practice for developing, deploying, and managing machine learning models. MLOps standardizes processes and workflows for faster, scalable, and lower-risk model deployment: it centralizes model management, automates CI/CD, provides continuous monitoring, and enforces governance and release best practices.

Cloudera Lakehouse Optimizer Makes it Easier Than Ever to Deliver High-Performance Iceberg Tables

The open data lakehouse is quickly becoming the standard architecture for unified multifunction analytics on large volumes of data. It combines the flexibility and scalability of data lake storage with the data analytics, data governance, and data management functionality of the data warehouse.

Deploy and Scale AI Applications With Cloudera AI Inference Service

We are thrilled to announce the general availability of the Cloudera AI Inference service, powered by NVIDIA NIM microservices, part of the NVIDIA AI Enterprise platform, to accelerate generative AI deployments for enterprises. This service supports a range of optimized AI models, enabling seamless and scalable AI inference.
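
As a rough sketch of what calling such an endpoint can look like: NIM microservices typically expose an OpenAI-compatible API, so a deployed model can often be queried with the standard OpenAI Python client. The base URL, model name, and token below are placeholders rather than documented values of the service.

    # Hedged sketch: call an OpenAI-compatible inference endpoint with the OpenAI client.
    # The base_url, api_key, and model name are placeholders for illustration only.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://your-cluster.example.com/endpoints/llama/v1",  # placeholder URL
        api_key="YOUR_ACCESS_TOKEN",                                     # placeholder token
    )

    resp = client.chat.completions.create(
        model="meta/llama-3.1-8b-instruct",  # illustrative model name
        messages=[{"role": "user", "content": "Summarize this quarter's support-ticket themes."}],
        max_tokens=128,
    )
    print(resp.choices[0].message.content)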

Streamlining Generative AI Deployment with New Accelerators

The journey from a great idea for a Generative AI use case to deploying it in a production environment often resembles navigating a maze. Every turn presents new challenges—whether it’s technical hurdles, security concerns, or shifting priorities—that can stall progress or even force you to start over.