
Latest Posts

Healthcare leader uses AI insights to boost data pipeline efficiency

One of the largest health insurance providers in the United States uses Unravel to ensure that its business-critical data applications are optimized for performance, reliability, and cost in its development environment—before they go live in production. Data and data-driven statistical analysis have always been at the core of health insurance.

AI-Driven Observability for Snowflake

Performance. Reliability. Cost-effectiveness. Unravel is a data observability platform that provides cost intelligence, warehouse optimization, query optimization, and automated alerting and actions for high-volume users of the Snowflake Data Cloud. Unravel leverages AI and automation to deliver real-time, user-level and query-level cost reporting, code-level optimization recommendations, and automated spend controls to empower and unify DataOps and FinOps teams.

Logistics giant optimizes cloud data costs up front at speed & scale

One of the world’s largest logistics companies leverages automation and AI to empower every individual data engineer with self-service capability to optimize their jobs for performance and cost. The company was able to cut its cloud data costs by 70% in six months—and keep them down with automated 360° cost visibility, prescriptive guidance, and guardrails for its 3,000 data engineers across the globe.

The Modern Data Ecosystem: Choose the Right Instance

There are several ways to optimize cloud storage, depending on your specific needs and circumstances. Overall, optimizing cloud storage requires careful planning, monitoring, and management; done well, you can reduce your storage costs, improve your data management, and get the most out of your cloud storage investment.
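One common lever behind that planning is matching each dataset's storage tier to how often it is actually accessed. A minimal sketch of the idea, using hypothetical tier names and per-GB monthly prices (illustrative numbers only, not any cloud provider's actual rates):

```python
# Hypothetical per-GB monthly prices for three storage tiers.
# Illustrative numbers only, not any cloud provider's actual rates.
TIER_PRICES = {"hot": 0.023, "cool": 0.010, "archive": 0.002}

def monthly_cost(datasets):
    """Total monthly storage cost; each dataset is a dict with
    'size_gb', 'tier', and 'days_since_access' keys."""
    return sum(d["size_gb"] * TIER_PRICES[d["tier"]] for d in datasets)

def retier(datasets, idle_days=90):
    """Move any dataset untouched for more than `idle_days`
    to the cheapest (archive) tier."""
    return [
        {**d, "tier": "archive"} if d["days_since_access"] > idle_days else d
        for d in datasets
    ]

datasets = [
    {"size_gb": 1000, "tier": "hot", "days_since_access": 2},
    {"size_gb": 5000, "tier": "hot", "days_since_access": 200},
]
```

At these made-up prices, re-tiering the one idle dataset cuts the monthly bill from $138 to $33 while leaving actively used data on the hot tier. Real cloud providers expose this as lifecycle policies rather than application code, but the cost arithmetic is the same.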

DBS Discusses Data+FinOps for Banking

Luis Carlos Cruz Huertas, DBS Bank Head of Automation, Infrastructure for DBS Big Data, AI and Analytics, has a 1-on-1 discussion with Unravel CEO and Co-founder Kunal Agarwal about the convergence of DataOps and FinOps. The discussion, "Leading Cultural Change for Data Efficiency, Agility, and Cost Optimization," was held at a recent Untap event in New York and revolves around best practices, lessons learned, and insights.

DataOps Resiliency: Tracking Down Toxic Workloads

In the first three articles in this four-post series, my colleague Jason English and I explored DataOps observability, the connection between DevOps and DataOps, and data-centric FinOps best practices. In this concluding article in the series, I'll explore DataOps resiliency: not simply how to prevent data-related problems, but also how to recover from them quickly, ideally without impacting the business and its customers.

Solving key challenges in the ML lifecycle with Unravel and Databricks Model Serving

Machine learning (ML) enables organizations to extract more value from their data than ever before. Companies that successfully deploy ML models into production can act on that data value at a much faster pace. But deploying ML models requires a number of key steps, each fraught with challenges.

DataFinOps: Holding individuals accountable for their own cloud data costs

Most organizations spend at least 37% (sometimes over 50%) more than they need to on their cloud data workloads. Much of that cost is incurred at the individual job level, which is usually where the biggest chunk of overspending sits. Two of the biggest culprits are oversized resources and inefficient code. But for an organization running tens or hundreds of thousands of jobs, finding and fixing bad code or right-sizing resources by hand is like shoveling sand against the tide.
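To make the oversized-resources culprit concrete, here is a minimal sketch of the kind of check that can be automated across a large job fleet. The heuristic, field names, and threshold are all hypothetical illustrations, not Unravel's actual detection logic:

```python
def oversized_jobs(jobs, utilization_threshold=0.5):
    """Flag jobs whose peak memory usage stayed below
    `utilization_threshold` of what they were allocated.
    Hypothetical heuristic for illustration only."""
    return [
        j["name"]
        for j in jobs
        if j["peak_mem_gb"] < utilization_threshold * j["alloc_mem_gb"]
    ]

jobs = [
    {"name": "daily_etl",   "alloc_mem_gb": 64,  "peak_mem_gb": 12},
    {"name": "ml_training", "alloc_mem_gb": 128, "peak_mem_gb": 110},
]
```

Here `daily_etl` is flagged (it peaked at 12 GB against a 64 GB allocation) while `ml_training` is not. Running a check like this per job, automatically, is what turns right-sizing from a manual hunt into self-service guardrails.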