
Latest Videos

Managing Costs for Spark on Amazon EMR

Are you looking to optimize costs and resource usage for your Spark jobs on Amazon EMR? Then this is the webinar for you. Overallocating resources, such as memory, is a common fault when setting up Spark jobs. And for Spark jobs running on EMR, adding resources is a click away - but it’s an expensive click, so cost management is critical. Unravel Data is our AI-enabled observability platform for Spark jobs on Amazon EMR and other Big Data technologies. Unravel helps you right-size memory allocations, choose the right number of workers, and map your cluster needs to available instance types.
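
As a rough illustration of what right-sizing can look like in practice (a minimal sketch, not Unravel's tuning logic), the PySpark snippet below sets explicit executor memory, core, and instance counts instead of relying on cluster defaults. All values shown are placeholders you would tune per workload.

```python
from pyspark.sql import SparkSession

# Minimal sketch: start from explicit, modest allocations rather than cluster defaults.
# The numbers below are illustrative placeholders, not recommendations.
spark = (
    SparkSession.builder
    .appName("emr-cost-tuning-example")
    .config("spark.executor.memory", "4g")      # per-executor heap; oversizing this wastes cluster memory
    .config("spark.executor.cores", "2")        # cores per executor
    .config("spark.executor.instances", "10")   # number of workers; map this to your EMR instance types
    .getOrCreate()
)
```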

Managing Costs for Spark on Databricks Webinar

Are you looking to optimize costs and resource usage for your Spark jobs on Databricks? Then this is the webinar for you. Overallocating resources, such as memory, is a common fault when setting up Spark jobs. And for Spark jobs running on Databricks, adding resources is a click away - but it’s an expensive click, so cost management is critical.

Managing Cost & Resources Usage for Spark

Spark jobs require resources, and those resources can be pricey. If you're looking to speed up completion times, optimize costs, and reduce resource usage for your Spark jobs, this is the webinar for you. For Spark jobs running on-premises, optimizing resource usage is key. For Spark jobs running in the cloud, for example on Amazon EMR or Databricks, adding resources is a click away - but it’s an expensive click, so cost management is critical.

Troubleshooting Databricks

The popularity of Databricks is rocketing skyward, and it is now the leading multi-cloud platform for Spark and analytics workloads, offering fully managed Spark clusters in the cloud. Databricks is fast, and organizations generally refactor their applications when moving them over; the result is strong performance. However, as usage of Databricks grows, so does the importance of reliability for Databricks jobs - especially big data jobs such as Spark workloads. But the information you need for troubleshooting is scattered across multiple, voluminous log files.
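
To give a small sense of the manual work involved (this is a generic sketch, not an Unravel or Databricks feature), the snippet below tallies task-end reasons from a single Spark event log, assuming the standard JSON-lines event-log format and a hypothetical local file path. Repeating this kind of digging across many applications and log files is exactly what doesn't scale by hand.

```python
import json
from collections import Counter

# Hypothetical local copy of one application's Spark event log (JSON lines).
event_log_path = "eventlog.json"

reasons = Counter()
with open(event_log_path) as f:
    for line in f:
        event = json.loads(line)
        # Count how each task ended: Success, ExceptionFailure, etc.
        if event.get("Event") == "SparkListenerTaskEnd":
            reason = event.get("Task End Reason", {}).get("Reason", "Unknown")
            reasons[reason] += 1

for reason, count in reasons.most_common():
    print(f"{reason}: {count}")
```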

Effective Cost and Performance Management for Amazon EMR Webinar Recording

Amazon EMR is a go-to platform for those who want all the power of Hadoop and Spark in the cloud. However, cost and performance trade-offs can reduce the advantages of EMR over alternatives. Lack of visibility into the root cause of problems, right-sizing options, and cost allocation can add confusion and frustration for EMR users. Unravel Data gives you visibility into the minute-to-minute operations of your workloads on EMR. Get root cause analysis (RCA) of workload breakdowns and slowdowns; AI-powered recommendations; and proactive fixes for many problems. With Unravel Data, you can meet and beat your SLAs, saving thousands - even millions - of dollars per year in the process.
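
For readers who want a feel for the raw sizing data behind a cost audit (a generic boto3 sketch, not Unravel's method), the snippet below lists running EMR clusters and their instance groups, assuming default AWS credentials are configured.

```python
import boto3

# Quick sizing audit across active EMR clusters (assumes AWS credentials are set up).
emr = boto3.client("emr")

clusters = emr.list_clusters(ClusterStates=["RUNNING", "WAITING"])["Clusters"]
for cluster in clusters:
    groups = emr.list_instance_groups(ClusterId=cluster["Id"])["InstanceGroups"]
    for group in groups:
        # Instance type and running count per group (MASTER / CORE / TASK).
        print(
            cluster["Name"],
            group["InstanceGroupType"],
            group["InstanceType"],
            group["RunningInstanceCount"],
        )
```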

Operationalize Your Insights - The Self-Service Data Roadmap, Session 4 of 4

In this webinar, Unravel CDO and VP Engineering Sandeep Uttamchandani describes the fourth and final step for any large, data-driven project: the Operationalize phase. You've found your data (Discover phase), readied it for processing (Prep phase), and built out your processing logic and machine learning model(s) (Build phase). Now you need to operationalize all of that work as a live project running in production.