
Latest Posts

Want to Build a Responsive and Intelligent Data Pipeline? Focus on Lifecycle

Today, enterprises need to collect and analyze more and more data to drive greater business insight and improve customer experiences. To process this data, technology stacks have evolved to include cloud data warehouses and data lakes, big data processing, serverless computing, containers, machine learning, and more.

5 best practices to innovate at speed in the Cloud: Tip #5 Accelerate data delivery to third-party applications and teams through APIs

Billions of times each day, application programming interfaces (APIs) facilitate the transfer of data between people and systems, serving as the fabric that connects businesses with customers, suppliers, and employees. Having the right API strategy in place can make the difference between success and failure when it comes to using APIs to deliver results, reduce response times, and improve process efficiency.

5 best practices to innovate at speed in the Cloud: Tip #4 Perform faster root cause analysis thanks to data lineage

Like any supply chain that aspires to be lean and frictionless, data chains need transparency and traceability. There is a need for automated data lineage to understand where data comes from, where it goes, how it is processed, and who consumes it. There is also a need for whistleblowers on data quality and data protection, and for impact analysis whenever change happens.

Birds migrate. But why do data warehouses?

Well, let’s be specific here. Birds migrate either north or south. Data warehouses are only going in one direction: up, to the cloud. It’s a common trend we’re seeing across every vertical and every region. Companies are moving their existing data warehouses to cloud environments like Amazon Redshift. And more often than not – unlike their feathered counterparts – once they migrate to the cloud, they never come back. But why? Simply put, it just makes sense.

5 best practices to innovate at speed in the Cloud: Tip #3 Enable access to and use of self-service applications

Data professionals face an efficiency gap: they spend too much time getting access to the data they need and then putting it into the appropriate business context. The ability to deliver trusted data to business experts at the point of need is critical if you want to liberate data value within your company.

Quick wins for modern analytics projects with Amazon Redshift and Stitch Data Loader

It’s no secret that the cloud data warehouse space is exploding. Driven by the need for on-demand, performant data warehousing solutions, businesses are turning to public cloud providers to modernize their analytics infrastructure and help them make better business decisions. Among the leading data warehouse options from the public cloud providers is Amazon Redshift. Redshift offers a petabyte-scale, fully managed data warehouse service in the cloud.

Speed and Trust with Azure Synapse Analytics

As a Microsoft partner, we’re excited by the announcement of the Azure Synapse Analytics platform. Why? Because it furthers the ability of businesses to leverage data-driven insights and decision making at all levels in an organization. (And we love that!) Together, our joint customers are already leveraging data in amazing ways to tackle everything from creating customer 360 views to reducing project times for data analytics from 6 months to 6 weeks.

How to use your data skills to keep a step ahead

While the impetus for transforming to a data-driven culture needs to come from the top of the organisation, all levels of the business should participate in learning new data skills. Assuring data availability and integrity must be a team sport in modern data-centric businesses, rather than being the responsibility of one individual or department. Everyone must buy in and be held accountable throughout the process.

Experience the magic of shuffling columns in Talend Dynamic Schema

If you are a magician specializing in Talend magic, you have probably heard the key phrase dynamic ingestion: moving data from various sources to target systems without creating an individual Talend job for each data flow. In this blog, we will do a quick recap of the concept of Dynamic schema and show how to reorder or shuffle columns when employing Dynamic schema in ingestion operations.
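The column-shuffling idea behind Dynamic schema can be illustrated outside of Talend itself. The minimal Python sketch below (all names are illustrative, not Talend APIs) shows the core move: rows arrive with columns in whatever order the source provides, and each row is reordered to match the column order the target expects, without hard-coding a mapping per data flow.

```python
# Illustrative sketch of dynamic column shuffling: reorder each incoming
# row to match the target system's column order, discovered at runtime
# rather than fixed per job. Names here are hypothetical, not Talend APIs.

def reorder_row(row: dict, target_columns: list) -> dict:
    """Return the row with its columns shuffled into the target order.

    Columns missing from the source row come through as None, mimicking
    a schema that is resolved dynamically rather than declared up front.
    """
    return {col: row.get(col) for col in target_columns}

# A source row in source-system column order...
source_row = {"id": 1, "name": "Ada", "email": "ada@example.com"}

# ...and the column order the target expects, known only at runtime.
target_columns = ["email", "id", "name"]

print(reorder_row(source_row, target_columns))
```

Because Python dicts preserve insertion order, the rebuilt row carries its columns in the target's order, which is all the "shuffle" amounts to: the data is untouched, only the column positions change.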