
Data Science vs. Data Engineering: What You Need to Know

According to The Economist, “the world’s most valuable resource is no longer oil, but data.” Despite the value of enterprise data, much has been written about the so-called “data science shortage”: the supposed lack of professionals with knowledge of how to use and manipulate big data. A 2018 study by LinkedIn estimated that there were more than 151,000 unfilled jobs in the U.S. requiring data science skills.

What is Low Code?

Businesses are increasingly demanding new software solutions that are quick, efficient, and user-friendly. Low code is a way to automate several steps of the application development process while still providing rapid delivery. In simplest terms, low code is a way of building processes and applications with very little coding. There are several aspects of this type of software development you need to understand to fully answer the question: what is low code?

How to Build Real-Time Feature Engineering with a Feature Store

Simplifying feature engineering for building real-time ML pipelines might just be the next holy grail of data science. It's a difficult, complex problem, but solving it is desperately needed for multiple use cases across dozens of industries. Currently, feature engineering is siloed between data scientists, who search for and create the features, and data engineers, who rewrite the code for a production environment.
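To make the siloing problem concrete, here is a minimal, self-contained sketch of the idea behind a feature store: the feature logic is written once and served from one place, rather than being prototyped by a data scientist and then rewritten by a data engineer. The names (`FeatureStore`, `avg_order_value`) are illustrative only, not any real product's API.

```python
from collections import defaultdict, deque

class FeatureStore:
    """Toy in-memory feature store keeping a rolling window of events per entity."""

    def __init__(self, window: int = 3):
        self.window = window
        self.events = defaultdict(lambda: deque(maxlen=window))

    def ingest(self, user_id: str, order_value: float) -> None:
        # A single ingestion path feeds both training and serving,
        # so the feature definition exists in exactly one place.
        self.events[user_id].append(order_value)

    def avg_order_value(self, user_id: str) -> float:
        # The feature: rolling mean of the user's last `window` orders.
        window = self.events[user_id]
        return sum(window) / len(window) if window else 0.0

store = FeatureStore(window=3)
for value in (10.0, 20.0, 30.0, 40.0):
    store.ingest("u1", value)

feature = store.avg_order_value("u1")  # mean of the last 3 orders: 20, 30, 40
```

A real feature store adds persistence, low-latency online lookups, and an offline view for training, but the design principle is the same: one definition, consumed everywhere.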

Enabling The Full ML Lifecycle For Scaling AI Use Cases

When it comes to machine learning (ML) in the enterprise, there are many misconceptions about what it actually takes to effectively employ machine learning models and scale AI use cases. When many businesses start their journey into ML and AI, it’s common to place a lot of energy and focus on the coding and data science algorithms themselves.

Spark APM - What is Spark Application Performance Management?

Apache Spark is a fast and general-purpose engine for large-scale data processing. It’s most widely used to replace MapReduce for fast processing of data stored in Hadoop. Designed specifically for data science, Spark has evolved to support more use cases, including real-time stream event processing. Spark is also widely used in AI and machine learning applications.

JWT Claims With Rate Limiting in Kong

In Kong, plugins can be thought of as policy enforcers. For rate limiting, Kong offers two plugins: an open-source one and an Enterprise one. Both can limit requests per consumer, per route, per service, or globally. The same plugin can also be configured on more than one level at once; when that happens, an order of precedence determines which configuration runs. This capability makes fine-grained policy control possible. In this article, we cover an advanced use case.
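As a sketch of multi-level configuration, the declarative config fragment below (hypothetical service and consumer names) attaches the open-source `rate-limiting` plugin at two levels. Because a consumer-scoped plugin takes precedence over a service-scoped one, `premium-user` gets the higher limit while everyone else falls back to the service default:

```yaml
_format_version: "3.0"

services:
  - name: orders-api
    url: http://upstream:8000
    routes:
      - name: orders-route
        paths: ["/orders"]
    plugins:
      - name: rate-limiting        # service level: default for all consumers
        config:
          minute: 100
          policy: local

consumers:
  - username: premium-user
    plugins:
      - name: rate-limiting        # consumer level: overrides the service default
        config:
          minute: 1000
          policy: local
```

The JWT plugin fits in by identifying the consumer from a token claim, so the consumer-scoped limit is applied to authenticated requests automatically.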