In this article, you will explore how users and workloads are authenticated with the Kubernetes API server. The Kubernetes API server exposes an HTTP API that lets end-users, different parts of your cluster, and external components communicate with one another. Most operations can be performed through kubectl, but you can also access the API directly using REST calls. But how is access to the API restricted to authorized users?
Flink has supported resource management systems like YARN and Mesos since the early days; however, these were not designed for today's fast-moving cloud-native architectures, or for the growing need to support complex, mixed workloads (e.g. batch, streaming, deep learning, web services). For these reasons, more and more users are turning to Kubernetes to automate the deployment, scaling, and management of their Flink applications.
In this article, you will learn why Kubernetes uses etcd as a database by building (and breaking) a 3-node etcd cluster.
In this article, you'll look at Kafka's architecture and how it supports high availability with replicated partitions. Then, you will design a Kafka cluster to achieve high availability using standard Kubernetes resources and see how it tolerates node maintenance and total node failure.
This checklist provides actionable best practices for deploying secure, scalable, and resilient services on Kubernetes.
Kubernetes doesn't load balance long-lived connections, and some Pods might receive more requests than others. If you're using HTTP/2, gRPC, RSockets, AMQP or any other long-lived connection such as a database connection, you might want to consider client-side load balancing.
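The core of client-side load balancing is simple: the client discovers all Pod IPs itself (for example, via the DNS A records of a headless Service) and spreads requests across them instead of relying on the Service's single virtual IP. The sketch below illustrates the idea in Python; the service name and port are hypothetical, and a real client would re-resolve periodically to track Pod churn.

```python
import itertools
import socket


def resolve_endpoints(headless_service_dns, port=8080):
    """Resolve all Pod IPs behind a Kubernetes headless Service.

    A headless Service (clusterIP: None) returns one A record per
    ready Pod, so DNS resolution yields the full endpoint list.
    The service name and port here are illustrative.
    """
    infos = socket.getaddrinfo(headless_service_dns, port,
                               proto=socket.IPPROTO_TCP)
    return sorted({info[4][0] for info in infos})


class RoundRobinPicker:
    """Cycle through a fixed endpoint list, one endpoint per call.

    Each long-lived connection (gRPC channel, DB connection, etc.)
    is opened against the next endpoint, spreading load across Pods.
    """

    def __init__(self, endpoints):
        self._cycle = itertools.cycle(endpoints)

    def pick(self):
        return next(self._cycle)
```

With this approach, each new connection lands on a different Pod, which is exactly what kube-proxy cannot do for you once a long-lived connection is already established.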
In Kubernetes, resource constraints are used to schedule the Pod on the right node, and they also affect which Pod is killed or starved at times of high load. In this blog, you will explore setting resource limits for a Flask web service automatically using the Vertical Pod Autoscaler and the metrics server.
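For context, a Vertical Pod Autoscaler is configured with a manifest like the following sketch; the Deployment name is illustrative, and `updateMode: "Auto"` lets the VPA apply its recommendations rather than only report them.

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: flask-app-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: flask-app        # hypothetical Deployment name
  updatePolicy:
    updateMode: "Auto"     # apply recommendations, not just report them
```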
Here's a diagram to help you debug your deployments in Kubernetes.
In this article, you will learn how packets flow inside and outside a Kubernetes cluster, starting from the initial web request down to the container hosting the application.
Kubernetes natively offers the core tools necessary to manage application deployment. However, while applying raw YAML manifests is straightforward, development in a microservice environment quickly spirals out of control with the number of deployments necessary to support an entire system. This article compares two popular tools that aim to simplify application deployment management: Helm and Kustomize.