Rancher is an open source, user-friendly, and lightweight Kubernetes management platform capable of managing your Kubernetes clusters on a wide range of infrastructure, from bare metal servers to public clouds, VMware, or even a Raspberry Pi.
With this broad scope, elasticity, and management capability, Rancher has earned a competitive advantage as one of the most preferred enterprise Kubernetes platforms.
We have significant experience using Rancher in customer projects. Rancher provides us with great capabilities -- and each release has important new features. As a Rancher Platinum partner, we have seen Rancher develop from the early 1.x versions to the current 2.5 release, and we are happy to have been a part of this progress.
In this blog post, I will talk about what I see as the most significant capabilities in Rancher v2.5.
One of the best things about Rancher is its user-friendly interface. For example, developers can create their own disk volumes, within the limits and namespaces their Kubernetes clusters allow, without any command-line or hypervisor-level actions.
Rancher Cluster Manager, the primary UI in Rancher since version 2.0, lets us configure and manage our clusters and access the Rancher API. Rancher 2.5 now offers another UI option: the Cluster Explorer dashboard. Released as an experimental feature in Rancher 2.4, Cluster Explorer has graduated to GA status in Rancher 2.5. It offers some unique features that make it a valuable option for managing your clusters. With Cluster Explorer, you can create and update various resources via YAML, such as Calico HostIP and BGP Route objects.
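For example, a developer could request a disk volume by pasting a manifest like this into the Cluster Explorer YAML editor (a minimal sketch; the namespace and storage class names are placeholders for whatever your cluster defines):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
  namespace: team-a            # placeholder: a namespace the developer may use
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: standard   # placeholder: a StorageClass defined in your cluster
```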
Cluster Explorer also shows detailed information about your nodes, including downloaded images on worker nodes.
UI capabilities are important for infrastructure and developer teams. For example, while troubleshooting, we can easily switch between clusters and access them via kubectl, much like switching projects in an IDE.
Etcd backup is a common approach for creating a cluster backup. However, with Rancher 2.5, you can back up Kubernetes objects from their metadata without accessing etcd. This allows you to run a more useful backup process: you can perform ad-hoc or scheduled backups of the Rancher application directly from the Rancher dashboard and restore data into any Kubernetes cluster.
You can also store your backups remotely, in S3 or MinIO, instead of on a local disk; remote backups make it easy to restore your cluster from its metadata.
If you want to take advantage of this new capability, deploy the `rancher-backup` chart on your cluster from the Rancher Apps Catalog.
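If you prefer the command line, the equivalent Helm installation looks roughly like this (chart and repository names are taken from the public Rancher charts repo; verify them against your Rancher version):

```bash
# Add the Rancher charts repository that hosts rancher-backup
helm repo add rancher-charts https://charts.rancher.io
helm repo update

# Install the CRDs first, then the operator itself
helm install rancher-backup-crd rancher-charts/rancher-backup-crd \
  --namespace cattle-resources-system --create-namespace
helm install rancher-backup rancher-charts/rancher-backup \
  --namespace cattle-resources-system
```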
rancher-backup is a useful project that lets you define a backup object as a Kubernetes custom resource (CRD), as shown below.
```yaml
apiVersion: resources.cattle.io/v1
kind: Backup
metadata:
  name: cluster-backup
spec:
  storageLocation:
    s3:
      credentialSecretName: s3-secret
      credentialSecretNamespace: backup
      bucketName: rancher-backups
      folder: kubernetes
      region: eu-west-1
      endpoint: s3.eu-west-1.amazonaws.com
  resourceSetName: rancher-resource-set
```
Rancher Backup CRD objects allow you to define backup processes in your cluster following an as-code approach: your backup flow is specified, and effectively documented, in Kubernetes CRDs.
You only need to provide the name of your bucket, the endpoint of the object storage cluster, and any other options your object storage provider requires.
Besides the CRD option, Cluster Explorer lets you define the same backup manifest through a form in the UI.
With the rancher-backup project, you can easily define backup policies and schedule recurring backups, as shown in the sketch below.
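For example, a recurring backup is just the same CRD with a cron schedule and a retention count added (the field names follow the rancher-backup documentation; treat this as a sketch and verify them against your chart version):

```yaml
apiVersion: resources.cattle.io/v1
kind: Backup
metadata:
  name: nightly-backup
spec:
  resourceSetName: rancher-resource-set
  schedule: "0 2 * * *"    # run every night at 02:00
  retentionCount: 7        # keep only the last seven backups
  storageLocation:
    s3:
      credentialSecretName: s3-secret
      credentialSecretNamespace: backup
      bucketName: rancher-backups
      folder: kubernetes
      region: eu-west-1
      endpoint: s3.eu-west-1.amazonaws.com
```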
You can find various CRD examples here or take a look into Cluster Explorer.
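The restore side is symmetric: a Restore object points the operator at a backup file in the same storage location (the filename below is a placeholder; use the name of an actual backup in your bucket):

```yaml
apiVersion: resources.cattle.io/v1
kind: Restore
metadata:
  name: restore-cluster
spec:
  backupFilename: cluster-backup-xxxx.tar.gz   # placeholder: an existing backup file
  storageLocation:
    s3:
      credentialSecretName: s3-secret
      credentialSecretNamespace: backup
      bucketName: rancher-backups
      folder: kubernetes
      region: eu-west-1
      endpoint: s3.eu-west-1.amazonaws.com
```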
In the enterprise world, there are many strict security rules covering firewalls, operating systems, hardware, and more. Sometimes it is hard to implement and apply those rules in a Kubernetes-based infrastructure.
Because Kubernetes distributes our containers across the instances in a datacenter, the IP addresses of pods change constantly; the IPs of the instances themselves can change, too. So when we want to restrict egress traffic with firewall rules, tracking the instances' IP changes is difficult. If you want to route your internal traffic through fewer IP blocks, you can send it via nodes configured as an egress gateway.
New in Rancher 2.5, Istio Egress Gateway (EGW) allows you to route your traffic through specified worker nodes while accessing your outgoing resources.
As you can see in the following diagram, Istio injects a sidecar container responsible for routing and managing traffic according to the rules you define. When we set up an egress gateway rule for specified domain addresses, the sidecar routes matching traffic to the egress gateway instances.
As a result, our traffic leaves the cluster from a single IP, no matter how many worker nodes we have.
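Translated into Istio objects, the setup follows the standard Istio egress pattern: a ServiceEntry registers the external host, a Gateway binds it to the egress gateway pods, and a VirtualService steers sidecar traffic through them. The sketch below uses TLS passthrough and a placeholder hostname; adapt it to your mesh before use:

```yaml
# Register the external host with the mesh
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-api
spec:
  hosts:
    - api.example.com          # placeholder external destination
  ports:
    - number: 443
      name: tls
      protocol: TLS
  resolution: DNS
---
# Expose the egress gateway for that host
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: egress-gateway
spec:
  selector:
    istio: egressgateway       # matches the default egress gateway pods
  servers:
    - port:
        number: 443
        name: tls
        protocol: TLS
      hosts:
        - api.example.com
      tls:
        mode: PASSTHROUGH
---
# Route traffic for the host from the sidecars to the gateway, then outside
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: route-via-egress
spec:
  hosts:
    - api.example.com
  gateways:
    - mesh                     # sidecar traffic inside the mesh
    - egress-gateway
  tls:
    - match:
        - gateways:
            - mesh
          port: 443
          sniHosts:
            - api.example.com
      route:
        - destination:
            host: istio-egressgateway.istio-system.svc.cluster.local
            port:
              number: 443
    - match:
        - gateways:
            - egress-gateway
          port: 443
          sniHosts:
            - api.example.com
      route:
        - destination:
            host: api.example.com
            port:
              number: 443
```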
With Rancher v2.5.0, you can deploy egress gateways and configure them easily with Cluster Explorer.
Security is a big concern for platform and infrastructure teams who need to keep up with changing security requirements and ensure compliance in application workload components.
As part of the Rancher 2.5 release, Rancher announced a new, easy-to-install Kubernetes distribution engineered to adhere to the robust security requirements of public sector organizations and highly regulated environments.
RKE2 (or RKE Government) is Rancher’s next-generation Kubernetes distro. RKE2 adds several security enhancements to Rancher’s catalog, including Security-Enhanced Linux (SELinux) via containerd (an industry first) and the first fully FOSS FIPS-140-2 validated Kubernetes encryption module (FIPS is the Federal Information Processing Standard).
RKE2 provides defaults and configuration options that allow clusters to pass the CIS Kubernetes Benchmark with minimal operational burden. Its build pipeline also regularly scans components for CVEs using Trivy.
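Trying RKE2 out is quick; per the RKE2 quick start, a server node can be bootstrapped like this on a systemd-based Linux host:

```bash
# Install RKE2 in the (default) server role
curl -sfL https://get.rke2.io | sh -

# Enable and start the server service
systemctl enable rke2-server.service
systemctl start rke2-server.service

# The generated kubeconfig lives here
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
```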
Kubernetes clusters can run containers with a couple of runtime options, such as containerd and runc. What these runtimes have in common is that security modules such as AppArmor and SELinux are available out of the box, and containers run without performance loss.
RKE2's security compliance extends to build time: each Kubernetes component is built with a Go compiler that uses a FIPS 140-2 validated cryptography module.
With Rancher 2.5 and RKE2, you can run CIS benchmark tests against your cluster to detect any security abnormalities.
The image below shows sample output from a CIS Benchmark run with RKE2-specific rules.
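If you prefer to manage scans as code, the rancher-cis-benchmark operator exposes a ClusterScan CRD; a minimal sketch looks like this (the profile name is an assumption, so check the profiles shipped with your chart version):

```yaml
apiVersion: cis.cattle.io/v1
kind: ClusterScan
metadata:
  name: rke2-cis-scan
spec:
  scanProfileName: rke2-cis-1.5-profile-hardened   # assumed profile name; list installed profiles first
```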
In summary, RKE2 lets you enforce security compliance by default with a hardened distribution, at zero license cost.
If you want to get more detailed information about RKE2 please check out the docs.
Rancher 2.5.x manages a lot of actions like CIS scans, cluster management (EKS, GKE, etc.), and backups with operators at the Kubernetes level. This approach makes troubleshooting much easier and helps you better understand the cluster and its maintenance problems.
For example, when you set up a cluster with EKS, Rancher deploys a controller for each cluster. This saves time at the provisioning or maintenance level.
Kubernetes releases quarterly and is improving quickly. Enterprises need a way to keep pace, and Rancher is the answer. As the enterprise Kubernetes management platform, Rancher 2.5 lets you streamline and secure cloud-native application delivery from core to cloud to edge. New and improved capabilities around cluster automation, integration with identity management systems, and enterprise/government-grade security features support the move to cloud native.
In this blog, we have discussed a few of the enhancements in Rancher 2.5 that focus on the visualization of clusters and management efficiency at the automation level. As a Rancher Platinum Partner, we are happy to see these capabilities in Rancher and excited to use them in new projects.