Kubernetes comes with certain built-for-the-cloud security features such as role-based access control and admission controllers that intercept requests to the API server. However, enabling more fine-grained access control, encryption, and auditability on top of Kubernetes's native security attributes is important to keep deployments tight and well in sync with FedRAMP mandates. Here are some key "hardening" measures for Kubernetes to help minimize risks and secure clusters even further.
Access Control
Role-based access control (RBAC) helps regulate which users, across various teams or projects, have access to Kubernetes resources (pods, services, replication controllers) and at what level. RBAC permissions are purely additive; there is no rule that denies access to users. Permissions are granted to users based on their need to perform actions on resources in a cluster, as defined by their roles in the organization. Kubernetes uses namespaces to logically partition a cluster, limiting user access to a namespace, the applications that run in it, and the data located in it. This also helps strengthen data security and compliance by putting restrictions on cluster-wide access by users.
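As a minimal sketch, a namespaced Role plus RoleBinding pair might look like the following; the `team-a` namespace, role names, and user are hypothetical placeholders:

```yaml
# Hypothetical example: grant one user read-only access to pods and
# services within a single namespace. All names are illustrative.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-a
  name: pod-reader
rules:
- apiGroups: [""]               # "" refers to the core API group
  resources: ["pods", "services"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: team-a
  name: read-pods
subjects:
- kind: User
  name: jane@example.com        # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Because the rules only grant verbs, anything not listed (e.g., `delete`) remains denied by default, which is exactly the additive model described above.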
Access to pods running a Secure Shell (SSH) server can be limited to users whose identity has been established via their public and private keys. This restricts access to containers from outside the cluster to acceptable traffic arriving over the SSH protocol only.
Identification and Authentication
Authorization, as discussed above, is only half the game. The other half is authentication. Without proper authentication, an attacker can pretend to be "A," someone you trust, when in reality they are "B," and access sensitive data or inject malware into systems. Kubernetes supports bootstrap tokens to establish two-way trust between the API server and worker nodes. It also supports authenticating proxy servers as well as client certificates for authenticating kubelets (the primary node agents) to the API server.
Kubernetes is capable of supporting users from a local database of accounts. In addition, it can leverage identity providers (IdPs) to protect data more securely against threats by limiting unauthenticated "calls" to APIs. Kubernetes works with several IdPs (e.g., Okta, Google OpenID Connect, and Azure Active Directory), which facilitate single sign-on as well as authenticated access to web/API services. Even so, the Kubernetes platform doesn't natively integrate with such IdPs.
There are several in-market identity services (such as Dex) that can serve as a bridge between Kubernetes and various IdPs, including those that use the SAML protocol. In this scheme of things, the service provider sends users to the IdP, which stores and verifies their login credentials before redirecting them back to the service provider. Besides limiting unauthenticated access to APIs that deal with sensitive data, this identity layer saves service providers the time and effort involved in managing passwords internally.
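To give a concrete sense of the wiring, the Kubernetes API server can be pointed at such an identity layer via its OIDC flags; the issuer URL, client ID, and CA file path below are placeholders for a hypothetical Dex deployment, not values from a real cluster:

```sh
# Illustrative kube-apiserver flags for trusting an external OIDC issuer
# (e.g., a Dex instance fronting a SAML IdP). All values are placeholders.
kube-apiserver \
  --oidc-issuer-url=https://dex.example.com:32000 \
  --oidc-client-id=kubernetes \
  --oidc-ca-file=/etc/kubernetes/pki/dex-ca.pem \
  --oidc-username-claim=email \
  --oidc-groups-claim=groups
```

With these flags set, tokens minted by the identity layer can be presented to the API server, and RBAC rules can then key off the username and group claims.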
Login Attempts
Another tactic to hinder potential threats is to configure the web administration interface so that a user is locked out of an account/node for, say, "x" minutes after she/he exceeds a preset number of consecutive failed login attempts. The lockout is lifted after the time period has elapsed, and the added delay helps slow down brute-force attacks. This applies where users are maintained in a local database. When it comes to single sign-on accounts, it's left to the IdPs, after careful consideration, to settle on the number of failed login attempts and the lockout time period.
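The lockout policy for locally managed accounts can be sketched in a few lines of Python. This is illustrative logic only, not a Kubernetes API; the class name and defaults are invented for the example:

```python
import time


class LoginLimiter:
    """Illustrative sketch: lock an account after `max_attempts`
    consecutive failed logins, for `lockout_seconds`."""

    def __init__(self, max_attempts=3, lockout_seconds=900):
        self.max_attempts = max_attempts
        self.lockout_seconds = lockout_seconds
        self.failures = {}      # user -> consecutive failure count
        self.locked_until = {}  # user -> unlock timestamp

    def is_locked(self, user, now=None):
        now = time.time() if now is None else now
        return self.locked_until.get(user, 0) > now

    def record_attempt(self, user, success, now=None):
        """Return True only for a successful, non-locked-out login."""
        now = time.time() if now is None else now
        if self.is_locked(user, now):
            return False  # reject everything while locked out
        if success:
            self.failures[user] = 0
            return True
        self.failures[user] = self.failures.get(user, 0) + 1
        if self.failures[user] >= self.max_attempts:
            # Lock the account and reset the counter for after the lockout
            self.locked_until[user] = now + self.lockout_seconds
            self.failures[user] = 0
        return False
```

Note that only *consecutive* failures count: one successful login resets the counter, mirroring the behavior described above.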
Session Control
Where a user is logged in to multiple nodes within a cluster, there is an elevated risk of login credentials being misused and account passwords being illegitimately reset. Limiting the total number of concurrent connections (via SSH) a single client (user) can hold to nodes in a cluster will help stave off this threat. Of course, multiple terminal sessions multiplexed onto one connection (e.g., via tmux) count as a single session. User attempts to exceed the maximum number of session channels are promptly logged to a database of access events and used for audit purposes.
The load-balancing solution can be configured to set the "session sticky time" between a user and server to a maximum of three hours. On top of that, the information system can be prepped to terminate a user session when a predefined condition is satisfied or in the event of an unauthorized attempt to access restricted data. Plus, the idle time, in minutes, can be specified so the server terminates a session as soon as the specified period of inactivity expires, releasing applications from inactive connections. Users should also be able to exit a remote session at will.
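On nodes that accept SSH, both limits can be expressed directly in the OpenSSH server configuration. The specific values below are illustrative, not recommendations:

```sh
# /etc/ssh/sshd_config (illustrative values)
MaxSessions 2            # multiplexed sessions allowed per SSH connection
ClientAliveInterval 300  # probe an idle client every 5 minutes
ClientAliveCountMax 3    # drop the connection after 3 unanswered probes
                         # (i.e., roughly 15 minutes of inactivity)
```

`MaxSessions` caps session channels per network connection (which is why tmux windows still count as one), while the `ClientAlive*` pair implements the idle-timeout behavior described above.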
Certificates
Kubernetes uses certificates issued by trusted certificate authorities (CAs) to establish trust, including CAs that sign node certificates and user certificates to authenticate nodes and users respectively. By default, each certificate carries a "time to live" after which it expires. When its certificate nears expiration, the kubelet will automatically request a new one.
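This rotation behavior is driven by the kubelet's configuration. A minimal illustrative fragment, assuming the standard KubeletConfiguration API:

```yaml
# Illustrative KubeletConfiguration fragment enabling certificate rotation
# so the kubelet requests a fresh certificate before the old one expires.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
rotateCertificates: true   # rotate the kubelet's client certificate
serverTLSBootstrap: true   # also request serving certs from the cluster CA
```

With rotation enabled, a short certificate lifetime becomes an asset rather than an operational burden, since compromised credentials age out quickly.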
Remote Access
Remote access to an API server from anywhere on the Internet could potentially leave data vulnerable to theft, misuse, unauthorized changes, and corruption. To address this risk, it is important that all remote connections are routed, without fail, through an authentication gateway and allowed access only on a need basis. Such proxies serve as a single point for managing remote access to the server and use the SSH or HTTPS (TLS) protocol to authenticate and encrypt data moving between users and the server. The SSH or TLS sessions can be further protected with X.509 certificates, which use a public key infrastructure framework to uniquely identify and authenticate users to a server or remote device, securing the communication between them. Importantly, the SSH and TLS certificates are deleted from the hard disk at logout. Likewise, cookies are deleted from the browser to avoid the risk of threat actors hijacking them to access browsing sessions.
Trusted Clusters
An open-source proxy server that understands the SSH and TLS protocols (e.g., Teleport) can be utilized to establish trusted connections for users across Kubernetes clusters, giving them access to services located behind firewalls. This obviates the need for open static ports in the firewall for TCP access. Further, it enables authenticated users to process, store, and transfer information from outside a cluster. The nature and extent of such access is based on the actions the user is required to perform on Kubernetes resources per her/his role in the organization.
Audit and Accountability
Kubernetes also maintains a chronological record of activities by users, by applications that use the API server, and by the server itself, kept in non-volatile storage at the backend in human-readable format. These audit logs serve as sequential records of the various changes that occur within a cluster and capture details around events such as:
- What happened and when?
- Who (individual or group) triggered it?
- From where was it triggered?
- What was the outcome?
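What gets recorded, and at what level of detail, is controlled by an audit policy handed to the API server (via its `--audit-policy-file` flag). A minimal illustrative policy, with rule choices invented for the example:

```yaml
# Illustrative audit policy: rules are evaluated top to bottom and the
# first match wins. The specific resource choices are examples only.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
# Log access to secrets and configmaps at Metadata level, so the audit
# trail records who touched them without copying payloads into the log.
- level: Metadata
  resources:
  - group: ""
    resources: ["secrets", "configmaps"]
# Log full request and response bodies for changes to RBAC objects.
- level: RequestResponse
  resources:
  - group: "rbac.authorization.k8s.io"
# Catch-all: record everything else at the basic metadata level.
- level: Metadata
```

Keeping secrets at `Metadata` level is a deliberate choice here: it answers the who/when/where questions above without leaking sensitive values into the audit store itself.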
By making use of an open-source proxy server, it should be possible to capture additional audit records such as unsuccessful login attempts, file transfers, file system changes, network activity during sessions, and commands executed on the SSH server. Customers should also consider running their Kubernetes clusters in FedRAMP-compliant clouds to ensure critical government data is well protected against security breaches. Typically, the FedRAMP compliance of the cloud does not involve any increase in cloud service costs for customers.
That was our quick take on how to add some more guardrails to the Kubernetes environment. Stay in the loop; we'll be back soon with more security updates.