Introduction
Welcome to the Azure AKS Kubernetes deployment security workshop.
We won’t spend much time introducing AKS, a service that has grown very popular in recent months.
In brief, AKS is Microsoft’s new managed container orchestration service. It is gradually replacing Azure Container Service and focuses exclusively on the Cloud Native Computing Foundation (CNCF) Kubernetes orchestration engine.
In the previous workshop, Create a Kubernetes cluster with Azure AKS using Terraform, we covered the basics of Azure Kubernetes Service (AKS), the Infrastructure as Code (IaC) approach with a focus on HashiCorp Terraform, and how to deploy a Kubernetes cluster on AKS using Terraform.
In this lab, you’ll work through tasks that will help you master the basic and more advanced topics required to secure an Azure AKS Kubernetes cluster at the deployment level, based on the following mechanisms and technologies:
- ✅Azure AD (AAD)
- ✅AKS with Role-Based Access Control (RBAC)
- ✅Container Network Interface (CNI)
- ✅Azure Network policy
- ✅Azure Key Vault
This article is part of a series:
- Secure AKS at the deployment: part 1
- Secure AKS at the deployment: part 2
- Secure AKS at the deployment: part 3
Assumptions and Prerequisites
- You have basic knowledge of Azure
- You have basic knowledge of Kubernetes
- You have Terraform installed on your local machine
- You have basic experience with Terraform
- You have an Azure subscription: sign up for an Azure account if you don’t already have one. You will receive USD 200 in free credits.
Implement Azure AD to secure AKS at the deployment
In order to secure AKS at the deployment level, Azure Kubernetes Service (AKS) can be configured to use Azure Active Directory (AAD) for user authentication. In this configuration, you can sign in to an AKS cluster by using your Azure AD authentication token.
1- Azure networks solutions and AKS deployment
The default AKS deployment proposed by Azure hides a lot of what happens in the background.
Indeed, we do not know where the cluster is deployed or how the network is configured. By default, the network plugin used is kubenet, which is fine for testing but does not let us exercise all the possibilities of AKS.
In this part, let’s make a few assumptions:
- The underlying Azure network is already in place
- We will use Azure CNI for our AKS cluster
Provisioning an Azure Virtual Network (VNet) is an essential step before deploying the AKS cluster. The main reason is closely related to the choice of the CNI. With Azure CNI, Kubernetes nodes, and pods as well, draw their private IP addresses from the VNet, and more specifically from the target subnet chosen for the deployment.
Theoretically, to deploy an AKS cluster capable of hosting an appropriate number of workloads (i.e. pods), the network design should be meticulously carried out in order to provide sufficient IP address space for the AKS cluster.
Azure documentation gives us the following formula to calculate the minimum size of the target subnet for an AKS cluster, according to the number of workloads:
(number of nodes + 1) + ((number of nodes + 1) * maximum pods per node that you configure)
Example for a 50 node cluster: (51) + (51 * 30 (default)) = 1,581
(/21 or larger)
Example for a 50 node cluster that also includes provision to scale up an additional 10 nodes: (61) + (61 * 30 (default)) = 1,891
(/21 or larger)
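As a quick sanity check, the sizing formula can be evaluated with plain shell arithmetic. The node counts below are taken from the second example above (50 nodes plus provision for 10 more); the extra +1 node is the surge node Azure adds during upgrades:

```shell
#!/bin/sh
# Minimum IP address count for an AKS subnet with Azure CNI:
# (nodes + 1) + ((nodes + 1) * max_pods_per_node)
nodes=50          # current node count
scale_buffer=10   # extra nodes reserved for future scale-up
max_pods=30       # Azure CNI default

total_nodes=$((nodes + scale_buffer + 1))               # +1 for the upgrade surge node
required_ips=$((total_nodes + total_nodes * max_pods))  # 61 + 61*30

echo "Required IP addresses: $required_ips (a /21 provides 2048)"
```

Running this prints 1,891 required addresses, confirming that a /21 subnet (2,048 addresses) is the smallest conventional size that fits.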
ℹ️ Note
The maximum number of pods per node is set to 30 by default with Azure CNI. Fortunately, this is a soft limit that can be raised to 110 pods per node, either at deployment time or after deployment, with the Azure CLI for example.
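As an illustration, and with hypothetical resource group and node pool names, the limit is set with the --max-pods flag. Since the value cannot be changed on an existing node pool, the post-deployment route is to add a new node pool carrying the higher limit:

```shell
aksname="run-it-on-cloud"   # cluster name used throughout this lab

# At deployment time (resource group name is a placeholder):
az aks create \
  --resource-group myResourceGroup \
  --name "$aksname" \
  --network-plugin azure \
  --max-pods 110

# After deployment: add a new node pool with the higher limit.
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name "$aksname" \
  --name bigpods \
  --max-pods 110
```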
With Azure CNI, we can also use Network Policies in Kubernetes.
Since we want to secure the Azure AKS Kubernetes cluster at its deployment, network policies are required.
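Putting the networking pieces together, a VNet-backed deployment with Azure CNI and Azure network policy enabled might look like the following sketch. All resource names and address ranges are hypothetical, and the /21 subnet size follows the formula discussed above:

```shell
# Create a VNet with a /21 subnet sized per the formula above:
az network vnet create \
  --resource-group myResourceGroup \
  --name aksVnet \
  --address-prefixes 10.10.0.0/16 \
  --subnet-name aksSubnet \
  --subnet-prefixes 10.10.0.0/21

# Capture the subnet resource id for the cluster deployment:
subnetId=$(az network vnet subnet show \
  --resource-group myResourceGroup \
  --vnet-name aksVnet \
  --name aksSubnet \
  --query id -o tsv)

aksname="run-it-on-cloud"

# Deploy the cluster with Azure CNI and Azure network policy into that subnet:
az aks create \
  --resource-group myResourceGroup \
  --name "$aksname" \
  --network-plugin azure \
  --network-policy azure \
  --vnet-subnet-id "$subnetId"
```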
2- Integrate AKS with Azure AD
Prerequisites
Since AKS is a service managed by Microsoft, it provides interesting features such as integration with Azure Active Directory.
For a company already using Azure AD as its source of identity, whether synchronized from an on-premises LDAP directory or in cloud-native mode, the ability to use Azure AD directly to authenticate AKS users is a big advantage.
In addition, because Azure AD can enforce Multi-Factor Authentication (MFA), a user with MFA enabled will be required to use their authentication device to access AKS. Although a little more restrictive, the use of MFA should be considered a good practice.
Authentication details
Azure AD authentication is provided to AKS clusters through OpenID Connect, an identity layer built on top of the OAuth 2.0 protocol.
For more information about OpenID Connect, see Authorize access to web applications using OpenID Connect and Azure AD.
Without going into too much detail here, let’s summarize how it works:
1- AAD App Server
An application registered with AAD is required. This application, called a server application, is associated with the AKS cluster and is used to retrieve group memberships from AAD users.
To be able to perform this function, the application needs the following rights on the Microsoft Graph API:
- Application permissions: Read directory data
- Delegated permissions: Sign in and read user profile, and Read directory data
Through this application and the associated Service Principal (SP), the AKS cluster becomes able to verify the identity of the authenticating user.
2- AAD App client
A second application, registered as a “Native App” and acting as the client, is required to access the first application.
Create Azure AD server component
This section shows you how to create the required Azure AD components. You can also complete these steps using the Azure portal.
For the complete sample script used in this lab, see Azure CLI samples – AKS integration with Azure AD.
Assumptions
- Azure CLI version 2.0.61 or later installed and configured. Run az --version to find the version. If you need to install or upgrade, see Install Azure CLI.
- A variable for your desired AKS cluster name. The following examples use the name run-it-on-cloud.
1- Create the AAD App server
Create the server application component using the az ad app create command, then update the group membership claims using the az ad app update command.
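Following the workflow in the Azure documentation, the two commands might be combined as follows. The cluster name matches the variable defined above; everything else comes back from Azure at run time:

```shell
aksname="run-it-on-cloud"

# Create the server application and capture its appId:
serverApplicationId=$(az ad app create \
  --display-name "${aksname}Server" \
  --identifier-uris "https://${aksname}Server" \
  --query appId -o tsv)

# Include group membership information in the issued tokens:
az ad app update --id "$serverApplicationId" --set groupMembershipClaims=All
```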
2- Create the SP
Now let’s create a service principal for the server app using the az ad sp create command. This service principal is used by the AKS cluster to authenticate itself within the Azure platform. Then, get the service principal secret using the az ad sp credential reset command and assign it to a variable named serverApplicationSecret for use in one of the following steps:
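A minimal sketch of this step, assuming the Azure CLI version stated in the prerequisites (the credential reset flags changed in later CLI releases) and a placeholder for the appId captured earlier:

```shell
# Placeholder: replace with the appId captured in the previous step.
serverApplicationId="00000000-0000-0000-0000-000000000000"

# Create the service principal for the server application:
az ad sp create --id "$serverApplicationId"

# Reset the credential and capture the generated secret:
serverApplicationSecret=$(az ad sp credential reset \
  --name "$serverApplicationId" \
  --credential-description "AKSPassword" \
  --query password -o tsv)
```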
3- Set AAD App permissions
Assign these permissions using the az ad app permission add command, then grant the permissions assigned in the previous step to the server application using the az ad app permission grant command. This step fails if the current account is not a global admin. You also need to approve permissions that request information which may otherwise require administrative consent, using the az ad app permission admin-consent command:
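The GUIDs below come from the Azure CLI sample for AKS/AAD integration: the first identifies the Microsoft Graph API itself, and the others identify the delegated and application permissions listed earlier. Treat them as values to verify against the current Azure documentation:

```shell
# Placeholder: replace with the appId captured in the earlier steps.
serverApplicationId="00000000-0000-0000-0000-000000000000"

# 00000003-0000-0000-c000-000000000000 is the Microsoft Graph API.
# Scope = delegated permission, Role = application permission.
az ad app permission add \
  --id "$serverApplicationId" \
  --api 00000003-0000-0000-c000-000000000000 \
  --api-permissions e1fe6dd8-ba31-4d61-89e7-88639da4683d=Scope \
      06da0dbc-49e2-44d2-8312-53f166ab848a=Scope \
      7ab1d382-f21e-4acd-a863-ba3e13f7da61=Role

# Grant the permissions (fails unless run as a tenant admin):
az ad app permission grant \
  --id "$serverApplicationId" \
  --api 00000003-0000-0000-c000-000000000000

# Approve permissions that otherwise require administrative consent:
az ad app permission admin-consent --id "$serverApplicationId"
```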
4- Provision the AAD App client
The second Azure AD App is used when a user logs in to the AKS cluster with the Kubernetes CLI (kubectl).
In this section, we will create the client App, the associated service principal (SP), and finally set the necessary permissions.
Here is the complete sample script used in this section:
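A sketch of that script, mirroring the server-side steps under the same assumptions (Azure CLI 2.0.61-era syntax, placeholder appId, hypothetical names):

```shell
aksname="run-it-on-cloud"
# Placeholder: replace with the appId of the server application created earlier.
serverApplicationId="00000000-0000-0000-0000-000000000000"

# Create the client application as a native app:
clientApplicationId=$(az ad app create \
  --display-name "${aksname}Client" \
  --native-app \
  --reply-urls "https://${aksname}Client" \
  --query appId -o tsv)

# Create the associated service principal:
az ad sp create --id "$clientApplicationId"

# Retrieve the OAuth2 permission id exposed by the server application:
oAuthPermissionId=$(az ad app show \
  --id "$serverApplicationId" \
  --query "oauth2Permissions[0].id" -o tsv)

# Allow the client app to sign users in against the server app, then grant it:
az ad app permission add \
  --id "$clientApplicationId" \
  --api "$serverApplicationId" \
  --api-permissions "${oAuthPermissionId}=Scope"

az ad app permission grant \
  --id "$clientApplicationId" \
  --api "$serverApplicationId"
```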
Conclusion
To conclude, in this first part we have gathered the prerequisites for using Azure AD as a source of identity for an AKS cluster.
Next steps
In the next part, we will use Terraform and Azure CLI to deploy an AKS cluster using the AAD services created to implement the Kubernetes RBAC authentication.