Network Configuration & Access Control
This guide provides detailed instructions for configuring critical networking components in Google Cloud Platform (GCP) prior to deploying a private Kubernetes cluster. It ensures a secure and scalable environment by leveraging Google Kubernetes Engine (GKE) best practices.
Introduction
A robust networking setup is essential for Kubernetes clusters in GCP to ensure private communication and controlled access to external resources. This guide covers the creation of a Virtual Private Cloud (VPC), a Cloud Router, a Cloud NAT gateway, and access control policies.
Step 1: Create a VPC Network
The VPC will act as the foundational network for your Kubernetes cluster.
- Navigate to the GCP Console:
  - Go to the VPC Networks page.
- Create the VPC:
  - Name: `prod-0`
  - Subnet Creation Mode: Automatic (or Custom if specific IP ranges are required).
- Add a Custom Subnet:
  - Name: `prod-subnet-0`
  - Region: `us-central1`
  - IPv4 Range: `10.2.204.0/22`
  - Toggle Private Google Access to "ON" to allow instances without external IPs to access Google APIs and services.
- Save and Apply.
For more information, refer to the Google Cloud VPC Network documentation.
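Equivalently, the VPC and subnet above can be created with the gcloud CLI. This is a sketch assuming the Custom subnet creation mode; adjust names and IP ranges to your environment:

```shell
# Create the VPC in custom subnet mode so IP ranges are controlled explicitly.
gcloud compute networks create prod-0 \
    --subnet-mode=custom

# Add the subnet with Private Google Access enabled, so instances
# without external IPs can still reach Google APIs and services.
gcloud compute networks subnets create prod-subnet-0 \
    --network=prod-0 \
    --region=us-central1 \
    --range=10.2.204.0/22 \
    --enable-private-ip-google-access
```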
Step 2: Configure a Cloud Router
Cloud Router manages dynamic route advertisements, a critical component for NAT.
- Navigate to the Cloud Router Page:
  - Access the Cloud Router page.
- Create a Cloud Router:
  - Name: `prod-router-0`
  - Region: `us-central1`
  - Network: `prod-0`
- Enable BGP (optional):
  - Set up BGP if required for dynamic routing with on-premises networks.
- Save Configuration.
For more information on Cloud Router, please refer to Cloud Router documentation.
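The same router can be created from the CLI. A minimal sketch, reusing the names from Step 1:

```shell
# Create the Cloud Router in the same region and network as the subnet.
gcloud compute routers create prod-router-0 \
    --network=prod-0 \
    --region=us-central1
# Optional: add --asn=<PRIVATE_ASN> if you plan to peer with
# on-premises networks over BGP (e.g. a private ASN such as 64512).
```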
Step 3: Set Up Cloud NAT
Cloud NAT provides egress internet access for private Kubernetes nodes.
- Navigate to the Cloud NAT Page:
  - Go to the Cloud NAT configuration.
- Create a Cloud NAT Gateway:
  - Name: `prod-gateway`
  - Region: `us-central1`
  - Network: `prod-0`
  - Router: `prod-router-0`
- Specify NAT Mapping:
  - Recommended: Use automatic allocation of NAT IP ranges for simplicity.
- Enable Logging:
  - Enable NAT logging for monitoring purposes.
- Save and Deploy.
For more information on Cloud NAT, please refer to Cloud NAT documentation.
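A CLI sketch of the same gateway, with automatic IP allocation and logging enabled as recommended above:

```shell
# Create the NAT gateway on the router created in Step 2,
# covering all subnet ranges in the region.
gcloud compute routers nats create prod-gateway \
    --router=prod-router-0 \
    --region=us-central1 \
    --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges \
    --enable-logging
```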
Step 4: Access Control
GKE uses IAM service accounts attached to your nodes to handle essential system tasks such as logging and monitoring. At a minimum, these node service accounts need the Kubernetes Engine Default Node Service Account role (`roles/container.defaultNodeServiceAccount`) in your project.
By default, GKE assigns the Compute Engine default service account, which is automatically created for your project, as the node service account. For more details, check out the Kubernetes Engine Default Node Service Account and Compute Engine Default Service Account documentation.
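The role can be granted from the CLI. In this sketch, `PROJECT_ID` and the service account email are placeholders for your project and the account attached to your nodes (the Compute Engine default service account is shown as an example):

```shell
# Grant the node service account the minimum role GKE nodes need.
# Replace PROJECT_ID and PROJECT_NUMBER with your own values.
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="serviceAccount:PROJECT_NUMBER-compute@developer.gserviceaccount.com" \
    --role="roles/container.defaultNodeServiceAccount"
```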
Step 5: Verify and Integrate
After configuring the network, ensure integration with GKE by setting up your private Kubernetes cluster.
Private Cluster Creation:
- Use the GKE Cluster Creation Guide to set up a private cluster.
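A private cluster on the network built above can be sketched with gcloud as follows; the cluster name and control-plane CIDR are illustrative, not prescribed by this guide:

```shell
# Create a private GKE cluster whose nodes have no external IPs;
# egress goes through the Cloud NAT gateway from Step 3.
gcloud container clusters create prod-cluster \
    --region=us-central1 \
    --network=prod-0 \
    --subnetwork=prod-subnet-0 \
    --enable-ip-alias \
    --enable-private-nodes \
    --master-ipv4-cidr=172.16.0.0/28
```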
Integrate with Zero-Trust frameworks like Cloudflare and add the following applications:
- IDHub Admin
  - Application URL: [IDHUB_FQDN]/admin
- Keycloak Master Realm
  - Application URL: [IDHUB_FQDN]/auth/admin/master/console/
- The `IDHUB_FQDN` placeholder is the fully qualified domain name (FQDN) of the IDHub application. See the FQDN configuration guide for a detailed explanation of what an FQDN is and how to configure it.
Additional Considerations
- Use Google Cloud Monitoring to track network traffic.
- Enable logging for system components and workloads.
- Enable Shielded GKE Nodes.
- Subscribe the cluster to the Stable release channel for upgrades.
- Enable HTTP load balancing.
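The considerations above map to create-time flags; a hedged sketch (the cluster name is a placeholder, and these flags can also be combined with the private-cluster flags from Step 5):

```shell
# Illustrative flags covering the hardening considerations above.
gcloud container clusters create prod-cluster \
    --region=us-central1 \
    --enable-shielded-nodes \
    --release-channel=stable \
    --logging=SYSTEM,WORKLOAD \
    --monitoring=SYSTEM \
    --addons=HttpLoadBalancing
```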