Network Configuration & Access Control

This guide provides detailed instructions for configuring critical networking components in Google Cloud Platform (GCP) prior to deploying a private Kubernetes cluster. It ensures a secure and scalable environment by leveraging Google Kubernetes Engine (GKE) best practices.


Introduction

A robust networking setup is essential for Kubernetes clusters in GCP to ensure private communication and controlled access to external resources. This guide covers the creation of a Virtual Private Cloud (VPC), a Cloud Router, a Cloud NAT gateway, and access control policies.

Step 1: Create a VPC Network

The VPC will act as the foundational network for your Kubernetes cluster.

  1. Navigate to the VPC networks page in the GCP Console.
  2. Create the VPC:
    • Name: prod-0.
    • Subnet Creation Mode: Automatic (or Custom if specific IP ranges are required).
  3. Add a Custom Subnet:
    • Name: prod-subnet-0.
    • Region: us-central1.
    • IPv4 Range: 10.2.204.0/22.
    • Toggle Private Google Access to "ON" to allow instances without external IPs to access Google APIs and services.
  4. Save and Apply.
Tip: For more information, refer to the Google Cloud VPC Network documentation.
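The console steps above can also be scripted with the gcloud CLI. This is a sketch assuming custom subnet creation mode; the names and IP range match the values listed above:

```shell
# Create the VPC in custom subnet mode
gcloud compute networks create prod-0 \
    --subnet-mode=custom

# Add the custom subnet with Private Google Access enabled, so that
# instances without external IPs can still reach Google APIs
gcloud compute networks subnets create prod-subnet-0 \
    --network=prod-0 \
    --region=us-central1 \
    --range=10.2.204.0/22 \
    --enable-private-ip-google-access
```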


Step 2: Configure a Cloud Router

Cloud Router manages dynamic route advertisements and is a prerequisite for Cloud NAT, which must be attached to a router in the same region and network.

  1. Navigate to the Cloud Routers page in the GCP Console.
  2. Create a Cloud Router:
    • Name: prod-router-0.
    • Region: us-central1.
    • Network: prod-0.
  3. Enable BGP (optional):
    • Set up BGP if required for dynamic routing with on-premises networks.
  4. Save Configuration.
Tip: For more information on Cloud Router, refer to the Cloud Router documentation.
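The equivalent gcloud command for the router above is sketched below; the BGP ASN is only needed if you peer with on-premises networks, and 64514 is just an example private ASN:

```shell
# Create the Cloud Router in the same region and network as the subnet.
# Add --asn=64514 (example value) only if BGP peering is required.
gcloud compute routers create prod-router-0 \
    --network=prod-0 \
    --region=us-central1
```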


Step 3: Set Up Cloud NAT

Cloud NAT provides egress internet access for private Kubernetes nodes.

  1. Navigate to the Cloud NAT page in the GCP Console.
  2. Create a Cloud NAT Gateway:
    • Name: prod-gateway.
    • Region: us-central1.
    • Network: prod-0.
    • Router: prod-router-0.
  3. Specify NAT Mapping:
    • Recommended: Use automatic allocation of NAT IP ranges for simplicity.
  4. Enable Logging:
    • For monitoring purposes, enable NAT logging.
  5. Save and Deploy.
Tip: For more information on Cloud NAT, refer to the Cloud NAT documentation.
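As a sketch, the NAT gateway above can be created with gcloud using automatic IP allocation and logging, matching the recommended settings:

```shell
# Create the NAT gateway on the existing router, covering all subnet
# ranges, with automatic external IP allocation and logging enabled
gcloud compute routers nats create prod-gateway \
    --router=prod-router-0 \
    --router-region=us-central1 \
    --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges \
    --enable-logging
```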


Step 4: Access Control

GKE uses IAM service accounts attached to your nodes to handle essential system tasks like logging and monitoring. At a basic level, these node service accounts need the Kubernetes Engine Default Node Service Account role (roles/container.defaultNodeServiceAccount) in your project.

By default, GKE assigns the Compute Engine default service account, which is automatically created for your project, as the node service account. For more details, check out the Kubernetes Engine Default Node Service Account and Compute Engine Default Service Account documentation.
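Granting the role described above can be done with a single IAM binding; in this sketch, PROJECT_ID and NODE_SA_EMAIL are placeholders for your project ID and the node service account's email address:

```shell
# Grant the node service account the default-node role on the project.
# PROJECT_ID and NODE_SA_EMAIL are placeholders, not real values.
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="serviceAccount:NODE_SA_EMAIL" \
    --role="roles/container.defaultNodeServiceAccount"
```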

Step 5: Verify and Integrate

After configuring the network, ensure integration with GKE by setting up your private Kubernetes cluster.

  1. Private Cluster Creation: create a private GKE cluster attached to the prod-0 network and the prod-subnet-0 subnet.

  2. Integrate with Zero-Trust frameworks like Cloudflare and add the following applications:

  • IDHub Admin
    • Application URL: [IDHUB_FQDN]/admin
  • Keycloak Master Realm
    • Application URL: [IDHUB_FQDN]/auth/admin/master/console/
Note:
  • IDHUB_FQDN is the fully qualified domain name (FQDN) of the IDHub application.
  • See the FQDN documentation for a detailed explanation of what an FQDN is and how to configure one.
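A private cluster on the network configured earlier might be created as sketched below; prod-cluster and the control-plane CIDR 172.16.0.0/28 are example values, not requirements:

```shell
# Create a private GKE cluster on the VPC and subnet from the earlier
# steps. Nodes get no external IPs and reach the internet via Cloud NAT.
gcloud container clusters create prod-cluster \
    --region=us-central1 \
    --network=prod-0 \
    --subnetwork=prod-subnet-0 \
    --enable-ip-alias \
    --enable-private-nodes \
    --master-ipv4-cidr=172.16.0.0/28
```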

Additional Considerations

  • Use Google Cloud Monitoring to track network traffic.
  • Enable logging for both system components and workloads.
  • Enable Shielded GKE Nodes.
  • Upgrade clusters through the Stable release channel.
  • Enable HTTP load balancing.
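Two of these considerations can be applied to an existing cluster with gcloud; this is a sketch in which prod-cluster is a placeholder name:

```shell
# Enable Shielded GKE Nodes on an existing cluster
gcloud container clusters update prod-cluster \
    --region=us-central1 \
    --enable-shielded-nodes

# Collect logs for both system components and workloads
gcloud container clusters update prod-cluster \
    --region=us-central1 \
    --logging=SYSTEM,WORKLOAD
```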