Google Cloud: Professional Cloud Architect (PCA) Exam Notes – Part VIII

Anthos

Anthos is not a single product; it is an umbrella term for a suite of products. It is a 100% software-based, inherently portable solution that aims to help customers get the most out of all of their underlying hardware. The components are primarily managed services built on top of open-source tools.

The Anthos Dashboard in Google Cloud provides a secure, unified interface to view and manage applications. It helps with cluster management, observability into the health and performance of the services, configuration management views, and features such as multi-cluster ingress.

For hands-on learning, complete the Anthos Service Mesh quest in Qwiklabs.

Core Components

  • Policy Management
    • Anthos Config Management and Policy Controller
      • Single pane of glass for managing configurations and policies both on-prem and in the cloud
      • Great for multi-cluster management, and is declarative and continuous
      • Automates policy and security at scale
      • Uses a Git repository to create common configurations that can be applied to Kubernetes clusters in the Anthos fleet
      • Policy Controller lets you manage policies at every stage of a deployment: inspection before deployment, enforcement at admission (API calls), and auditing after deployment
        • Policies include things like enforcing labels, restricting TLS versions, requiring logging, restricting role bindings to your domain, etc. (see the constraint sketch after this list)
  • Application Development & Deployment
    • GCP Marketplace (for Anthos-compatible apps), Cloud Run for Anthos (Built with Knative), Cloud Build for Anthos, Binary Authorization
  • Service Management
    • Anthos Service Mesh (Built with Istio)
      • Anthos Service Mesh (ASM) is Google Cloud’s managed control plane and commercial version of Istio, and it is compatible with the Istio APIs. The sidecar proxy is deployed as a container in every pod and handles all traffic going into and out of your service.
      • It can “inject chaos” (fault injection) for resilience testing
    • Review Service Mesh Overview above to understand benefits
  • Container Management
    • Anthos GKE (Built on Kubernetes)
  • Operations Management
    • ServiceOps, Cloud Monitoring, and Cloud Logging
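
To make the Policy Controller bullet above concrete, here is a minimal sketch of a constraint built on the K8sRequiredLabels constraint template that ships with Policy Controller, requiring every Namespace to carry an env label. The constraint name and label are hypothetical, and applying it directly with kubectl assumes you have cluster credentials; with Anthos Config Management the manifest would normally live in the synced Git repository instead.

```python
import subprocess

# Hedged sketch: a Policy Controller constraint using the K8sRequiredLabels
# constraint template. The constraint name and required label are hypothetical.
REQUIRED_LABELS_CONSTRAINT = """\
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-env          # hypothetical constraint name
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels:
      - key: "env"                # hypothetical required label
"""

def apply_manifest(manifest: str) -> None:
    """Apply a manifest with kubectl (assumes kubectl and cluster credentials)."""
    subprocess.run(["kubectl", "apply", "-f", "-"],
                   input=manifest, text=True, check=True)

if __name__ == "__main__":
    apply_manifest(REQUIRED_LABELS_CONSTRAINT)
```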

Technical Overview

Full Documentation

Computing Environments

  • Anthos on Google Cloud – Google Cloud hosts the control plane and the Kubernetes API server is the only control-plane component accessible to customers. GKE manages the nodes in the customer’s project with GCE instances (just like GKE above)
  • Anthos on-prem – Originally available only on VMware; all components are hosted in the customer’s on-prem virtualized environment. A single Anthos GKE on-prem installation and its associated clusters cannot span multiple data centers. There are at least four components:
    • Admin Workstation: Linux workstation, delivered as an OVA file, used to perform installation, scaling, and upgrades with gkectl. Often used with a jumpbox
    • Admin Cluster: Manages lifecycle operations; customers are not charged for the admin cluster because it is just a control plane
    • User Clusters: Run the user workloads and have master and worker nodes; gkectl is used to deploy user clusters (configured with a config.yaml file; see the gkectl sketch after this list)
      • You can have up to 10 User Clusters, and each cluster can have up to 100 nodes
    • Load Balancer: Exposes the workloads. This can be the bundled LB or a Google-supported third-party load balancer such as F5 BIG-IP, Citrix, or NSX-T
    • Installing on bare metal is now an option, so you don’t need VMware and can still take advantage of existing enterprise infrastructure
  • Anthos on AWS – All components are hosted in the customer’s AWS environment
  • Attached Clusters – This deployment option for Anthos is used when the Kubernetes distribution is offered through another cloud. Anthos does not manage the Kubernetes control plane or nodes, just the services running on the cluster. This is ideally a transitory state while customers fully migrate to Anthos. An agent, the “GKE Connect Agent”, is installed on the cluster and makes outbound connections to the Connect service.
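
The following is a minimal sketch of the gkectl flow referenced in the User Clusters bullet, run from the admin workstation and assuming the single config.yaml layout mentioned above. The file path is hypothetical and the exact flags vary by GKE on-prem version, so treat this as an outline rather than an exact recipe.

```python
import subprocess

CONFIG = "config.yaml"  # hypothetical path to the GKE on-prem cluster config

def gkectl(*args: str) -> None:
    """Run a gkectl subcommand from the admin workstation and fail loudly."""
    subprocess.run(["gkectl", *args], check=True)

if __name__ == "__main__":
    # Hedged outline of a cluster rollout; exact flags vary by version.
    gkectl("check-config", "--config", CONFIG)       # validate the config file
    gkectl("prepare", "--config", CONFIG)            # stage node OS images
    gkectl("create", "cluster", "--config", CONFIG)  # create the user cluster
```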

Cloud Run for Anthos provides a developer-focused experience by abstracting away the underlying infrastructure. It is powered by Knative and is a deployment option (a target destination) for Cloud Run.
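
As a hedged example of that deployment option, a service can be deployed to Cloud Run for Anthos by pointing gcloud at a GKE cluster instead of the fully managed platform; the service, image, cluster, and location below are placeholders.

```python
import subprocess

# Hedged sketch: deploy to Cloud Run for Anthos by targeting a GKE cluster
# with --platform gke. All names below are placeholders.
subprocess.run([
    "gcloud", "run", "deploy", "hello-svc",        # hypothetical service name
    "--image", "gcr.io/my-project/hello:latest",   # hypothetical image
    "--platform", "gke",                           # Cloud Run for Anthos target
    "--cluster", "my-anthos-cluster",              # hypothetical cluster name
    "--cluster-location", "us-central1-c",         # hypothetical cluster zone
], check=True)
```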

Anthos Fleets (Formerly Environs)

Fleets are a way to organize clusters that lets administrators group related infrastructure and services so they can manage multi-cluster capabilities and apply consistent policies across them.

Grouping: Logically group related infrastructure resources

Management: Provides the concepts and administration APIs for multi-cluster features

Exclusivity: A cluster can only be part of 1 fleet at a time

Cardinality: 1 fleet per Hub, 1 Hub per project – by design

All driven through Connect: Provides connectivity and authorization for Google automation and Cloud Console UI on remote Kubernetes clusters

Group what infrastructure?

  • Kubernetes Clusters (VMs in the future)
    • GKE, GKE On-Prem, On-AWS
    • Anthos Attached 3rd Party Clusters
  • Examples of groups: “All production clusters” or “LOB’s Staging Cluster”

Manage what multi-cluster capabilities?

  • Anthos Config Management – Define a config/policy across a fleet with audit and drift detection
  • Anthos Service Management – Define a service mesh across members of a fleet
  • Ingress for Anthos (IfA, now Multi Cluster Ingress) – Load-balance external traffic across the fleet for lowest latency and HA (see the manifest sketch after this list)
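
As a sketch of what Ingress for Anthos / Multi Cluster Ingress configuration can look like, the manifest below (applied to the fleet’s config cluster) spreads external traffic across member clusters; the resource names, namespace, and port are hypothetical, and the apiVersion may differ by release.

```python
import subprocess

# Hedged sketch: a MultiClusterIngress resource that load-balances external
# traffic across clusters in the fleet. Names, namespace, and port are
# hypothetical; the apiVersion may differ by release.
MULTI_CLUSTER_INGRESS = """\
apiVersion: networking.gke.io/v1
kind: MultiClusterIngress
metadata:
  name: frontend-ingress            # hypothetical name
  namespace: frontend               # hypothetical namespace
spec:
  template:
    spec:
      backend:
        serviceName: frontend-mcs   # hypothetical MultiClusterService
        servicePort: 8080           # hypothetical port
"""

if __name__ == "__main__":
    # Apply to the fleet's config cluster (assumes kubectl context points there).
    subprocess.run(["kubectl", "apply", "-f", "-"],
                   input=MULTI_CLUSTER_INGRESS, text=True, check=True)
```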

Why does this exist?

Compute Anywhere: Need to provide compute in several physical locations and cloud environments to address:

  • Legacy infrastructure or services
  • Regulatory requirements
  • Developer choice
  • Geographic proximity
  • Hardware specialization
  • Availability

Isolation: Need to support many clusters for isolation to allow:

  • Workload tiers
  • Security and compliance
  • Environment (dev/test/prod)
  • Controlling blast radius
  • Tenant separation

Many Clusters are hard to manage

  • Platform Operator Challenges
    • Creating/maintaining computing platforms
    • Managing common services across the platforms
    • Securing the platform
    • Providing development teams with access to the platforms
  • Service Operator Challenges
    • Deploying apps across multiple clusters for HA
    • Debugging, monitoring, and maintenance

Fleet Implementation

Users add their infrastructure resources to a fleet by registering (and connecting) those clusters to their GCP project using Connect. Features are then enabled and configured across all infrastructure in the fleet.
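
A minimal sketch of that registration step, assuming the gcloud command group available when these notes were written (`gcloud container hub memberships`; newer SDKs use `gcloud container fleet memberships`). The project, membership, and cluster names are placeholders.

```python
import subprocess

# Hedged sketch: register a GKE cluster to the fleet via Connect. Older SDKs
# use `container hub memberships`; newer ones use `container fleet memberships`.
# All names are placeholders.
subprocess.run([
    "gcloud", "container", "hub", "memberships", "register", "prod-cluster-1",
    "--gke-cluster", "us-central1-c/prod-cluster-1",  # hypothetical zone/cluster
    "--enable-workload-identity",                     # avoids a service account key file
    "--project", "my-project",                        # hypothetical project
], check=True)
```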

Migrate for Anthos

Use Migrate for Anthos to convert VM workloads into containers running on GKE or Anthos clusters. If a workload should remain a VM, use Migrate for Compute Engine instead. Make sure your workload is suitable for migration (supported OS, not a large stateful database, etc.) and follow the planning steps and best practices.
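
As a hedged illustration of the Migrate for Anthos flow with the migctl CLI, the steps below register a Compute Engine source, create a migration plan for one VM, and generate the deployable container artifacts; the names, project, zone, VM ID, and flags are assumptions and vary by Migrate for Anthos release.

```python
import subprocess

def migctl(*args: str) -> None:
    """Run a migctl subcommand (assumes Migrate for Anthos is installed)."""
    subprocess.run(["migctl", *args], check=True)

if __name__ == "__main__":
    # Hedged outline; names, project, zone, VM ID, and flags are assumptions.
    migctl("source", "create", "ce", "my-ce-source",          # Compute Engine source
           "--project", "my-project", "--zone", "us-central1-c")
    migctl("migration", "create", "my-migration",              # plan for one VM
           "--source", "my-ce-source", "--vm-id", "my-vm", "--intent", "Image")
    migctl("migration", "generate-artifacts", "my-migration")  # build deployable artifacts
```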