
In this deep-dive virtual session, Paras Thakur will explore the principles, benefits, and practical patterns behind application scaling on Google Cloud using Google Kubernetes Engine (GKE). As organizations evolve and their user bases grow, infrastructure must scale not just in volume but intelligently, reliably, and cost-efficiently. This session helps you understand how GKE is uniquely positioned to support modern, dynamic workloads.
Why Kubernetes—and Why GKE?
Key differentiators that make GKE more than just “Kubernetes in the cloud”
Google Cloud–native integrations (e.g., with IAM, networking, load balancing, autoscaling)
Comparison with other container orchestration and PaaS approaches
Core Scaling Principles & Patterns
Horizontal Pod Autoscaling, Cluster Autoscaling, and node pool strategies
Managing stateless vs. stateful services under scale
Designing for resilience: canary deployments, blue-green, rolling updates
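As a taste of the Horizontal Pod Autoscaling pattern above, here is a minimal sketch of an `autoscaling/v2` HorizontalPodAutoscaler manifest that scales a Deployment between 2 and 10 replicas on CPU utilization (the `web` Deployment name and the 70% target are illustrative, not from the session):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  # The workload being scaled -- a hypothetical Deployment named "web"
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    # Add or remove replicas to keep average CPU utilization near 70%
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Note that utilization is computed against each container's CPU *request*, so HPA behavior depends directly on the resource requests discussed under best practices.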
Best Practices and Pitfalls
Optimizing resource requests/limits to avoid overprovisioning
Handling “noisy neighbor” issues, burst traffic, or unpredictable load spikes
Observability and monitoring: metrics, logs, alerts to guide scaling decisions
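Tuning resource requests and limits, as mentioned above, typically comes down to a few lines in the Pod spec. A hedged sketch of one container's `resources` stanza (the values are illustrative starting points, not recommendations from the session):

```yaml
# Fragment of a Pod/Deployment container spec
containers:
  - name: web
    image: us-docker.pkg.dev/example-project/web:latest  # placeholder image
    resources:
      requests:
        cpu: "250m"      # what the scheduler reserves; drives HPA utilization math
        memory: "256Mi"
      limits:
        memory: "512Mi"  # hard cap; exceeding it gets the container OOM-killed
```

Setting requests well below limits leaves burst headroom but invites “noisy neighbor” contention; setting them equal gives predictable performance at the cost of potential overprovisioning.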
Real-World Use Cases & Lessons Learned
Examples from production systems (challenges, optimizations, trade-offs)
Migrating legacy workloads into Kubernetes and scaling them
Cost-efficiency strategies when scaling in the cloud
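One common cost-efficiency lever on GKE is steering fault-tolerant workloads onto Spot VMs via a node selector. A minimal sketch (the label is GKE's standard Spot node label; the grace period reflects Spot's short preemption notice):

```yaml
# Fragment of a Pod spec: run only on Spot VM nodes
spec:
  nodeSelector:
    cloud.google.com/gke-spot: "true"
  # Spot nodes can be reclaimed on short notice, so keep shutdown fast
  terminationGracePeriodSeconds: 25
```

This pattern suits stateless, interruption-tolerant services; stateful or latency-critical workloads generally stay on standard node pools.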
Interactive Q&A / Discussion
Attendees can bring their own scaling challenges for discussion
Live troubleshooting of scaling design scenarios
Speaker: Paras Thakur, Technical Solutions Engineer