In a large development project, it can be beneficial to share resources among individual teams, such as the build server, artifact repository, or code-quality system.
The challenge arises when the number of project participants grows fast, and with it the load on the shared infrastructure. A common response is to allocate more compute resources, but what if we could instead allocate resources on demand?
In this presentation, we'll look at a solution that uses Kubernetes to avoid long queues on the build server. We'll also see how to dynamically scale the Kubernetes cluster on Google Cloud Platform to reduce costs during periods of low activity while still handling bursts of traffic gracefully.
Lastly, I'll share some of our experiences with this solution and some of the pitfalls to look out for if you wish to do something similar.
Lars Martin Pedersen is a passionate full-stack developer who enjoys a challenge and likes to be involved in every technical part of a project.
He was recently made responsible for Cloud OPS in his team, and he would like to share some insights from that experience with you.