Episode: 2961 Title: HPR2961: Kubernetics / Cloud - Terminology Source: https://hub.hackerpublicradio.org/ccdn.php?filename=/eps/hpr2961/hpr2961.mp3 Transcribed: 2025-10-24 13:55:50 --- This is HPR episode 2961 from Monday the 9th of December 2019. Today's show is entitled Kubernetes / Cloud - Terminology. It's hosted by Daniel Persson, is about 11 minutes long, and carries a clean flag. The summary is: we talk about terms often used when using Kubernetes. This episode of HPR is brought to you by AnHonestHost.com. Get 15% discount on all shared hosting with the offer code HPR15. That's HPR15. Better web hosting that's honest and fair at AnHonestHost.com.

Hello hackers and welcome to another podcast. Today I'm going to talk a little bit about cloud environments like Kubernetes and some of the terminology that you need to know in order to get into Kubernetes, because Kubernetes has a lot of different words and terms that are a little bit strange and that you might not have run into if you haven't worked in these kinds of environments.

So let's start with the word node. A node is something that will run your jobs. It could be a physical server, it could be a virtual server; it's something you can put some load on, where you have some CPU, some memory and so on, so you can run something on it. That's a node. Nodes are grouped into clusters. You have a lot of nodes that you group into clusters, and these clusters can run jobs for you. For instance, you can say "I want to run this amount of work, with a lot of different Docker containers, on this cluster", and then it's up to Kubernetes to figure out which node to select in order to run these jobs. So you can end up with a few jobs on one node and a few jobs on another node.
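As a rough sketch of how that scheduling decision is driven (all names here are invented for illustration), a workload declares how much CPU and memory it needs, and the Kubernetes scheduler picks a node in the cluster with enough free capacity:

```yaml
# Minimal Pod manifest; the scheduler matches these resource requests
# against the free CPU and memory on each node in the cluster.
apiVersion: v1
kind: Pod
metadata:
  name: example-job        # hypothetical name
spec:
  containers:
    - name: worker
      image: busybox:1.36
      command: ["sh", "-c", "echo doing work; sleep 3600"]
      resources:
        requests:
          cpu: "250m"      # a quarter of a CPU core
          memory: "128Mi"
```

You can see which node a workload landed on with `kubectl get pod example-job -o wide`.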
A container, a Docker container for instance, is a unit that you create that can run on a node and do a job for you. This container can, for instance, have a Linux environment, often a very stripped-down one, where you put for instance a web server, or a MySQL server, or some very simple logic that just takes some input and creates some output. You can do whatever you want with a container. The important part is that it is one unit that you can deploy somewhere, built in such a way that it is simple for it to run on a node.

Next up, we have a pod, and a pod is either one container or multiple containers. This is the unit of work that you can scale up and down. So for instance, you have one pod, you put it on a node, and you say "okay, this is my amount of work". If you need to restart this service that you put up, you restart that pod, it will come up again, and it will do the work for you. The important part with containers and pods is that they should be self-contained and they should usually not contain any data. You put the data outside of the pod, or you have a specific pod that handles the data, like a database server, if you want to run that kind of work in your environment. But usually you keep the data separated from the pod, so you can restart it and end up in a state similar to where you were before you restarted it. It should just be a compute unit. In order to solve this, you have volumes. You connect volumes to a pod, and those will be disk space or some external resource where you can keep state for the pod. So a pod should just be a compute unit, and the state should live outside of the pod. Next up, we have a concept called a replica set.
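As a hedged sketch of keeping state outside the compute unit (the names and the claim are invented for illustration, and the PersistentVolumeClaim is assumed to already exist), a pod manifest declares a volume and mounts it into the container, so the data survives a pod restart:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-state               # hypothetical name
spec:
  containers:
    - name: web
      image: nginx:1.25
      volumeMounts:
        - name: page-data
          mountPath: /usr/share/nginx/html  # state lives here, not in the image
  volumes:
    - name: page-data
      persistentVolumeClaim:
        claimName: page-data-claim   # assumed to exist; points at external storage
```

Restarting this pod gives you a fresh compute unit, while the mounted volume brings back the same data.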
There are a few different ways to scale pods, but one that is very common is the replica set. With a replica set you can say "I want a minimum of 3 pods but a maximum of 6 pods", and depending on the load on the actual work, Kubernetes will scale this up or down, so you can have 3 pods, 6 pods, or somewhere in between.

The next thing I want to talk about is services. Services are things that run in your cluster and help you with networking or other tasks that are not specific to any one pod, but can help different pods get their networking solved. One example is a load balancer, which can balance between a lot of different pods, or even between, for instance, two replica sets where you have some A/B testing going on. Another common service is a cert manager, which can be set up so that you have Let's Encrypt running in your cloud environment, giving each of the pods signed certificates for a specific domain. These should be valid SSL certificates, and the cert manager keeps them updated and makes sure you actually have a valid certificate for each of these work units.

Another thing that is very important is an ingress service, and this ingress service handles the communication with the outside world. If you set up a few pods in your Docker or Kubernetes environment, no one outside can talk to those pods: you can have traffic inside of Kubernetes and nobody outside can read that traffic. But if you want traffic to get into the Kubernetes cloud, you set up an ingress where you say "on this port I want to talk to this replica set, or the pods with this name, and send the traffic to this port on those pods". So the ingress handles the actual communication between the outside world and your pods.
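These three pieces can be sketched together as manifests (all names and the domain are made up for illustration). One detail worth hedging: a ReplicaSet itself keeps a fixed number of pods running; scaling between a minimum and a maximum based on load is usually done by a HorizontalPodAutoscaler pointed at it:

```yaml
# A ReplicaSet keeps a fixed number of identical pods running.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-rs
spec:
  replicas: 3
  selector:
    matchLabels: { app: web }
  template:
    metadata:
      labels: { app: web }
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
---
# A Service gives the pods one stable address and load-balances between them.
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector: { app: web }
  ports:
    - port: 80
---
# An Ingress routes traffic from the outside world to the Service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
    - host: example.com              # hypothetical domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-svc
                port: { number: 80 }
```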
And to take everything we have talked about so far into one unit you can deploy, you have a deployment. In this deployment configuration you can say "I want these kinds of services, I want these pods, I want these kinds of replica sets and these kinds of ingress rules", and when you have set all of that up, you can send one deployment to your Kubernetes cloud and it will bring that deployment up as one unit. You can also take that unit and stop it if you don't need the service anymore. A deployment could be, let's say, 100 pods of different kinds: web servers, MySQL servers, worker units, everything in one deployment sent to your Kubernetes cloud. So it's a kind of grouping of all of these things into one.

The last thing I want to talk about is configuration maps. Configuration maps can be attached to your deployments so you can do some configuration steps on the fly in your Kubernetes cloud. You can put in configuration values that you want to change during the run of this Kubernetes cloud, so you can actually change something that is already running without deploying it again.
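A minimal sketch of these last two pieces together, with invented names throughout: a ConfigMap holds the values, and a Deployment's pods read them as environment variables, so the values can be edited in the cluster without shipping a new image:

```yaml
# A ConfigMap keeps configuration values outside the pods.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config        # hypothetical name
data:
  LOG_LEVEL: "info"
---
# A Deployment describes the desired state and brings it up,
# or tears it down, as one unit.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 2
  selector:
    matchLabels: { app: app }
  template:
    metadata:
      labels: { app: app }
    spec:
      containers:
        - name: app
          image: busybox:1.36
          command: ["sh", "-c", "echo log level is $LOG_LEVEL; sleep 3600"]
          envFrom:
            - configMapRef:
                name: app-config   # values injected as environment variables
```

One caveat: values consumed as environment variables are only picked up when a pod starts, so changing them typically means restarting the pods; ConfigMaps mounted as files can be refreshed in running pods.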
So this was what I wanted to talk about today. If you want to follow more of what I'm doing, I have a YouTube channel; just search for my name and you will find it. If you have any questions about Kubernetes and so on, please comment and I will read those and perhaps create another episode from that. I hope that you learned something today, I hope that you liked this episode, and I hope to see you in the next one.

You've been listening to Hacker Public Radio at hackerpublicradio.org. We are a community podcast network that releases shows every weekday, Monday through Friday. Today's show, like all our shows, was contributed by an HPR listener like yourself. If you ever thought of recording a podcast, then click on our contribute link to find out how easy it really is. Hacker Public Radio was founded by the Digital Dog Pound and the Infonomicon Computer Club and is part of the binary revolution at binrev.com. If you have comments on today's show, please email the host directly, leave a comment on the website, or record a follow-up episode yourself. Unless otherwise stated, today's show is released under the Creative Commons Attribution-ShareAlike 3.0 license.