Kubectl Commands

In this post I've collected some quick kubectl commands to get started with Kubernetes.

Newyork:.kube esumit$ kubectl --kubeconfig="k8-fccollect.yaml" create deployment hello-node --image=gcr.io/hello-minikube-zero-install/hello-node

deployment.apps/hello-node created
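To watch the rollout finish before moving on, you can also run:

kubectl --kubeconfig="k8-fccollect.yaml" rollout status deployment hello-node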

Newyork:.kube esumit$ kubectl --kubeconfig="k8-fccollect.yaml" get all

NAME                              READY   STATUS    RESTARTS   AGE

pod/hello-node-55b49fb9f8-c9xx5   1/1     Running   0          53s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE

service/kubernetes   ClusterIP   10.245.0.1   <none>        443/TCP   115d

NAME                         READY   UP-TO-DATE   AVAILABLE   AGE

deployment.apps/hello-node   1/1     1            1           56s

NAME                                    DESIRED   CURRENT   READY   AGE

replicaset.apps/hello-node-55b49fb9f8   1         1         1       55s

Newyork:.kube esumit$

Newyork:.kube esumit$ kubectl --kubeconfig="k8-fccollect.yaml" get events

LAST SEEN   TYPE     REASON              OBJECT                             MESSAGE

3m5s        Normal   Scheduled           pod/hello-node-55b49fb9f8-c9xx5    Successfully assigned default/hello-node-55b49fb9f8-c9xx5 to fccollect-bnx8

3m3s        Normal   Pulling             pod/hello-node-55b49fb9f8-c9xx5    Pulling image "gcr.io/hello-minikube-zero-install/hello-node"

2m36s       Normal   Pulled              pod/hello-node-55b49fb9f8-c9xx5    Successfully pulled image "gcr.io/hello-minikube-zero-install/hello-node"

2m36s       Normal   Created             pod/hello-node-55b49fb9f8-c9xx5    Created container hello-node

2m35s       Normal   Started             pod/hello-node-55b49fb9f8-c9xx5    Started container hello-node

3m5s        Normal   SuccessfulCreate    replicaset/hello-node-55b49fb9f8   Created pod: hello-node-55b49fb9f8-c9xx5

3m5s        Normal   ScalingReplicaSet   deployment/hello-node              Scaled up replica set hello-node-55b49fb9f8 to 1

Newyork:.kube esumit$

Newyork:.kube esumit$ kubectl --kubeconfig="k8-fccollect.yaml" expose deployment hello-node --type=LoadBalancer --port=8080

service/hello-node exposed

Newyork:.kube esumit$ kubectl --kubeconfig="k8-fccollect.yaml" get services

NAME         TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)          AGE

hello-node   LoadBalancer   10.245.120.20   159.65.211.207   8080:31568/TCP   105s

kubernetes   ClusterIP      10.245.0.1      <none>           443/TCP          115d
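Once the LoadBalancer has an external IP (it can take a minute or two to provision), the app should be reachable directly; for example, with the IP shown above:

curl http://159.65.211.207:8080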

[Screenshot: K8 HelloWorld response in the browser]

Newyork:.kube esumit$ kubectl --kubeconfig="k8-fccollect.yaml" get pods

NAME                          READY   STATUS    RESTARTS   AGE

hello-node-55b49fb9f8-c9xx5   1/1     Running   0          26m

Newyork:.kube esumit$ kubectl --kubeconfig="k8-fccollect.yaml" describe hello-node-55b49fb9f8-c9xx5

error: the server doesn't have a resource type "hello-node-55b49fb9f8-c9xx5"

(describe needs a resource type before the name, i.e. kubectl describe pod <name>, hence the corrected command below.)

Newyork:.kube esumit$ kubectl --kubeconfig="k8-fccollect.yaml" describe pod hello-node-55b49fb9f8-c9xx5

Name:           hello-node-55b49fb9f8-c9xx5

Namespace:      default

Priority:       0

Node:           fccollect-bnx8/10.131.235.122

Start Time:     Thu, 26 Dec 2019 20:11:05 +1000

Labels:         app=hello-node

                pod-template-hash=55b49fb9f8

Annotations:    <none>

Status:         Running

IP:             10.244.0.249

Controlled By:  ReplicaSet/hello-node-55b49fb9f8

Containers:

  hello-node:

    Container ID:   docker://26108e00ae2983c191211e6d754b251c613618bdf7c9d9d36f1f1fac466054f4

    Image:          gcr.io/hello-minikube-zero-install/hello-node

    Image ID:       docker-pullable://gcr.io/hello-minikube-zero-install/hello-node@sha256:9cf82733f7278ae7ae899d432f8d3b3bb0fcb54e673c67496a9f76bb58f30a1c

    Port:           <none>

    Host Port:      <none>

    State:          Running

      Started:      Thu, 26 Dec 2019 20:11:35 +1000

    Ready:          True

    Restart Count:  0

    Environment:    <none>

    Mounts:

      /var/run/secrets/kubernetes.io/serviceaccount from default-token-gf5v2 (ro)

Conditions:

  Type              Status

  Initialized       True

  Ready             True

  ContainersReady   True

  PodScheduled      True

Volumes:

  default-token-gf5v2:

    Type:        Secret (a volume populated by a Secret)

    SecretName:  default-token-gf5v2

    Optional:    false

QoS Class:       BestEffort

Node-Selectors:  <none>

Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s

                 node.kubernetes.io/unreachable:NoExecute for 300s

Events:

  Type    Reason     Age   From                     Message

  ----    ------     ----  ----                     -------

  Normal  Scheduled  27m   default-scheduler        Successfully assigned default/hello-node-55b49fb9f8-c9xx5 to fccollect-bnx8

  Normal  Pulling    27m   kubelet, fccollect-bnx8  Pulling image "gcr.io/hello-minikube-zero-install/hello-node"

  Normal  Pulled     27m   kubelet, fccollect-bnx8  Successfully pulled image "gcr.io/hello-minikube-zero-install/hello-node"

  Normal  Created    27m   kubelet, fccollect-bnx8  Created container hello-node

  Normal  Started    27m   kubelet, fccollect-bnx8  Started container hello-node

Newyork:.kube esumit$

Newyork:.kube esumit$ kubectl --kubeconfig="k8-fccollect.yaml" cluster-info

Kubernetes master is running at https://91a32bee-1c2d-4fa8-8ade-1d2e5329645a.k8s.ondigitalocean.com

CoreDNS is running at https://91a32bee-1c2d-4fa8-8ade-1d2e5329645a.k8s.ondigitalocean.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Newyork:.kube esumit$

Newyork:.kube esumit$ kubectl --kubeconfig="k8-fccollect.yaml" api-versions

admissionregistration.k8s.io/v1beta1

apiextensions.k8s.io/v1beta1

apiregistration.k8s.io/v1

apiregistration.k8s.io/v1beta1

apps/v1

apps/v1beta1

apps/v1beta2

authentication.k8s.io/v1

authentication.k8s.io/v1beta1

authorization.k8s.io/v1

authorization.k8s.io/v1beta1

autoscaling/v1

autoscaling/v2beta1

autoscaling/v2beta2

batch/v1

batch/v1beta1

certificates.k8s.io/v1beta1

cilium.io/v2

coordination.k8s.io/v1

coordination.k8s.io/v1beta1

events.k8s.io/v1beta1

extensions/v1beta1

networking.k8s.io/v1

networking.k8s.io/v1beta1

node.k8s.io/v1beta1

policy/v1beta1

rbac.authorization.k8s.io/v1

rbac.authorization.k8s.io/v1beta1

scheduling.k8s.io/v1

scheduling.k8s.io/v1beta1

snapshot.storage.k8s.io/v1alpha1

storage.k8s.io/v1

storage.k8s.io/v1beta1

v1

Newyork:.kube esumit$
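A related command, api-resources, lists the resource types themselves (with their short names); it also shows up in the shell history below:

kubectl --kubeconfig="k8-fccollect.yaml" api-resources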

 

For reference, here is the full shell history from this session:

cat k8sexperiment-kubeconfig-latest.yaml

  504  kubectl --kubeconfig="k8sexperiment-kubeconfig-latest.yaml" get nodes

  505  kubectl --kubeconfig="k8sexperiment-kubeconfig-latest.yaml" get nodes

  506  cd ~/.kube/

  507  ls

  508  kubectl --kubeconfig="k8s-fccollect-kubeconfig.yaml" get nodes

  509  kubectl get nodes

  510  cat k8s-fccollect-kubeconfig.yaml

  511  ls

  512  ls

  513  sudo vi k8s-fccollect-kubeconfig.yaml

  514  sudo vi k8-fccollect.yaml

  515  kubectl --kubeconfig="k8-fccollect.yaml" get nodes

  516  kubectl help

  517  kubectl api-versions

  518  kubectl --kubeconfig="k8-fccollect.yaml" api-versions

  519  kubectl --kubeconfig="k8-fccollect.yaml" cluster-info

  520  kubectl --kubeconfig="k8-fccollect.yaml" api-resources

  521  kubectl --kubeconfig="k8-fccollect.yaml" get-clusters

  522  kubectl --kubeconfig="k8-fccollect.yaml" get-clusters

  523  kubectl --kubeconfig="k8-fccollect.yaml" get-contexts

  524  kubectl --kubeconfig="k8-fccollect.yaml" get

  525  kubectl --kubeconfig="k8-fccollect.yaml" explain

  526  kubectl --kubeconfig="k8-fccollect.yaml" nodesexplain

  527  kubectl --kubeconfig="k8-fccollect.yaml" nodes explain

  528  kubectl --kubeconfig="k8-fccollect.yaml" version

  529  kubectl --kubeconfig="k8-fccollect.yaml" get pods

  530  kubectl --kubeconfig="k8-fccollect.yaml" current-context

  531  kubectl --kubeconfig="k8-fccollect.yaml" create deployment hello-node --image=gcr.io/hello-minikube-zero-install/hello-node

  532  kubectl --kubeconfig="k8-fccollect.yaml" get all

  533  kubectl --kubeconfig="k8-fccollect.yaml" get events

  534  kubectl --kubeconfig="k8-fccollect.yaml" expose deployment hello-node --type=LoadBalancer --port=8080

  535  kubectl --kubeconfig="k8-fccollect.yaml" get services

  536  kubectl --kubeconfig="k8-fccollect.yaml" get pods

  537  kubectl --kubeconfig="k8-fccollect.yaml" describe hello-node-55b49fb9f8-c9xx5

  538  kubectl --kubeconfig="k8-fccollect.yaml" describe pod hello-node-55b49fb9f8-c9xx5

  539  pwd

  540  cd ~

  541  ls

  542  cd Documents/

  543  ls

  544  mkdir k8s

  545  cd k8s

  546  sudo vi busybox.yaml

  547  kubectl --kubeconfig="k8-fccollect.yaml" create -f busybox.yaml

  548  ls

  549  cp ~/.kube/k8-fccollect.yaml .

  550  kubectl --kubeconfig="k8-fccollect.yaml" create -f busybox.yaml

  551  sudo vi busybox.yaml

  552  kubectl --kubeconfig="k8-fccollect.yaml" create -f busybox.yaml

  553  kubectl --kubeconfig="k8-fccollect.yaml" get s

  554  kubectl --kubeconfig="k8-fccollect.yaml" get s [TAB]

  555  history

Newyork:k8s esumit$
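The busybox.yaml created above isn't shown in this post; a minimal sketch of what such a manifest typically looks like (the image tag and sleep command are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
  - name: busybox
    image: busybox:1.31          # image tag is an assumption
    command: ["sleep", "3600"]   # keep the pod alive so you can exec into it
  restartPolicy: Always

It would then be created exactly as in the history above: kubectl --kubeconfig="k8-fccollect.yaml" create -f busybox.yaml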

Some useful references on service dependencies and startup ordering in Kubernetes:

https://stackoverflow.com/questions/50838141/how-can-we-create-service-dependencies-using-kubernetes

https://stackoverflow.com/questions/49368047/what-is-the-equivalent-for-depends-on-in-kubernetes

https://stackoverflow.com/questions/54928861/helm-wait-till-dependency-deployment-are-ready-on-kubernetes

Typically you don't; you just let Helm (or kubectl apply -f) start everything in one shot and let Kubernetes retry until everything comes up.

The most common pattern is for a container process to simply crash at startup if an external service isn’t available; the Kubernetes Pod mechanism will restart the container when this happens. If the dependency never comes up you’ll be stuck in CrashLoopBackOff state forever, but if it appears in 5-10 seconds then everything will come up normally within a minute or two.

Also remember that pods of any sort are fairly disposable in Kubernetes. In my experience, if something isn't working in a service, one of the first things to try is kubectl delete pod and letting the Deployment controller recreate it. Kubernetes can do this on its own too, for example if it decides it needs to relocate a pod onto a different node. That is: even if some dependency is up when your pod first starts up, there's no guarantee it will stay up forever.
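If you do want an explicit wait instead of crash-and-retry, the common workaround is an initContainer that blocks until the dependency's Service DNS name resolves. A minimal sketch, assuming a dependency Service called my-db (all names and images here are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      initContainers:
      - name: wait-for-db
        image: busybox:1.31
        # Block until the my-db Service name resolves in cluster DNS;
        # the main container starts only after this exits successfully.
        command: ['sh', '-c', 'until nslookup my-db; do echo waiting for my-db; sleep 2; done']
      containers:
      - name: my-app
        image: my-app:latest     # placeholder image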

Pod communication across nodes and namespaces

If the services are in the same namespace, you can use http://servicename:port

If they are in different namespaces, you can use the FQDN http://servicename.namespace.svc.cluster.local:port

 

By default, pods can communicate with each other by their IP address, regardless of the namespace they’re in.

You can see the IP address of each pod with:

kubectl get pods -o wide --all-namespaces

However, the normal way to communicate within a cluster is through Service resources.

https://stackoverflow.com/questions/45720084/how-to-make-two-kubernetes-services-talk-to-each-other

https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/#my-service-is-missing-endpoints

A Service also has an IP address and, additionally, a DNS name. A Service is backed by a set of pods, and it forwards requests addressed to it to one of those backing pods.

The fully qualified DNS name of a Service is:

<service-name>.<service-namespace>.svc.cluster.local

This can be resolved to the IP address of the Service from anywhere in the cluster (regardless of namespace).

For example, if you have:

  • Namespace ns-a: Service svc-a → set of pods A
  • Namespace ns-b: Service svc-b → set of pods B

Then a pod of set A can reach a pod of set B by making a request to:

svc-b.ns-b.svc.cluster.local
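A hedged sketch of what svc-b might look like (the selector and port here are assumptions), plus a quick way to verify the DNS name from a pod in set A:

apiVersion: v1
kind: Service
metadata:
  name: svc-b
  namespace: ns-b
spec:
  selector:
    app: app-b          # must match the labels on the set-B pods
  ports:
  - port: 8080          # port the Service exposes
    targetPort: 8080    # port the backing pods listen on

# from a pod in set A (pod name and port are placeholders):
kubectl exec -it <pod-a-name> -n ns-a -- wget -qO- http://svc-b.ns-b.svc.cluster.local:8080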
