Wednesday, June 7, 2017

Notes From Docker Swarm and Kubernetes

Lecture by Jayesh Nazre
Notes Transcribed by Paul Fischer

Containerization vs virtualization 

Docker terms
The Docker client (docker build, docker pull, docker run) talks to the Docker host, where the daemon runs.
The daemon builds images, pulls images from a registry, and runs them as containers.

An ISO file mounted into a drive can be installed; that is an older analogue of images.
Images need to be stored somewhere, so obviously there are repositories out there.
Docker Hub and Google's registry exist, and you can run your own as well, in a similar fashion to GitHub.

For federal or state projects you do not want those images to be out in public.
Images will be explained shortly, but for now think of them as analogous to the .ami images AWS uses.

There are two ways of launching a container from an image; a running instance of an image is what is meant by a container.
An Apache web server can live in a container with an application such as WordPress inside it; create multiple instances of those and you get a cluster, say a cluster of web applications.
E.g., three web servers can run the web code: you create an image of those and tell Docker to create three 'replicas,' so if one of the instances fails, Docker can direct work to another instance.
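The replica idea above can be sketched in a compose v3 fragment (the service and image names here are illustrative, not from the lecture):

```yaml
# Ask Swarm for three replicas of one web-server image; if a
# container dies, Swarm starts a replacement to keep the count at 3.
version: '3'
services:
  web:
    image: nginx:1.13
    deploy:
      replicas: 3
```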

Docker Swarm - a Linux monopoly until the last two or three years
Three-tier architecture with Docker Swarm: application, data, and ???? layers
App layer

Manager node to worker nodes; manager to hosts.
Subnet1 carries the container traffic at the Docker layer between manager and worker; the app, data, and other layers communicate through similarly formulated subnets, etc.

Docker Swarm
Allows a chain of managers and workers. This gives a conceptual view of Docker: the daemon layer sits between the managers, eliminating the hierarchical problems while retaining the capability of the system to scale by adding more managers and workers.

Instances serve the Docker swarm through the containers.

Q: Can a host hold more than one role?
A: No; behind the scenes the host is the VM plus the Docker (Unix) daemon. Most of the tech supported by Docker is Linux-based; Java is simpler to use and natively supported. A three-tier architecture with the whole shebang running on the server is more fun than the desktop.

Q: Can you pick a leader to be the manager?
A: Another manager can be promoted to replace a former leader, and that is typically what happens.

A Docker Compose file allows a dialogue between the various clusters with one-line commands.

Services vs Tasks vs Containers

3 nginx replicas: the service (on the swarm manager) branches into three task instances, nginx.1, nginx.2, and nginx.3.
These run on worker nodes.

In this manner, if one of the containers fails, the swarm manager will reallocate the workload to the other nodes that are available.

In production there should be no Docker containers on the manager node if possible.
So the previous picture of the Docker swarm must be amended to move the containers to the worker nodes, letting the containers spawn on individual worker hosts.

Docker network types

A br0 network namespace branches into veths and a VTEP; the VTEP (port 4789/udp) communicates through the VXLAN tunnel to an identical branch under node 2 at a different IP address.
Together this constitutes a layer 3 IP transport network.
The tunnel is created by the Docker network type.
Think of the layer 3 IP transport network as the physical infrastructure, the mountain between two nodes; the VXLAN tunnel allows communication like a train through it between the two.
In the end, in a nutshell, you get packets of information between the two nodes.
Packets get moved from one node to the other. The VXLAN tunnel is a well-established concept, but there are other open source drivers and options available.
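As a sketch, the overlay driver is what asks Docker to build this VXLAN plumbing between nodes (the network name here is assumed):

```yaml
# Declaring an overlay network in a compose v3 file; in Swarm mode
# Docker creates the VXLAN tunnels between participating nodes.
networks:
  mywebnw:
    driver: overlay
```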

Docker provides the network and allows multiple nodes, not just two but three, four, etc., to communicate.

Q: Is the VXLAN traffic encrypted?
A: It does not have the capability of encryption alone, but if the network has its own encryption then the information is safe.
If someone uses a sniffer or other software on the layer 3 IP transport network, the VXLAN tunnel traffic will not be seen directly.

Docker Compose v3

version: '3'
services:
  webserver:
    image: myapache:10
    deploy:
      replicas: 2
    networks: [mywebnw, myappnw]
  appserver:
    image: mytomcat:10
    deploy:
      replicas: 2
    environment:
      - JDBC_CONNECTION_STRING=jdbc:mysql://mystack1_mysql:3306/web_customer_tracker?useSSL=false
    networks: [myappnw, mydbnw]
  mysql:
    image: mylocalsql:10
    deploy:
      replicas: 1
    networks: [mydbnw]
networks:
  mywebnw:
    driver: overlay
  myappnw:
    driver: overlay
  mydbnw:
    driver: overlay

You do not really need a three-tier architecture for the functionality; it makes the system more hacker-proof by exposing the web server instead of the database server in the event of an offensive action.
That is the reason for the logical reference seen in mytomcat:
images published by Apache (like ISOs) were pulled, a container was deployed on the machine, and a custom image called mytomcat was created, which can then be used to spawn multiple containers.

The logical reference was mystack1 as the name of the stack when it is deployed; the service name appserver must be the same throughout, allowing a logical reference to the server.

The app talks to MySQL on port 3306 through that connection string.
For mysql only one replica is being referenced, but for the reference to be logical, the environment variable must reference the service name.
The image can be uploaded to the cloud or manually loaded into your cluster.

One replica, because there are some things which must be taken into consideration.
It is difficult to have a system that relies on containers for database replicas: with two replicas, each will write information into its own container, and the two are not synchronized. Synchronization with multiple replicas would require a shared file system (e.g., NFS), so one replica is recommended for local work with a database.

You can do a mysqldump to basically get anything working out of the box: ship the dump with the image, or on the launch of the container go out to a network share and pick up the data file, for performance reasons.
With a large database you may experience performance problems with instance loading. There is no need to create Docker clusters for the storage system that is used, and the networking is done for you.

If you do want to do it in Amazon without using a managed storage service, you can store your transaction logs and your data somewhere external and restore them when the container dies.

If I delete my container, I will have to reload whatever data I had in order to have it returned.

To survive container failures at the file-system level, you can define a volume.
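A minimal sketch of such a volume in a compose file, assuming a MySQL service (names are illustrative): the named volume outlives any one container.

```yaml
# Named volume: MySQL's data directory lives outside the container,
# so replacing the container does not lose the data.
version: '3'
services:
  mysql:
    image: mysql:5.7
    volumes:
      - dbdata:/var/lib/mysql
volumes:
  dbdata:
```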

Q: Where do you define where the data lives and its entry point?
A: Yes, that is there; you can put that in the compose file under networks, e.g. mywebnw with driver overlay.

Everything must be written in the Docker Compose file described above.

Stateless architecture and microservices come together to deliver something; a monolithic solution is not what is wanted, so the databases must be merged and converged.
You do not want state to live in the logical references described above.
The end result can be incrementally added up to get to the server.

When you have a big file you may want to slice it and dice it.
There is only one replica in the example, but three networks are provided using the version above.
When the network is created externally, I get a copy of the network as well. The program is self-cleaning: when I take down my stack, the entire network goes down, hence a one-command teardown of the system is possible.

You can only do things through the master node; administratively, there is no control in the worker nodes.
To log in to the master:
root@ubuntu:~# docker node ls
Returns the list of nodes in the cluster; these are physical nodes with status, availability, and manager status, which shows Leader for the node selected as the managing node.
So I could have two web servers, two app servers, and one SQL server.

~# docker stack deploy -c /mysoftwares/mydocker/myfinal.yml mystack1
Deploys the file with a logical reference to the Docker Compose file provided above; you can use some scripting to make this dynamic as well.
The two web servers, two app servers, and one SQL server should now be services, with the networks created appropriately.

~# docker ps
Gives the master's view of the system and a log of when the containers were created and on which ports.

~$ sudo -s
Switches to a root shell to administer the system.

"Portainer" can be used to provide a graphical interface, e.g., to pick images from different providers and deploy them in the cluster. Everything done here can also be done at the command prompt, which may be demonstrated if there is time, which is unlikely.

The information is being accessed through Tomcat on the Java side, retrieving the data from the MySQL database.

Q: Log files, SSL keys, etc.: how are these injected into or pulled out of the containers?
A: There is an easy way to create an image of what you want, and you can do what you want with your baseline; it is pretty much a Unix box. You could put multiple applications in place (web app, data, and other systems), but if you do that your container will die. The recommendation is to run processes in separate containers (e.g., MongoDB in its own). Think of a container as something that you throw away: you will not try to figure out what is wrong with it, you will just throw it away and create a new container regularly. Everything done with a Unix box is possible here, and you can shell into the container.
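The throw-it-away attitude can be encoded in the compose file's deploy section; a minimal sketch, assuming Swarm mode:

```yaml
# Instead of debugging a sick container, let Swarm discard it and
# schedule a fresh replacement whenever it exits with an error.
deploy:
  restart_policy:
    condition: on-failure
```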

If you are new to containerization, docker is strongly recommended before going to kubernetes

Host 1 is the master node.
It communicates through the API server to hosts 2 and 3, the worker nodes.
Kubernetes was a contribution from Google, used internally to spawn between 40k and 100k containers, then contributed to the open source community, so many have moved on to Kubernetes.
In this case it relies on Docker, but it can be made to rely on any other container system.
The same architecture shown here was seen before with the master and worker nodes and master/manager.
The difference is the scalable CNI (Container Network Interface) plugins, somewhat analogous to the tunnel described above. Different open source teams, such as Flannel or Calico, provide the plumbing that creates the magic of containers within Pods, allowing all of the different functions, from the node proxies to the Docker engines, to communicate in an interconnected fashion.
The Pod Concept
The system must abstract the container that Docker is running from its managers,
so the master node does not manage containers; it manages pods. This addresses the hierarchy problem described in the Docker Swarm system above.
This actually predates Docker Swarm, and some services have been borrowed between the two.
With the abstraction of pods there is no container to manage; there is now the handling of pods.
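A minimal Pod manifest makes the abstraction concrete: the master schedules this Pod as one unit, and the two containers inside it share a network namespace (all names here are assumed for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mywebpod
spec:
  containers:
  - name: web
    image: nginx:1.13            # serves on port 80
  - name: helper
    image: busybox
    command: ["sleep", "3600"]   # sidecar sharing the pod's network
```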

Q: In either of these, can containers be migrated across hosts?
A: Yes, all of the capabilities described above remain in Kubernetes.

For production, three systems are recommended if possible; in a nutshell, create all of the boxes (API server, controller manager, etcd) so they are easily accessible.

The easiest way to install Kubernetes on your laptop is using minikube.
Another option is kubeadm;
kubeadm allows multiple nodes in a cluster, while minikube gives you a single node to play with the concepts.

There are also tools that give you a graphical way to install the Kubernetes cluster.

For a company, the best bet is a hosted solution.

Options with a graphical interface allow running the cluster on multiple providers.
Google has a system that lets you say how many masters and how many workers are needed in order to maximize efficiency.

Amazon EC2 Container Service (ECS) is not related to Kubernetes; it is built for Docker, works best with AWS, and should be used by those experimenting with that.

You can build all of the earlier systems

Kubernetes - sample app (deployment)

apiVersion: extensions/v1beta1
kind: Deployment
metadata: {name: mywebapp}
spec:
  replicas: 2
  template:
    metadata:
      labels: {app: mywebapp}
    spec:
      containers:
      - name: mywebapp
        image: mywebapp:1   # image name not captured in the notes
---
apiVersion: v1
kind: Service
metadata: {name: mywebappservice}
spec:
  ports:
  - port: 80
    protocol: TCP
  selector: {app: mywebapp}

The master has a dialogue with Heapster, which talks to the storage backend and to the kubelet/cAdvisor on the connected nodes, as well as on the containing node.

A graphical way of managing your cluster exists in both; this does not have to be done at the command prompt.

Q: Are there advantages to using Docker Swarm today?
A: Unless it is a cost concern I would not recommend it for production, but given the cost of VMs, if you have an old provisioned instance then this could be used.

Using Docker Datacenter would be a very large charge, and you need the infrastructural awareness to get into it.

Use it for the teardown, to integrate with Jenkins or other extant ALM tools.

~$ minikube status
~$ minikube start
~$ kubectl get pods --output=wide
Shows the pods, with restarts, ages, IPs, and status.

~$ minikube dashboard