This article walks through how Kubernetes integrates with CNI plugins, following the kubelet code path from pod sync all the way down to the CNI plugin invocation.
The CNI project ships a Go library, libcni, which defines the interface that applications integrating CNI plugins should call. The interface is defined as follows:
libcni/api.go:51

```go
type CNI interface {
	AddNetworkList(net *NetworkConfigList, rt *RuntimeConf) (types.Result, error)
	DelNetworkList(net *NetworkConfigList, rt *RuntimeConf) error

	AddNetwork(net *NetworkConfig, rt *RuntimeConf) (types.Result, error)
	DelNetwork(net *NetworkConfig, rt *RuntimeConf) error
}
```
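To make the contract concrete, here is a minimal in-memory sketch of that Add/Del pair. The type names mirror libcni but are simplified stand-ins (the real types live in the containernetworking/cni repo and carry much more state); the fake implementation only illustrates the lifecycle a caller such as kubelet relies on.

```go
package main

import "fmt"

// Simplified stand-ins for libcni's types; illustrative only.
type NetworkConfig struct {
	Name string
	Type string // e.g. "bridge", "flannel"
}

type RuntimeConf struct {
	ContainerID string
	NetNS       string
	IfName      string
}

type Result struct {
	IP string
}

// CNI mirrors the shape of libcni's interface: an Add/Del pair taking a
// network config plus per-container runtime info.
type CNI interface {
	AddNetwork(net *NetworkConfig, rt *RuntimeConf) (*Result, error)
	DelNetwork(net *NetworkConfig, rt *RuntimeConf) error
}

// fakeCNI is an in-memory implementation: AddNetwork "attaches" a container
// and returns a result, DelNetwork detaches it.
type fakeCNI struct {
	attached map[string]string // containerID -> assigned IP
}

func (f *fakeCNI) AddNetwork(net *NetworkConfig, rt *RuntimeConf) (*Result, error) {
	ip := fmt.Sprintf("10.0.0.%d", len(f.attached)+2)
	f.attached[rt.ContainerID] = ip
	return &Result{IP: ip}, nil
}

func (f *fakeCNI) DelNetwork(net *NetworkConfig, rt *RuntimeConf) error {
	delete(f.attached, rt.ContainerID)
	return nil
}

func main() {
	var cni CNI = &fakeCNI{attached: map[string]string{}}
	conf := &NetworkConfig{Name: "mynet", Type: "bridge"}
	rt := &RuntimeConf{ContainerID: "sandbox-1", NetNS: "/proc/42/ns/net", IfName: "eth0"}

	res, _ := cni.AddNetwork(conf, rt)
	fmt.Println("attached with IP", res.IP) // prints: attached with IP 10.0.0.2
	_ = cni.DelNetwork(conf, rt)
}
```

kubelet is exactly such a caller: it never manipulates pod network interfaces itself, it only drives this Add/Del pair at the right points in the pod lifecycle, as the rest of this article traces.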
kubelet's Run method eventually calls the syncLoopIteration function, which syncs pods in response to events arriving on various channels.
pkg/kubelet/kubelet.go:1794

```go
// syncLoopIteration reads from various channels and dispatches pods to the
// given handler.
//
// Arguments:
// 1.  configCh:       a channel to read config events from
// 2.  handler:        the SyncHandler to dispatch pods to
// 3.  syncCh:         a channel to read periodic sync events from
// 4.  houseKeepingCh: a channel to read housekeeping events from
// 5.  plegCh:         a channel to read PLEG updates from
//
// Events are also read from the kubelet liveness manager's update channel.
//
// The workflow is to read from one of the channels, handle that event, and
// update the timestamp in the sync loop monitor.
//
// Here is an appropriate place to note that despite the syntactical
// similarity to the switch statement, the case statements in a select are
// evaluated in a pseudorandom order if there are multiple channels ready to
// read from when the select is evaluated.  In other words, case statements
// are evaluated in random order, and you can not assume that the case
// statements evaluate in order if multiple channels have events.
//
// With that in mind, in truly no particular order, the different channels
// are handled as follows:
//
// * configCh: dispatch the pods for the config change to the appropriate
//             handler callback for the event type
// * plegCh: update the runtime cache; sync pod
// * syncCh: sync all pods waiting for sync
// * houseKeepingCh: trigger cleanup of pods
// * liveness manager: sync pods that have failed or in which one or more
//                     containers have failed liveness checks
func (kl *Kubelet) syncLoopIteration(configCh <-chan kubetypes.PodUpdate, handler SyncHandler,
	syncCh <-chan time.Time, housekeepingCh <-chan time.Time, plegCh <-chan *pleg.PodLifecycleEvent) bool {
	kl.syncLoopMonitor.Store(kl.clock.Now())
	select {
	case u, open := <-configCh:
		// Update from a config source; dispatch it to the right handler
		// callback.
		if !open {
			glog.Errorf("Update channel is closed. Exiting the sync loop.")
			return false
		}

		switch u.Op {
		case kubetypes.ADD:
			glog.V(2).Infof("SyncLoop (ADD, %q): %q", u.Source, format.Pods(u.Pods))
			// After restarting, kubelet will get all existing pods through
			// ADD as if they are new pods. These pods will then go through the
			// admission process and *may* be rejected. This can be resolved
			// once we have checkpointing.
			handler.HandlePodAdditions(u.Pods)
		case kubetypes.UPDATE:
			glog.V(2).Infof("SyncLoop (UPDATE, %q): %q", u.Source, format.PodsWithDeletiontimestamps(u.Pods))
			handler.HandlePodUpdates(u.Pods)
		case kubetypes.REMOVE:
			glog.V(2).Infof("SyncLoop (REMOVE, %q): %q", u.Source, format.Pods(u.Pods))
			handler.HandlePodRemoves(u.Pods)
		case kubetypes.RECONCILE:
			glog.V(4).Infof("SyncLoop (RECONCILE, %q): %q", u.Source, format.Pods(u.Pods))
			handler.HandlePodReconcile(u.Pods)
		case kubetypes.DELETE:
			glog.V(2).Infof("SyncLoop (DELETE, %q): %q", u.Source, format.Pods(u.Pods))
			// DELETE is treated as a UPDATE because of graceful deletion.
			handler.HandlePodUpdates(u.Pods)
		case kubetypes.SET:
			// TODO: Do we want to support this?
			glog.Errorf("Kubelet does not support snapshot update")
		}

		// Mark the source ready after receiving at least one update from the
		// source. Once all the sources are marked ready, various cleanup
		// routines will start reclaiming resources. It is important that this
		// takes place only after kubelet calls the update handler to process
		// the update to ensure the internal pod cache is up-to-date.
		kl.sourcesReady.AddSource(u.Source)
	case e := <-plegCh:
		if isSyncPodWorthy(e) {
			// PLEG event for a pod; sync it.
			if pod, ok := kl.podManager.GetPodByUID(e.ID); ok {
				glog.V(2).Infof("SyncLoop (PLEG): %q, event: %#v", format.Pod(pod), e)
				handler.HandlePodSyncs([]*v1.Pod{pod})
			} else {
				// If the pod no longer exists, ignore the event.
				glog.V(4).Infof("SyncLoop (PLEG): ignore irrelevant event: %#v", e)
			}
		}

		if e.Type == pleg.ContainerDied {
			if containerID, ok := e.Data.(string); ok {
				kl.cleanUpContainersInPod(e.ID, containerID)
			}
		}
	case <-syncCh:
		// Sync pods waiting for sync
		podsToSync := kl.getPodsToSync()
		if len(podsToSync) == 0 {
			break
		}
		glog.V(4).Infof("SyncLoop (SYNC): %d pods; %s", len(podsToSync), format.Pods(podsToSync))
		kl.HandlePodSyncs(podsToSync)
	case update := <-kl.livenessManager.Updates():
		if update.Result == proberesults.Failure {
			// The liveness manager detected a failure; sync the pod.

			// We should not use the pod from livenessManager, because it is never updated after
			// initialization.
			pod, ok := kl.podManager.GetPodByUID(update.PodUID)
			if !ok {
				// If the pod no longer exists, ignore the update.
				glog.V(4).Infof("SyncLoop (container unhealthy): ignore irrelevant update: %#v", update)
				break
			}
			glog.V(1).Infof("SyncLoop (container unhealthy): %q", format.Pod(pod))
			handler.HandlePodSyncs([]*v1.Pod{pod})
		}
	case <-housekeepingCh:
		if !kl.sourcesReady.AllReady() {
			// If the sources aren't ready or volume manager has not yet synced the states,
			// skip housekeeping, as we may accidentally delete pods from unready sources.
			glog.V(4).Infof("SyncLoop (housekeeping, skipped): sources aren't ready yet.")
		} else {
			glog.V(4).Infof("SyncLoop (housekeeping)")
			if err := handler.HandlePodCleanups(); err != nil {
				glog.Errorf("Failed cleaning pods: %v", err)
			}
		}
	}
	kl.syncLoopMonitor.Store(kl.clock.Now())
	return true
}
```
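The long comment in syncLoopIteration calls out a Go detail worth internalizing: when several channels are ready, a select picks among the ready cases pseudorandomly, not top-to-bottom. A small self-contained demo (not kubelet code) makes that observable:

```go
package main

import "fmt"

// countSelects runs n selects over two channels that are both made ready
// before every iteration, and counts how often each case wins. Go chooses
// uniformly at random among ready cases, so for any sizable n both counts
// come out non-zero; you cannot rely on the first case winning.
func countSelects(n int) (a, b int) {
	ch1 := make(chan int, 1)
	ch2 := make(chan int, 1)
	for i := 0; i < n; i++ {
		ch1 <- 1 // make both channels ready before the select
		ch2 <- 2
		select {
		case <-ch1:
			a++
			<-ch2 // drain the other channel for the next iteration
		case <-ch2:
			b++
			<-ch1
		}
	}
	return a, b
}

func main() {
	a, b := countSelects(10000)
	fmt.Printf("ch1 chosen %d times, ch2 chosen %d times\n", a, b)
}
```

This is why configCh, plegCh, syncCh, housekeepingCh, and the liveness updates channel are all handled as independent, order-free events in the loop above.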
Notes:
HandlePodSyncs, HandlePodUpdates, and HandlePodAdditions all eventually invoke dispatchWork, which dispatches pods to podWorkers for asynchronous pod sync.
HandlePodRemoves calls the interfaces below to remove the pod from the cache, kill the pod's processes, and stop the pod's probe workers; the pod's containers are then cleaned up via cleanUpContainersInPod once the pod's PLEG event is captured.

pkg/kubelet/kubelet.go:1994

```go
kl.podManager.DeletePod(pod)
kl.deletePod(pod)
kl.probeManager.RemovePod(pod)
```
In HandlePodReconcile, if the pod is Failed due to eviction, kl.containerDeletor.deleteContainersInPod is called to remove the pod's containers.
Kubelet.dispatchWork eventually invokes podWorkers.managePodLoop; podWorkers then calls the syncPodFn registered during NewMainKubelet, namely (kl *Kubelet) syncPod(o syncPodOptions).

Kubelet.syncPod branches by runtime type; considering only the docker runtime, it invokes DockerManager.SyncPod.

DockerManager.SyncPod calls dm.network.SetUpPod, which branches by network plugin type; considering only the CNI plugin, it invokes cniNetworkPlugin.SetUpPod to set up the pod's network.

cniNetworkPlugin.SetUpPod invokes cniNetwork.addToNetwork, which ultimately calls CNIConfig.AddNetwork, the AddNetwork method of the libcni interface shown above.

CNIConfig.AddNetwork uses the wrapped execPlugin helper to have the operating system execute the CNI plugin binary, which completes the pod's network setup.
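Note that the kubelet never links against the plugin: execPlugin runs it as an external binary, writing the network config JSON to the plugin's stdin, passing per-invocation parameters as CNI_* environment variables, and reading the result JSON from stdout. A hedged sketch of that exec protocol follows; "cat" stands in for a real plugin binary (such as /opt/cni/bin/bridge) so the "result" is simply the config echoed back, and the CNI_NETNS path is made up for illustration.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// execPluginAdd mimics the shape of the CNI exec protocol for ADD: the
// network config JSON goes to the plugin on stdin, the per-invocation
// parameters travel as CNI_* environment variables, and the plugin's
// stdout is the result JSON.
func execPluginAdd(pluginPath, netconfJSON, containerID string) (string, error) {
	cmd := exec.Command(pluginPath)
	cmd.Stdin = strings.NewReader(netconfJSON)
	cmd.Env = append(os.Environ(),
		"CNI_COMMAND=ADD",
		"CNI_CONTAINERID="+containerID,
		"CNI_NETNS=/proc/1234/ns/net", // illustrative netns path
		"CNI_IFNAME=eth0",
		"CNI_PATH=/opt/cni/bin",
	)
	out, err := cmd.Output() // captures the plugin's stdout
	return string(out), err
}

func main() {
	conf := `{"cniVersion":"0.3.1","name":"mynet","type":"bridge"}`
	result, err := execPluginAdd("cat", conf, "sandbox-1")
	if err != nil {
		panic(err)
	}
	fmt.Println("plugin result:", result)
}
```

Because the contract is just "binary + env vars + JSON over stdin/stdout", any plugin in any language works, which is the point of CNI's design.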
On the teardown side, both cleanup paths (cleanUpContainersInPod and deleteContainersInPod) invoke podContainerDeleter.deleteContainerInPod to clean up containers.

For docker, deleteContainerInPod calls DockerManager.deleteContainer.

During deleteContainer, containerGC.netContainerCleanup is invoked to clean up the container's network environment.

PluginManager.TearDownPod then calls cniNetworkPlugin.TearDownPod, which executes cniNetwork.deleteFromNetwork.

cniNetwork.deleteFromNetwork calls CNIConfig.DelNetwork, the DelNetwork method of the libcni interface.

CNIConfig.DelNetwork uses the wrapped execPlugin helper to have the operating system execute the CNI plugin binary, which completes the pod's network cleanup.
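The teardown invocation uses the same exec protocol as setup, just with CNI_COMMAND=DEL, and per the CNI spec a successful DEL produces no result JSON. A sketch, with a shell one-liner standing in for a real plugin binary (real plugins branch on CNI_COMMAND the same way):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// execPluginDel mimics the CNI exec protocol for DEL. The stand-in
// "plugin" is a shell script that exits 0 for DEL and fails otherwise;
// a real plugin would tear down the container's interfaces here. On
// success, DEL writes nothing to stdout.
func execPluginDel(containerID string) (string, error) {
	script := `[ "$CNI_COMMAND" = "DEL" ] && exit 0; echo "unsupported command" >&2; exit 1`
	cmd := exec.Command("sh", "-c", script)
	cmd.Env = append(os.Environ(),
		"CNI_COMMAND=DEL",
		"CNI_CONTAINERID="+containerID,
		"CNI_NETNS=/proc/1234/ns/net", // illustrative netns path
		"CNI_IFNAME=eth0",
	)
	out, err := cmd.Output()
	return string(out), err
}

func main() {
	out, err := execPluginDel("sandbox-1")
	fmt.Printf("err=%v output=%q\n", err, out) // prints: err=<nil> output=""
}
```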
That covers the full path of Kubernetes/CNI integration: kubelet's sync loop drives pod setup and teardown, which funnel through libcni's AddNetwork and DelNetwork into an exec of the plugin binary. Tracing these code paths in the kubernetes and containernetworking/cni repositories yourself is a good next step.