K8S安装部署开放服务 — excerpt: Traefik IngressRoute definitions exposing cluster dashboards: traefik-dashboard-route (entryPoint web, rule Host(`traefik-dashboard.xxx.com`), backend service traefik); kubernetes-dashboard-tls (entryPoint websecure, rule Host(`k8s-dashboard.xxx.com`), backend service kubernetes-dashboard); ceph-dashboard-tls in namespace rook-ceph (entryPoint websecure, rule Host(`ceph-dashboard.xxx.com`), backend service rook-ceph-mgr-dashboard). (54 pages | 1.23 MB | 1 year ago)
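For reference, the first IngressRoute named in this excerpt can be written out in full. This is a sketch assuming Traefik v2's CRDs are installed; the apiVersion and the backend port (8080, Traefik's default API/dashboard port) are assumptions, not taken from the document.

```yaml
# Sketch of the traefik-dashboard-route IngressRoute from the excerpt.
# apiVersion and the service port are assumptions; adjust to your Traefik release.
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: traefik-dashboard-route
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`traefik-dashboard.xxx.com`)
      kind: Rule
      services:
        - name: traefik
          port: 8080   # assumed dashboard port
```

The two TLS dashboards in the excerpt follow the same shape, with entryPoint websecure instead of web.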
涂小刚-基于k8s的微服务实践 — excerpt: naming conventions keep business applications and Kubernetes objects in sync — the k8s Service name registered in k8s DNS, and the container host/Deployment name, are tied to the business application name through configuration keywords (examples: ai-dc-server, ai-dc-web, ai-dc-api). Flannel provides layer-3 IPv4 networking between cluster nodes: vxlan mode encapsulates packets (MTU 1450) and can build a layer-2 virtual network across hosts on different subnets, while host-gw mode forwards via host routes and requires layer-2 adjacency. At startup, flanneld pushes the Docker subnet configuration; Docker reads it and creates docker0 as the gateway interface. Measured TCP latency (high to low): calico-ipip > flannel_host-gw > calico-bgp > host; bandwidth: host > calico-bgp > flannel_host-gw > flannel-vxlan > calico-ipip (host = bare-metal direct networking). (19 pages | 1.34 MB | 1 year ago)
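The vxlan/host-gw choice described above is selected in flannel's net-conf.json. A minimal sketch — the ConfigMap name and the 10.244.0.0/16 pod CIDR follow the upstream flannel manifests and are assumptions, not values from this document:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-flannel-cfg   # name assumed from upstream flannel manifests
  namespace: kube-system
data:
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "host-gw"
      }
    }
```

Setting "Type" to "vxlan" selects the encapsulating mode (MTU 1450); "host-gw" uses host routes and needs layer-2 adjacency between nodes, matching the trade-off the excerpt describes.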
Kubernetes开源书 - 周立 — excerpt from chapter "03-使用Kubespray部署生产可用的Kubernetes集群(1.11.2)": after generating the inventory, inventory/mycluster/host.ini contains a [k8s-cluster:children] group (kube-master, kube-node) and an [all] group listing node1–node5 with ansible_host values 172.20.0.88 through 172.20.0.92 and matching ip entries (node5's ip value is truncated in the excerpt). A second fragment shows a container image from google_containers/liveness (registry path truncated) with a livenessProbe using httpGet, whose comments note that when "host" is not defined the Pod IP is used, and when "scheme" is not defined HTTP is used. (135 pages | 21.02 MB | 1 year ago)
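The livenessProbe fragment quoted in this entry corresponds to the standard httpGet probe shape. A hedged sketch — the image, path, port and timings are illustrative placeholders, not values from the book:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
    - name: liveness
      image: example.com/liveness:latest   # placeholder; any image serving HTTP health checks
      livenessProbe:
        httpGet:
          path: /healthz
          port: 8080
          # when "host" is not defined, the Pod IP is used
          # when "scheme" is not defined, the HTTP scheme is used
        initialDelaySeconds: 3
        periodSeconds: 3
```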
k8s操作手册 2.3 — excerpt: `ip addr` output showing the loopback interface lo (qdisc noqueue, 127.0.0.1/8 scope host) and ens33 (MTU 1500, qdisc pfifo_fast), the interface the k8s node uses for communication. The apiserver is unavailable while it restarts — "The connection to the server 10.99.1.245:6443 was refused - did you specify the right host or port?" — and recovers after a few minutes. Existing Services whose NodePorts fall outside a newly configured port range keep working; only newly created Services must use ports within the range. Creating an Ingress: first deploy an ingress controller, which forwards client traffic according to rules defined in Ingress resources directly to the backend Pods behind a Service, bypassing the Service itself; Ingress rules match on HTTP host names or URLs. ingress-nginx docs: https://kubernetes.github.io/ingress-nginx/deploy/ (the excerpt's `wget https://raw…` command for the manifest is truncated). (126 pages | 4.33 MB | 1 year ago)
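The host/URL rule mechanism this manual describes maps onto a standard Ingress resource. A minimal sketch — the hostname, Service name and ingress class are hypothetical:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  ingressClassName: nginx          # assumes the ingress-nginx controller from the linked docs
  rules:
    - host: demo.example.com       # host-based rule; hypothetical hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-svc     # hypothetical backend Service
                port:
                  number: 80
```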
Автоматизация управления ClickHouse-кластерами в Kubernetes — excerpt: a ClickHouseInstallation custom resource named "demo-01" whose spec.configuration points at a ZooKeeper node (host: zookeeper-0.zookeepers.zoo1ns) and defines a cluster "demo-01" with a shardsCount layout (value truncated). "You can assemble the desired ClickHouse configuration" is followed by a mangled XML fragment of the generated server config, in which listen_host entries for "::" and "0.0.0.0" are recognizable; the rest is unreadable. (44 pages | 2.24 MB | 1 year ago)
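The ClickHouseInstallation excerpt can be completed into a minimal valid resource for the Altinity clickhouse-operator. The layout values (shardsCount/replicasCount of 1) are assumptions — the excerpt cuts off before them:

```yaml
apiVersion: clickhouse.altinity.com/v1
kind: ClickHouseInstallation
metadata:
  name: demo-01
spec:
  configuration:
    zookeeper:
      nodes:
        - host: zookeeper-0.zookeepers.zoo1ns
    clusters:
      - name: demo-01
        layout:
          shardsCount: 1     # assumed; truncated in the excerpt
          replicasCount: 1   # assumed
```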
QCon北京2018 -《Kubernetes - 面向未来的开发和部署》- Michael Chen — excerpt (slide text): the classic container stack diagram — an OS, apps with their bins/libs, and a container engine on each Docker host — shown first standalone ("What is Kubernetes?"), then with a Kubernetes master managing multiple slave hosts. Key definitions from the slides: Cluster = "desired state management" via the Kubernetes cluster services (with API); Node = a container host running an agent called the Kubelet; Application deployment file = a configuration file of the desired state, listing the containers to run. (42 pages | 10.97 MB | 1 year ago)
QCon北京2017/智能化运维/Self Hosted Infrastructure:以自动运维 Kubernetes 为例 — excerpt: in a self-hosted Kubernetes architecture the core components are deployed as native API objects. Why self-host? Operational expertise in app management on Kubernetes extends to Kubernetes itself (e.g. scaling), and improvements in managing applications translate directly into improvements in managing Kubernetes. It also simplifies node bootstrap — the on-host requirements shrink to a kubelet plus a container runtime (docker, rkt, …), on any distro: install the runtime ($pkgmanager install [docker|rkt]), write the kubeconfig (scp kubeconfig user@host:/etc/kubernetes/kubeconfig), start the kubelet (systemctl start kubelet), then kubectl apply -f kube-apiserver… (truncated). (73 pages | 1.58 MB | 1 year ago)
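The "write kubeconfig" bootstrap step above copies a file shaped roughly like this. A hedged sketch — the server address, certificate paths and names are placeholders, not values from the talk:

```yaml
apiVersion: v1
kind: Config
clusters:
  - name: local
    cluster:
      server: https://192.0.2.10:6443             # placeholder apiserver address
      certificate-authority: /etc/kubernetes/ca.crt
users:
  - name: kubelet
    user:
      client-certificate: /etc/kubernetes/kubelet.crt
      client-key: /etc/kubernetes/kubelet.key
contexts:
  - name: kubelet-context
    context:
      cluster: local
      user: kubelet
current-context: kubelet-context
```

With this file in place, the node needs only the kubelet and a container runtime, as the talk's bootstrap list describes.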
vmware组 Kubernetes on vSphere Deep Dive, KubeCon China, VMware SIG — excerpt: NUMA on a 2-CPU host (the slide's example shows Node 0 with 32 GB and Node 1 with 21 GB) — when Linux first allocates a thread it is assigned a preferred NUMA node, and by default memory allocations come from that node; VM–VM anti-affinity ("Host-VM") rules keep the Kubernetes production master and worker VMs on separate hosts; extending Kubernetes with vSphere HA — an HA cluster requires at least two hypervisor hosts, VMs are health-monitored, and on a host failure its VMs are restarted on alternate hosts; the sentence on hardware health reporting and Proactive HA is truncated. (25 pages | 2.22 MB | 1 year ago)
VMware SIG Deep Dive into Kubernetes Scheduling — excerpt: essentially the same slide material as the preceding VMware entry (NUMA preferred-node allocation on a 2-CPU host, VM–VM anti-affinity rules separating master and worker VMs, and vSphere HA restarting VMs from a failed host on alternate hosts; the Proactive HA sentence is again truncated). (28 pages | 1.85 MB | 1 year ago)
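The VM anti-affinity rules both VMware decks describe have a direct Kubernetes-level analog: pod anti-affinity, which likewise keeps replicas off the same host. A sketch with hypothetical names and labels — this is the in-cluster equivalent, not the vSphere DRS rule syntax itself:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-prod-master        # hypothetical
spec:
  replicas: 3
  selector:
    matchLabels:
      app: k8s-prod-master
  template:
    metadata:
      labels:
        app: k8s-prod-master
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: k8s-prod-master
              topologyKey: kubernetes.io/hostname   # one replica per node
      containers:
        - name: app
          image: example.com/app:1.0   # placeholder
```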
腾讯基于 Kubernetes 的企业级容器云实践 - 罗韩梅 — excerpt (chart residue; axis ticks removed): overlay network performance — TCP_RR roughly 21,007–23,540 req/s and TCP_CRR roughly 7,231–8,368 req/s across the host / vxlan / ipip / gateway schemes; in the IPIP+Gateway hybrid overlay, short-connection IPIP packet rate is 14.1% higher than vxlan and Gateway is 40.5% higher, and the scheme was merged upstream into the Flannel community; underlay results — Bridge is only 6% below bare-metal Host (typical overlays are 20–40% below), and SR-IOV cuts CPU by 38.3% versus Bridge with 6% more packets; plus a P2P-agent image-download comparison across Docker, Docket and GaiaStack. (28 pages | 3.92 MB | 1 year ago)
共 24 条
- 1
- 2
- 3













