Cilium v1.6 Documentation (734 pages, 11.45 MB, 1 year ago)
    … CREATED  MIN SIZE  MAX SIZE  DESIRED CAPACITY  INSTANCE TYPE  IMAGE ID
    test-cluster  ng-25560078  2019-07-23T06:05:35Z  0  2  0  …
    … Values=hvm" "Name=name,Values=CoreOS-stable*" --query 'sort_by(Images,&CreationDate)[-1].{id:ImageLocation}'
    { "id": "595879546273/CoreOS-stable-1745.5.0-hvm" }
    Creating a Cluster. Note that you will … infrastructure. Configure AWS credentials: export the variables for your AWS credentials:
        export AWS_ACCESS_KEY_ID="www"
        export AWS_SECRET_ACCESS_KEY="xxx"
        export AWS_SSH_KEY_NAME="yyy"
        export AWS_DEFAULT_REGION="zzz"

Cilium v1.5 Documentation (740 pages, 12.52 MB, 1 year ago)
    … command below.
        aws ec2 describe-images --region=us-west-2 --owner=595879546273 --filters …
        { "id": "595879546273/CoreOS-stable-1745.5.0-hvm" }
    Creating a Cluster. Note that you will need to specify … infrastructure. Configure AWS credentials: export the variables for your AWS credentials:
        export AWS_ACCESS_KEY_ID="www"
        export AWS_SECRET_ACCESS_KEY="xxx"
        export AWS_SSH_KEY_NAME="yyy"
        export AWS_DEFAULT_REGION="zzz"
    … 'cc_door_client' with the name of the gRPC method to call, and any parameters (in this case, the door-id):
        $ kubectl exec terminal-87 -- python3 /cloudcity/cc_door_client.py GetName
        Door name is: Spaceport

Cilium v1.7 Documentation (885 pages, 12.41 MB, 1 year ago)
    … from Install Cilium. However, we're enabling managed etcd and setting both cluster-name and cluster-id for each cluster. Make sure the context is set to the kind-cluster2 cluster:
        kubectl config use-context kind-cluster2
        … global.identityAllocationMode=kvstore \
          --set global.cluster.name=cluster2 \
          --set global.cluster.id=2
    Change the kubectl context to the kind-cluster1 cluster:
        kubectl config use-context kind-cluster1
        … global.identityAllocationMode=kvstore \
          --set global.cluster.name=cluster1 \
          --set global.cluster.id=1
    Setting up Cluster Mesh. We can complete setup by following the Cluster Mesh guide with "Expose the …"

Cilium v1.10 Documentation (1307 pages, 19.26 MB, 1 year ago)
    … installation with minimal privileges over the AKS node resource group:
        AZURE_SUBSCRIPTION_ID=$(az account show --query "id" --output tsv)
        AZURE_NODE_RESOURCE_GROUP=$(az aks show --resource-group ${RESOURCE_GROUP} …
        … ${AZURE_SUBSCRIPTION_ID}/resourceGroups/${AZURE_NODE_RESOURCE_GROUP} --role Contributor --output json --only-show-errors)
        AZURE_TENANT_ID=$(echo ${AZURE_SERVICE_PRINCIPAL} | jq -r '.tenant')
        AZURE_CLIENT_ID=$(echo …
    … GROUP \
        --set azure.subscriptionID=$AZURE_SUBSCRIPTION_ID \
        --set azure.tenantID=$AZURE_TENANT_ID \
        --set azure.clientID=$AZURE_CLIENT_ID \
        --set azure.clientSecret=$AZURE_CLIENT_SECRET \
        --set …

Cilium v1.9 Documentation (1263 pages, 18.62 MB, 1 year ago)
    … from Install Cilium. However, we're enabling managed etcd and setting both cluster-name and cluster-id for each cluster. Make sure the context is set to the kind-cluster2 cluster:
        kubectl config use-context kind-cluster2
        … managed=true \
          --set identityAllocationMode=kvstore \
          --set cluster.name=cluster2 \
          --set cluster.id=2
    Change the kubectl context to the kind-cluster1 cluster:
        kubectl config use-context kind-cluster1
        … managed=true \
          --set identityAllocationMode=kvstore \
          --set cluster.name=cluster1 \
          --set cluster.id=1
    Setting up Cluster Mesh. We can complete setup by following the Cluster Mesh guide with "Expose the …"

Cilium v1.8 Documentation (1124 pages, 21.33 MB, 1 year ago)
    … from Install Cilium. However, we're enabling managed etcd and setting both cluster-name and cluster-id for each cluster. Make sure the context is set to the kind-cluster2 cluster:
        kubectl config use-context kind-cluster2
        … global.identityAllocationMode=kvstore \
          --set global.cluster.name=cluster2 \
          --set global.cluster.id=2
    Change the kubectl context to the kind-cluster1 cluster:
        kubectl config use-context kind-cluster1
        … global.identityAllocationMode=kvstore \
          --set global.cluster.name=cluster1 \
          --set global.cluster.id=1
    Setting up Cluster Mesh. We can complete setup by following the Cluster Mesh guide with "Expose the …"

Cilium v1.11 Documentation (1373 pages, 19.37 MB, 1 year ago)
    … installation with minimal privileges over the AKS node resource group:
        AZURE_SUBSCRIPTION_ID=$(az account show --query "id" --output tsv)
        AZURE_NODE_RESOURCE_GROUP=$(az aks show --resource-group ${RESOURCE_GROUP} …
        … ${AZURE_SUBSCRIPTION_ID}/resourceGroups/${AZURE_NODE_RESOURCE_GROUP} --role Contributor --output json --only-show-errors)
        AZURE_TENANT_ID=$(echo ${AZURE_SERVICE_PRINCIPAL} | jq -r '.tenant')
        AZURE_CLIENT_ID=$(echo …
    … GROUP \
        --set azure.subscriptionID=$AZURE_SUBSCRIPTION_ID \
        --set azure.tenantID=$AZURE_TENANT_ID \
        --set azure.clientID=$AZURE_CLIENT_ID \
        --set azure.clientSecret=$AZURE_CLIENT_SECRET \
        --set …

Steering connections to sockets with BPF socket lookup hook (23 pages, 441.22 KB, 1 year ago)
    … btf_id 32 … build the prog … load & pin the prog … Pin BPF maps used by echo_dispatch:
        # mount -t bpf none ~vagrant/bpffs
        # sudo chown vagrant.vagrant ~vagrant/bpffs
        # bpftool map show id 28
        28: hash …
        # bpftool map pin id 28 ~vagrant/bpffs/echo_ports
        # bpftool map show id 29
        29: sockmap  name echo_socket  flags 0x0
                key 4B  value 8B  max_entries 1  memlock 4096B
        # bpftool map pin id 29 ~vagrant/bpffs/echo_socket

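    The maps pinned above (echo_ports, a hash of served ports, and echo_socket, a one-entry sockmap) are the ones the talk's echo_dispatch program consults; the program itself is not part of the snippet. The following is only a minimal sketch of how an sk_lookup dispatcher over those two maps is typically written, following the pattern of the kernel's sk_lookup selftests rather than the talk's exact code; the echo_ports key/value layout is an assumption.

        /* Sketch only: an sk_lookup program that steers new connections on
         * registered ports to the single socket stored in echo_socket. */
        #include <linux/bpf.h>
        #include <bpf/bpf_helpers.h>

        struct {
                __uint(type, BPF_MAP_TYPE_HASH);
                __uint(max_entries, 1024);
                __type(key, __u16);   /* destination port (layout assumed) */
                __type(value, __u8);  /* presence flag */
        } echo_ports SEC(".maps");

        struct {
                __uint(type, BPF_MAP_TYPE_SOCKMAP);
                __uint(max_entries, 1);
                __type(key, __u32);
                __type(value, __u64); /* socket slot */
        } echo_socket SEC(".maps");

        SEC("sk_lookup")
        int echo_dispatch(struct bpf_sk_lookup *ctx)
        {
                __u16 port = ctx->local_port;
                __u32 zero = 0;
                struct bpf_sock *sk;
                int err;

                /* Ignore connections to ports we do not serve. */
                if (!bpf_map_lookup_elem(&echo_ports, &port))
                        return SK_PASS;

                /* Steer the socket lookup to the pinned echo server socket. */
                sk = bpf_map_lookup_elem(&echo_socket, &zero);
                if (!sk)
                        return SK_DROP;

                err = bpf_sk_assign(ctx, sk, 0);
                bpf_sk_release(sk);
                return err ? SK_DROP : SK_PASS;
        }

        char LICENSE[] SEC("license") = "GPL";

    The program is then attached to a network namespace via a BPF link (for example libbpf's bpf_program__attach_netns), which is presumably what the slide's "load & pin the prog" step covers.
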
Hardware Breakpoint implementation in BCC (8 pages, 2.02 MB, 1 year ago)
    #include <uapi/linux/ptrace.h>

    struct stack_key_t {
        int pid;
        char name[16];
        int user_stack_id;
        int kernel_stack_id;
    };

    BPF_STACK_TRACE(stack_traces, 16384);
    BPF_HASH(counts, struct stack_key_t, uint64_t);

    … 32;
    bpf_get_current_comm(&key.name, sizeof(key.name));
    key.kernel_stack_id = stack_traces.get_stackid(ctx, 0);
    key.user_stack_id = stack_traces.get_stackid(ctx, BPF_F_USER_STACK);
    u64 zero = 0, …

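    The snippet cuts off inside the handler that fills stack_key_t. For orientation, here is a minimal sketch of how such a BCC stack-counting handler is usually completed; the handler name and the truncated statements are assumptions, and the hardware-breakpoint attachment itself (the subject of the talk) is not shown.

        /* Hypothetical handler body around the lines shown in the snippet. */
        int on_hw_breakpoint(struct pt_regs *ctx)
        {
            struct stack_key_t key = {};

            key.pid = bpf_get_current_pid_tgid() >> 32;   /* likely source of the "32;" fragment */
            bpf_get_current_comm(&key.name, sizeof(key.name));
            key.kernel_stack_id = stack_traces.get_stackid(ctx, 0);
            key.user_stack_id = stack_traces.get_stackid(ctx, BPF_F_USER_STACK);

            /* Count hits per (pid, comm, user stack, kernel stack). */
            u64 zero = 0, *count = counts.lookup_or_try_init(&key, &zero);
            if (count)
                (*count)++;
            return 0;
        }
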
eBPF at LINE's Private Cloud (12 pages, 1.05 MB, 1 year ago)
    … skb_csum_hwoffload_help (len: 5764, gso_type: tcpv4)
    Functions the packets have gone through; CPU ID; timestamp; user-defined tracing data (with a Lua script)
    … Use case: multi-tenant HV networking

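    This entry describes a tracer that records, per packet, the kernel functions traversed together with the CPU and a timestamp. Below is a rough libbpf-style sketch of that idea, attached only to the one function named in the snippet; the event layout, map name and the Lua post-processing step are illustrative assumptions, not LINE's actual tool.

        /* Illustrative only: emit (timestamp, cpu, skb->len) whenever
         * skb_csum_hwoffload_help() runs. */
        #include "vmlinux.h"
        #include <bpf/bpf_helpers.h>
        #include <bpf/bpf_tracing.h>
        #include <bpf/bpf_core_read.h>

        struct event {
                __u64 ts_ns;    /* bpf_ktime_get_ns() at the probe */
                __u32 cpu;      /* CPU the packet was processed on */
                __u32 skb_len;  /* matches the "len:" field in the snippet */
        };

        struct {
                __uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
                __uint(key_size, sizeof(__u32));
                __uint(value_size, sizeof(__u32));
        } events SEC(".maps");

        SEC("kprobe/skb_csum_hwoffload_help")
        int BPF_KPROBE(trace_csum_hwoffload, struct sk_buff *skb)
        {
                struct event e = {};

                e.ts_ns = bpf_ktime_get_ns();
                e.cpu = bpf_get_smp_processor_id();
                e.skb_len = BPF_CORE_READ(skb, len);

                /* Userspace (e.g. a Lua or C consumer) correlates these events
                 * to reconstruct each packet's path through the stack. */
                bpf_perf_event_output(ctx, &events, BPF_F_CURRENT_CPU, &e, sizeof(e));
                return 0;
        }

        char LICENSE[] SEC("license") = "GPL";

    Tracing the full path, as in the slide, would mean attaching the same handler to every function of interest and tagging each event with which probe fired.
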
10 results in total













