AWS Lambda Tutorial
AWS LAMBDA Tutorial ............................................ 30
Role creation in AWS Console ................................... 125
IAM Role ....................................................... 217
Create IAM role for permission ................................. …
0 credits | 393 pages | 13.45 MB | 1 year ago
Apache Karaf Container 4.x - Documentation
5.14.4. Available realm and login modules
5.14.5. Encryption service
5.14.6. Role discovery policies
5.14.7. Default role policies
5.15. Troubleshooting, Debugging, Profiling, and Monitoring
5.15.1. …
Security: Apache Karaf provides a complete security framework (based on JAAS) and an RBAC (Role-Based Access Control) mechanism for console and JMX access.
Instances: multiple instances of Apache …
Related commands: feature:stop, jaas:group-create, jaas:group-add, jaas:group-delete, jaas:group-list, jaas:group-role-add, jaas:group-role-delete, jaas:su, jaas:sudo, shell:edit, shell:env, shell:less, shell:stack-traces-print
0 credits | 370 pages | 1.03 MB | 1 year ago
Rancher Hardening Guide (Rancher v2.1.x)
…encryption configuration file on each of the RKE nodes that will be provisioned with the controlplane role.
Rationale: this configuration file ensures that the Rancher RKE cluster encrypts secrets at rest.
… base64 -i -
touch /etc/kubernetes/encryption.yaml
Set the file ownership to root:root and the permissions to 0600:
chown root:root /etc/kubernetes/encryption.yaml
chmod 0600 /etc/kubernetes/encryption…
On nodes with the controlplane role, generate an empty configuration file:
touch /etc/kubernetes/audit.yaml
Set the file ownership to root:root and the permissions to 0600:
chown root:root /etc/kubernetes/audit…
0 credits | 24 pages | 336.27 KB | 1 year ago
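The v2.1.x snippet above truncates the key-generation pipeline ("… base64 -i -") that feeds encryption.yaml. Below is a minimal sketch of that step, assuming GNU coreutils and that the aescbc provider expects a base64-encoded 32-byte key; the guide's exact pipeline is cut off in the snippet, so this is illustrative rather than the guide's literal command.

```shell
# Illustrative only: generate a random 32-byte key and base64-encode it,
# as an aescbc provider entry in /etc/kubernetes/encryption.yaml would use.
key="$(head -c 32 /dev/urandom | base64)"
echo "aescbc secret: ${key}"

# The guide then locks the file down (these commands appear in the snippet):
#   touch /etc/kubernetes/encryption.yaml
#   chown root:root /etc/kubernetes/encryption.yaml
#   chmod 0600 /etc/kubernetes/encryption.yaml
```

Decoding the value back should yield exactly 32 bytes, a valid AES-256 key length.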
Apache Karaf 3.0.5 Guides
Security: Apache Karaf provides a complete security framework (based on JAAS) and an RBAC (Role-Based Access Control) mechanism for console and JMX.
Instances: multiple instances of Apache …
…OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
INTEGRATION IN THE OPERATING SYSTEM: THE SERVICE WRAPPER (p. 34)
ProtectionDomain  ProtectionDomain  null
null
java.security.Permissions@6521c24e (
  ("java.security.AllPermission" "<all permissions>" "<all actions>")
)
Signers  null
USING THE CONSOLE (p. 46)
0 credits | 203 pages | 534.36 KB | 1 year ago
Rancher Hardening Guide v2.3.5
…installing RKE. The uid and gid for the etcd user will be used in the RKE config.yml to set the proper permissions for files and directories during installation.
Create etcd user and group. To create the …
…file called account_update.sh. Be sure to chmod +x account_update.sh so the script has execute permissions.
#!/bin/bash -e
for namespace in $(kubectl get namespaces -A -o json | jq -r '.items[].metadata…
…to_all_ns.sh. Be sure to chmod +x apply_networkPolicy_to_all_ns.sh so the script has execute permissions.
#!/bin/bash -e
for namespace in $(kubectl get namespaces -A -o json | jq -r '.items[].metadata…
0 credits | 21 pages | 191.56 KB | 1 year ago
Rancher Hardening Guide v2.4
…installing RKE. The uid and gid for the etcd user will be used in the RKE config.yml to set the proper permissions for files and directories during installation.
Create etcd user and group. To create the …
…file called account_update.sh. Be sure to chmod +x account_update.sh so the script has execute permissions.
#!/bin/bash -e
for namespace in $(kubectl get namespaces -A -o json | jq -r '.items[].metadata…
…to_all_ns.sh. Be sure to chmod +x apply_networkPolicy_to_all_ns.sh so the script has execute permissions.
#!/bin/bash -e
for namespace in $(kubectl get namespaces -A -o json | jq -r '.items[].metadata…
0 credits | 22 pages | 197.27 KB | 1 year ago
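Both hardening-guide snippets above show the same per-namespace loop from account_update.sh, cut off mid-selector. The sketch below reproduces its control flow with the namespace list stubbed out so it runs without a cluster; the jq selector completion (.metadata.name) and the patch command are assumptions inferred from the loop's purpose, not the guide's verbatim text.

```shell
#!/bin/sh
# Stand-in for: kubectl get namespaces -A -o json | jq -r '.items[].metadata.name'
# (assumed completion of the truncated selector; a real run needs a live cluster)
namespaces="default kube-system cattle-system"

for ns in $namespaces; do
  # Call the guide's script presumably makes per namespace (requires kubectl):
  #   kubectl patch serviceaccount default -n "$ns" -p "$(cat account_update.yaml)"
  echo "would patch default ServiceAccount in namespace: $ns"
done
```

Stubbing the cluster call keeps the loop testable while leaving the real kubectl invocation visible as a comment.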
CIS 1.6 Benchmark - Self-Assessment Guide - Rancher v2.5.4
v1.18 Controls
1.1 Etcd Node Configuration Files
1.1.11 Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated)
1.1.12 Ensure that the etcd data directory ownership …
… Ensure that the Kubernetes PKI certificate file permissions are set to 644 or more restrictive (Automated)
1.1.21 Ensure that the Kubernetes PKI key file permissions are set to 600 (Automated)
1.1.1 Ensure that the API server pod specification file permissions are set to 644 or more restrictive (Automated)
1.1.2 Ensure that the API server pod specification file ownership is set to root:root (Automated)
0 credits | 132 pages | 1.12 MB | 1 year ago
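Each file-permission control above boils down to comparing a file's octal mode against a ceiling. Here is a self-contained sketch of such a check, assuming GNU stat (stat -c %a) and the common audit shorthand of comparing the octal mode numerically, which is adequate for the 600/644/700 modes these controls use.

```shell
# Demonstrate the check on a throwaway file instead of a real
# /etc/kubernetes path, so it can run anywhere.
f="$(mktemp)"
chmod 600 "$f"

perms="$(stat -c %a "$f")"   # e.g. 600
if [ "$perms" -le 644 ]; then
  echo "PASS: mode $perms is 644 or more restrictive"
else
  echo "FAIL: mode $perms is less restrictive than 644"
fi
rm -f "$f"
```

For a real audit, point the check at the benchmark's target path (for example the API server pod specification file) instead of a temp file.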
Rancher CIS Kubernetes v1.4.0 Benchmark Self-Assessment
…ority argument is set as appropriate (Scored)
1.4.11 - Ensure that the etcd data directory permissions are set to 700 or more restrictive (Scored)
1.4.12 - Ensure that the etcd data directory ownership …
…kube-apiserver and kubelet.
Mitigation: make sure nodes with role:controlplane are on the same local network as your nodes with role:worker. Use network ACLs to restrict connections to the kubelet.
Result: Pass
1.4 - Configuration Files
1.4.1 - Ensure that the API server pod specification file permissions are set to 644 or more restrictive (Scored)
Notes: RKE doesn't require or maintain a configuration …
0 credits | 47 pages | 302.56 KB | 1 year ago
OpenShift Container Platform 4.14 - Operators
…the provisioner definition deployed by this bundle.
Example BundleDeployment object configured to work with a plain provisioner:
├── cluster_role.yaml
├── cluster_role_binding.yaml
└── deployment.yaml
apiVersion: core.rukpak.io/v1alpha1
kind: BundleDeployment
…
resolving: my-operator.v1.0.1
resource:
  group: rbac.authorization.k8s.io
  kind: Role
manifest: >-
  …
name: my-operator.v1.0.1-my-operator-6d7cbc6f57
sourceName: …
Labels to match:
…-admin → olm.opgroup.permissions/aggregate-to-admin: …
…-edit → olm.opgroup.permissions/aggregate-to-edit: …
0 credits | 423 pages | 4.26 MB | 1 year ago
Apache Kyuubi 1.3.0 Documentation
…[https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html], with permissions.
Ease of Use: you only need to be familiar with Structured Query Language (SQL) and Java Database Connectivity (JDBC) …
…release package you downloaded or built already contains the remaining prerequisites.
Component | Role | Optional | Version | Remarks
Java | Java Runtime Environment | Required | 1.8 | Kyuubi is pre-built with Java …
…and process the datasets. You can interact with any Spark-compatible version of HDFS.
Component | Role | Optional | Version | Remarks
Hive Metastore | … | Optional | referenced by Spark | Hive Metastore for Spark SQL
0 credits | 199 pages | 4.42 MB | 1 year ago
- 1
 - 2
 - 3
 - 4
 - 5
 - 6
 - 29
 













