Review K8s local and remote auth instructions (cherry pick to stx 9.0)

This change replaces the usage of Service Tokens by OIDC tokens in the
instructions of Kubernetes cluster local and remote access. Some other
changes were made, like the deletion of redundant pages.

Story: 2010738
Task: 49561

Change-Id: Ie8206ecd316efd356a5889899a68f9a9ddbcdfa6
Signed-off-by: Joao Victor Portal <Joao.VictorPortal@windriver.com>
This commit is contained in:
Joao Victor Portal 2024-02-07 09:45:32 -03:00
parent 0aadbc6213
commit 191b184763
28 changed files with 615 additions and 1239 deletions

View File

@ -12,6 +12,13 @@ graphical console of a VM can only be accessed remotely from a workstation with
X Windows (e.g. graphical ubuntu desktop), kubectl, ``virtctl`` and
``virt-viewer`` installed.
.. rubric:: |prereq|
To configure kubectl and helm, you must have configured the **oidc-auth-apps**
|OIDC| Identity Provider (dex) on the target |prod| environment to get
Kubernetes authentication tokens. See :ref:`Set up OIDC Auth Applications
<configure-oidc-auth-applications>` for more information.
.. rubric:: |proc|
Configure kubectl and helm

View File

@ -1429,7 +1429,9 @@ Windows Active Directory
- **Limitation**: The refresh token does not work.
**Workaround**: If the token expires, manually replace the ID token. For
more information, see how to retrieve a token using the browser at
:ref:`Configure Kubernetes Client Access
<configure-kubernetes-client-access>`.
- **Limitation**: TLS error logs are reported in the **oidc-dex** container
on subclouds. These logs should not have any system impact.

View File

@ -111,18 +111,4 @@ For more information on configuring Users, Groups, Authorization, and
- :ref:`Configure Users, Groups, and Authorization <configure-users-groups-and-authorization>`
- :ref:`Configure Kubectl with a Context for the User <configure-kubectl-with-a-context-for-the-user>`
For more information on obtaining the authentication token, see:
.. _centralized-vs-distributed-oidc-auth-setup-ul-wf3-jnl-vlb:
- :ref:`Obtain the Authentication Token Using the oidc-auth Shell Script
<obtain-the-authentication-token-using-the-oidc-auth-shell-script>`
- :ref:`Obtain the Authentication Token Using the Browser
<obtain-the-authentication-token-using-the-browser>`
- :ref:`Configure Kubernetes Client Access <configure-kubernetes-client-access>`

View File

@ -1,34 +0,0 @@
.. jgr1582125251290
.. _configure-kubectl-with-a-context-for-the-user:
=============================================
Configure Kubectl with a Context for the User
=============================================
You can set up the kubectl context for the Windows Active Directory or |LDAP|
server **testuser** to authenticate through the **oidc-auth-apps** |OIDC|
Identity Provider (dex).
.. rubric:: |context|
The steps below show this procedure completed on controller-0. You can also
do so from a remote workstation.
.. rubric:: |proc|
#. Set up a cluster in kubectl if you have not done so already.
.. code-block:: none
~(keystone_admin)]$ kubectl config set-cluster mywrcpcluster --server=https://<oam-floating-ip>:6443
#. Set up a context for **testuser** in this cluster in kubectl.
.. code-block:: none
~(keystone_admin)]$ kubectl config set-context testuser@mywrcpcluster --cluster=mywrcpcluster --user=testuser

View File

@ -0,0 +1,219 @@
.. jgr1582125251291
.. _configure-kubernetes-client-access:
==================================
Configure Kubernetes Client Access
==================================
You can configure Kubernetes access for local and remote clients to
authenticate through a Windows Active Directory or |LDAP| server using the
**oidc-auth-apps** |OIDC| Identity Provider (dex).
.. _configure-kubernetes-local-client-access:
----------------------------------------
Configure Kubernetes Local Client Access
----------------------------------------
.. rubric:: |context|
Use the procedure below to configure Kubernetes access for a user logged in to
the active controller either through SSH or by using the system console.
.. rubric:: |proc|
#. Execute the commands below to create the Kubernetes configuration file for
   the logged-in user. These commands only need to be executed once. They
   create the file ``~/.kube/config``, whose contents refer to the currently
   logged-in user.
.. code-block:: none
~$ kubeconfig-setup
~$ source ~/.profile
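After sourcing the profile, it can be worth a quick sanity check that the configuration file was generated before moving on (an optional check, not part of the procedure):

```shell
# Confirm the kubeconfig file exists and inspect it (credentials are redacted)
~$ ls -l ~/.kube/config
~$ kubectl config view
```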
#. Run the **oidc-auth** script in order to authenticate and update the user
   credentials in the Kubernetes configuration file.
.. code-block:: none
~$ oidc-auth
.. note::
The **oidc-auth** script has the following optional parameters that may
need to be specified:
``-c <OIDC_app_IP>``: The IP address where the OIDC app is running. When
not provided, it defaults to "oamcontroller", which is an alias for the
controller floating |OAM| IP. This parameter is typically needed in two
cases: local client access inside subclouds of a centralized setup, where
**oidc-auth-apps** runs only on the System Controller, and remote client
access.
``-p <password>``: The user password. If it is not provided, the user is
prompted for it. This parameter is essential in non-interactive shells.
``-u <username>``: The user to be authenticated. When not provided, it
defaults to the currently logged-in user. This parameter is usually
needed in remote client access scenarios, where the currently logged-in
user differs from the user to be authenticated.
``-b <backend_ID>``: The backend to use for authentication. It is only
needed if more than one backend is configured in the **oidc-auth-apps**
|OIDC| Identity Provider (Dex).
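Putting these options together, a remote authentication against the System Controller of a centralized |DC| setup might look like the sketch below; the IP, username, and backend ID are illustrative placeholders, not values taken from this procedure:

```shell
# Authenticate "testuser" against the OIDC app on the System Controller,
# selecting one of the configured backends; the password is prompted for.
~$ oidc-auth -c <system_controller_oam_ip> -u testuser -b <backend_ID>
```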
.. _configure-kubernetes-remote-client-access:
-----------------------------------------
Configure Kubernetes Remote Client Access
-----------------------------------------
.. rubric:: |context|
The Kubernetes cluster can be accessed from outside the controller either
through the remote CLI container or directly from the host. Both options are
described below.
.. _configure-kubernetes-remote-client-access-using-container-backed-remote-cli:
Kubernetes Remote Client Access using the Container-backed Remote CLI
=====================================================================
The steps needed to set up the remote Kubernetes access using the
container-backed remote |CLI| are described in :ref:`Configure Container-backed
Remote CLIs and Clients
<security-configure-container-backed-remote-clis-and-clients>` and
:ref:`Use Container-backed Remote CLIs and Clients
<using-container-backed-remote-clis-and-clients>`.
.. _configure-kubernetes-remote-client-access-using-the-host-directly:
Kubernetes Remote Client Access using the Host Directly
=======================================================
.. rubric:: |proc|
#. Install the :command:`kubectl` client CLI on the host. Follow the
instructions on `Install and Set Up kubectl on Linux
<https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/>`__. The
example below can be used for Ubuntu.
.. code-block:: none
% sudo apt-get update
% sudo apt-get install -y apt-transport-https
% curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add
% echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
% sudo apt-get update
% sudo apt-get install -y kubectl
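Once the packages are installed, a quick check that the client is available on the PATH can save debugging later (the exact output varies with the kubectl version):

```shell
% kubectl version --client
```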
#. Optional: Contact your system administrator for the |prod| Kubernetes
   cluster's public root |CA| certificate, and copy it to your system
   as ``k8s-ca.crt``. This step is strongly recommended, but it is still
   possible to connect to the Kubernetes cluster without this certificate.
#. Create an empty Kubernetes configuration file (the default path is
   ``~/.kube/config``). Execute the commands below to update this file, using
   the |OAM| IP address and the Kubernetes |CA| certificate acquired in the
   previous step. If the |OAM| IP is IPv6, enclose the IP in brackets
   (example: "[fd00::a14:803]"). The example below uses the user
   "admin-user"; change it to the name of the user you want to authenticate.
.. code-block:: none
$ MYUSER="admin-user"
$ kubectl config set-cluster wrcpcluster --server=https://<OAM_IP>:6443
$ kubectl config set clusters.wrcpcluster.certificate-authority-data $(base64 -w0 k8s-ca.crt)
$ kubectl config set-context ${MYUSER}@wrcpcluster --cluster=wrcpcluster --user ${MYUSER}
$ kubectl config use-context ${MYUSER}@wrcpcluster
If you don't have the Kubernetes |CA| certificate, execute the following
commands instead.
.. code-block:: none
$ MYUSER="admin-user"
$ kubectl config set-cluster wrcpcluster --server=https://<OAM_IP>:6443 --insecure-skip-tls-verify
$ kubectl config set-context ${MYUSER}@wrcpcluster --cluster=wrcpcluster --user ${MYUSER}
$ kubectl config use-context ${MYUSER}@wrcpcluster
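In the certificate-based variant above, ``base64 -w0`` is used because the kubeconfig ``certificate-authority-data`` field expects the certificate as a single unwrapped base64 line. The flag's effect can be sanity-checked with a throwaway string instead of the real certificate:

```shell
# -w0 disables line wrapping, so the output is one continuous line
printf 'hello' | base64 -w0   # prints: aGVsbG8=
```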
#. Get a Kubernetes authentication token. There are two options: using the
   **oidc-auth** script or using the browser. Both options are described
   below.
To get the token through the **oidc-auth** script, execute the steps below.
#. Install "Python Mechanize" module using the following command:
.. code-block:: none
$ sudo pip install mechanize
#. Install the **oidc-auth** script from |dnload-loc|.
#. Execute the command below to get the token and update it in the
Kubernetes configuration file. If the target environment has multiple
backends configured, you will need to use the parameter
``-b <backend_ID>``. If the target environment is a |DC| system with
a centralized setup, you should use the |OAM| IP of the System
Controller.
.. code-block:: none
$ oidc-auth -u ${MYUSER} -c <OAM_IP>
To get the token through a browser, execute the steps below.
#. Use the following URL to log in to the **oidc-auth-apps** |OIDC| client:
   ``https://<oam-floating-ip-address>:30555``. If the target environment
   is a |DC| system with a centralized setup, use the |OAM| IP of the
   System Controller.
#. If the |prod| **oidc-auth-apps** has been configured for multiple
'**ldap**' connectors, select the Windows Active Directory or the
|LDAP| server for authentication.
#. Enter your Username and Password.
#. Click Login. The ID token and Refresh token are displayed as follows:
.. code-block:: none
ID Token:
eyJhbGciOiJSUzI1NiIsImtpZCI6IjQ4ZjZkYjcxNGI4ODQ5ZjZlNmExM2Y2ZTQzODVhMWE1MjM0YzE1NTQifQ.eyJpc3MiOiJodHRwczovLzEyOC4yMjQuMTUxLjE3MDozMDU1Ni9kZXgiLCJzdWIiOiJDZ2R3ZG5SbGMzUXhFZ1JzWkdGdyIsImF1ZCI6InN0eC1vaWRjLWNsaWVudC1hcHAiLCJleHAiOjE1ODI1NzczMTksImlhdCI6MTU4MjU3NzMwOSwiYXRfaGFzaCI6ImhzRG1kdTFIWGFCcXFNLXBpYWoyaXciLCJlbWFpbCI6InB2dGVzdDEiLCJlbWFpbF92ZXJpZmllZCI6dHJ1ZSwibmFtZSI6InB2dGVzdDEifQ.TEZ-YMd8kavTGCw_FUR4iGQWf16DWsmqxW89ZlKHxaqPzAJUjGnW5NRdRytiDtf1d9iNIxOT6cGSOJI694qiMVcb-nD856OgCvU58o-e3ZkLaLGDbTP2mmoaqqBYW2FDIJNcV0jt-yq5rc9cNQopGtFXbGr6ZV2idysHooa7rA1543EUpg2FNE4qZ297_WXU7x0Qk2yDNRq-ngNQRWkwsERM3INBktwQpRUg2na3eK_jHpC6AMiUxyyMu3o3FurTfvOp3F0eyjSVgLqhC2Rh4xMbK4LgbBTN35pvnMRwOpL7gJPgaZDd0ttC9L5dBnRs9uT-s2g4j2hjV9rh3KciHQ
Access Token:
wcgw4mhddrk7jd24whofclgmj
Claims:
{
"iss": "https://128.224.151.170:30556/dex",
"sub": "CgdwdnRlc3QxEgRsZGFw",
"aud": "stx-oidc-client-app",
"exp": 1582577319,
"iat": 1582577319,
"at_hash": "hsDmdu1HXaBqqM-piaj2iw",
"email": "testuser",
"email_verified": true,
"groups": [
"billingDeptGroup",
"managerGroup"
],
"name": "testuser"
}
Refresh Token:
ChljdmoybDZ0Y3BiYnR0cmp6N2xlejNmd3F5Ehlid290enR5enR1NWw1dWM2Y2V4dnVlcHli
#. Use the ID token to set the Kubernetes credentials in kubectl configs:
.. code-block:: none
$ TOKEN=<ID_token_string>
$ kubectl config set-credentials ${MYUSER} --token ${TOKEN}
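The ID token obtained above is a JWT, so its claims, including the ``exp`` expiry that determines when authentication must be repeated, can be inspected locally. A minimal sketch using a toy token rather than a real one; a real ID token is decoded the same way, with the caveat that JWTs use unpadded base64url, so trailing ``=`` padding may need to be appended before decoding:

```shell
# The second dot-separated field of a JWT holds the base64-encoded claims
TOKEN='header.eyJhIjoxfQ==.signature'
echo "$TOKEN" | cut -d. -f2 | base64 -d   # prints: {"a":1}
```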

View File

@ -41,6 +41,3 @@ either of the above two methods.
:ref:`Install Kubectl and Helm Clients Directly on a Host
<security-install-kubectl-and-helm-clients-directly-on-a-host>`
:ref:`Configure Remote Helm v2 Client
<configure-remote-helm-client-for-non-admin-users>`

View File

@ -1,258 +0,0 @@
.. oiz1581955060428
.. _configure-remote-helm-client-for-non-admin-users:
===============================
Configure Remote Helm v2 Client
===============================
Helm v3 is recommended for users to install and manage their
containerized applications. However, Helm v2 may be required, for example, if
the containerized application supports only a Helm v2 chart.
.. rubric:: |context|
Helm v2 is only supported remotely. Also, it is only supported with kubectl and
Helm v2 clients configured directly on the remote host workstation. In
addition to installing the Helm v2 clients, users must also create their own
Tiller server, in a namespace that the user has access, with the required |RBAC|
capabilities and optionally |TLS| protection.
Complete the following steps to configure Helm v2 for managing containerized
applications with a Helm v2 chart.
.. rubric:: |proc|
.. _configure-remote-helm-client-for-non-admin-users-steps-isx-dsd-tkb:
#. On the controller, create an admin-user service account if this is not
already available.
#. Create the **admin-user** service account in **kube-system**
namespace and bind the **cluster-admin** ClusterRoleBinding to this user.
.. code-block:: none
% cat <<EOF > admin-login.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: admin-user
namespace: kube-system
---
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
name: admin-user-sa-token
namespace: kube-system
annotations:
kubernetes.io/service-account.name: admin-user
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: admin-user
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: admin-user
namespace: kube-system
EOF
% kubectl apply -f admin-login.yaml
#. Retrieve the secret token.
.. code-block:: none
% kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
#. On the workstation, if it is not already available, install the
   :command:`kubectl` client by taking the following actions on the remote
   Ubuntu system.
#. Install the :command:`kubectl` client CLI.
.. code-block:: none
% sudo apt-get update
% sudo apt-get install -y apt-transport-https
% curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | \
sudo apt-key add
% echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | \
sudo tee -a /etc/apt/sources.list.d/kubernetes.list
% sudo apt-get update
% sudo apt-get install -y kubectl
#. Set up the local configuration and context.
.. note::
In order for your remote host to trust the certificate used by
the |prod-long| K8S API, you must ensure that the
**k8s_root_ca_cert** specified at install time is a trusted
CA certificate by your host. Follow the instructions for adding
a trusted CA certificate for the operating system distribution
of your particular host.
If you did not specify a **k8s_root_ca_cert** at install
time, then specify ``--insecure-skip-tls-verify``, as shown below.
.. code-block:: none
% kubectl config set-cluster mycluster --server=https://<oam-floating-IP>:6443 \
--insecure-skip-tls-verify
% kubectl config set-credentials admin-user@mycluster --token=$TOKEN_DATA
% kubectl config set-context admin-user@mycluster --cluster=mycluster \
--user admin-user@mycluster --namespace=default
% kubectl config use-context admin-user@mycluster
``$TOKEN_DATA`` is the token retrieved in step 1.
#. Test remote :command:`kubectl` access.
.. code-block:: none
% kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
compute-0 Ready <none> 9d v1.24.4 192.168.204.69 <none> Debian GNU/Linux 11 (bullseye) 5.10.0-6-amd64 containerd://1.4.12
compute-1 Ready <none> 9d v1.24.4 192.168.204.7 <none> Debian GNU/Linux 11 (bullseye) 5.10.0-6-amd64 containerd://1.4.12
controller-0 Ready control-plane,master 9d v1.24.4 192.168.204.3 <none> Debian GNU/Linux 11 (bullseye) 5.10.0-6-amd64 containerd://1.4.12
controller-1 Ready control-plane,master 9d v1.24.4 192.168.204.4 <none> Debian GNU/Linux 11 (bullseye) 5.10.0-6-amd64 containerd://1.4.12
%
#. Install the Helm v2 client on the remote workstation.
.. code-block:: none
% wget https://get.helm.sh/helm-v2.13.1-linux-amd64.tar.gz
% tar xvf helm-v2.13.1-linux-amd64.tar.gz
% sudo cp linux-amd64/helm /usr/local/bin
Verify that :command:`helm` is installed correctly.
.. code-block:: none
% helm version
Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
#. On the workstation, set the namespace to which you want Helm v2 access.
.. code-block:: none
~(keystone_admin)]$ NAMESPACE=default
#. On the workstation, set up accounts, roles and bindings for Tiller (Helm v2 cluster access).
#. Execute the following commands.
.. note::
These commands could be run remotely by the non-admin user who
has access to the default namespace.
.. code-block:: none
~(keystone_admin)]$ cat <<EOF > default-tiller-sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: tiller
namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: tiller
namespace: default
rules:
- apiGroups: ["*"]
resources: ["*"]
verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: tiller
namespace: default
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: tiller
subjects:
- kind: ServiceAccount
name: tiller
namespace: default
EOF
~(keystone_admin)]$ kubectl apply -f default-tiller-sa.yaml
#. Execute the following commands as an admin-level user.
.. code-block:: none
~(keystone_admin)]$ kubectl create clusterrole tiller --verb get --resource namespaces
~(keystone_admin)]$ kubectl create clusterrolebinding tiller --clusterrole tiller --serviceaccount ${NAMESPACE}:tiller
#. On the workstation, initialize Helm v2 access with :command:`helm init`
command to start Tiller in the specified NAMESPACE with the specified RBAC
credentials.
.. code-block:: none
~(keystone_admin)]$ helm init --service-account=tiller --tiller-namespace=$NAMESPACE --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | sed 's@ replicas: 1@ replicas: 1\n \ selector: {"matchLabels": {"app": "helm", "name": "tiller"}}@' > helm-init.yaml
~(keystone_admin)]$ kubectl apply -f helm-init.yaml
~(keystone_admin)]$ helm init --client-only --stable-repo-url https://charts.helm.sh/stable
.. note::
Ensure that each of the patterns between single quotes in the above
:command:`sed` commands are on single lines when run from your
command-line interface.
.. note::
Add the following options if you are enabling TLS for this Tiller:
``--tiller-tls``
Enable TLS on Tiller.
``--tiller-tls-cert <certificate_file>``
The public key/certificate for Tiller (signed by ``--tls-ca-cert``).
``--tiller-tls-key <key_file>``
The private key for Tiller.
``--tiller-tls-verify``
Enable authentication of client certificates (i.e. validate
they are signed by ``--tls-ca-cert``).
``--tls-ca-cert <certificate_file>``
The public certificate of the |CA| used for signing Tiller
server and helm client certificates.
.. rubric:: |result|
You can now use the private Tiller server remotely by specifying
the ``--tiller-namespace`` default option on all helm CLI commands. For
example:
.. code-block:: none
helm version --tiller-namespace default
helm install --name wordpress stable/wordpress --tiller-namespace default
.. seealso::
:ref:`Configure Container-backed Remote CLIs and Clients
<security-configure-container-backed-remote-clis-and-clients>`
:ref:`Using Container-backed Remote CLIs and Clients
<using-container-backed-remote-clis-and-clients>`
:ref:`Install Kubectl and Helm Clients Directly on a Host
<security-install-kubectl-and-helm-clients-directly-on-a-host>`

View File

@ -23,8 +23,8 @@ permissions.
Grant Kubernetes permissions through direct role binding
--------------------------------------------------------
#. Create the following deployment file and deploy the file with
:command:`kubectl apply -f` <filename>.
.. code-block:: none
@ -47,8 +47,8 @@ Grant Kubernetes permissions through direct role binding
Grant Kubernetes permissions through groups
-------------------------------------------
#. Create the following deployment file and deploy the file with
:command:`kubectl apply -f` <filename>.
.. code-block:: none

View File

@ -1,81 +0,0 @@
.. ily1578927061566
.. _create-an-admin-type-service-account:
====================================
Create an Admin Type Service Account
====================================
An admin type user typically has full permissions to cluster-scoped
resources as well as full permissions to all resources scoped to any
namespaces.
.. rubric:: |context|
A cluster-admin ClusterRole is defined by default for such a user. To create
an admin service account with cluster-admin role, use the following procedure:
.. note::
It is recommended that you create and manage service accounts within the
kube-system namespace.
.. rubric:: |proc|
#. Create the user definition.
For example:
.. code-block:: none
% cat <<EOF > admin-user.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: admin-user
namespace: kube-system
---
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
name: admin-user-sa-token
namespace: kube-system
annotations:
kubernetes.io/service-account.name: admin-user
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: admin-user
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: admin-user
namespace: kube-system
EOF
#. Apply the configuration.
.. code-block:: none
% kubectl apply -f admin-user.yaml
..
.. rubric:: |postreq|
.. xbooklink
See |sysconf-doc|: :ref:`Configure Remote CLI Access
<configure-remote-cli-access>` for details on how to setup remote CLI
access using tools such as :command:`kubectl` and :command:`helm` for a
service account such as this.
.. note::
|prod| can also use user accounts defined in an external Windows Active
Directory to authenticate Kubernetes API, :command:`kubectl` CLI or the
Kubernetes Dashboard. For more information, see :ref:`Configure OIDC
Auth Applications <configure-oidc-auth-applications>`.

View File

@ -17,7 +17,6 @@ System Accounts
types-of-system-accounts
overview-of-system-accounts
kube-service-account
keystone-accounts
remote-windows-active-directory-accounts
starlingx-system-accounts-system-account-password-rules
@ -77,23 +76,7 @@ K8S API User Authentication Using LDAP Server
configure-kubernetes-for-oidc-token-validation-after-bootstrapping-the-system
configure-oidc-auth-applications
configure-users-groups-and-authorization
configure-kubectl-with-a-context-for-the-user
Obtain the Authentication Token
*******************************
.. toctree::
:maxdepth: 1
obtain-the-authentication-token-using-the-oidc-auth-shell-script
obtain-the-authentication-token-using-the-browser
Deprovision LDAP Server
***********************
.. toctree::
:maxdepth: 1
configure-kubernetes-client-access
deprovision-ldap-server-authentication
****************

View File

@ -1,15 +0,0 @@
.. lpl1607977081524
.. _kube-service-account:
===========================
Kubernetes Service Accounts
===========================
|prod| uses Kubernetes service accounts and |RBAC| policies for authentication
and authorization of users of the Kubernetes API, |CLI|, and Dashboard.
.. toctree::
:maxdepth: 1
create-an-admin-type-service-account

View File

@ -18,45 +18,16 @@ Linux Accounts <create-ldap-linux-accounts>`.
.. _kubernetes-cli-from-local-ldap-linux-account-login-ul-afg-rcn-ynb:
- You must have a Kubernetes Service Account.
- See :ref:`Creating an Admin Type Service Account
<create-an-admin-type-service-account>` for details on how to create an admin
level service account. For further clarification, ask your 'sysadmin'.
.. rubric:: |context|
It is recommended to use the same username for both your Local |LDAP| user and
your Kubernetes Service Account.
You must configure the **oidc-auth-apps** |OIDC| Identity Provider (dex) to get
Kubernetes authentication tokens. See :ref:`Set up OIDC Auth Applications
<configure-oidc-auth-applications>` for more information.
.. rubric:: |proc|
#. Add your Local |LDAP| user account to the 'root' group in order to get
access to execute :command:`kubectl`.
If you have sudo permissions, run the following command first, and then
re-ssh to your local |LDAP| user account, otherwise the 'sysadmin' will have
to execute this step.
.. code-block:: none
$ sudo usermod -a -G root <ldapusername>
#. Configure :command:`kubectl` access.
.. note::
Your 'sysadmin' should have given you a TOKEN while setting up your
Kubernetes Service Account.
Execute the following commands:
.. code-block:: none
$ kubectl config set-cluster mycluster --server=https://192.168.206.1:6443 --insecure-skip-tls-verify
$ kubectl config set-credentials joe-admin@mycluster --token=$TOKEN
$ kubectl config set-context joe-admin@mycluster --cluster=mycluster --user joe-admin@mycluster
$ kubectl config use-context joe-admin@mycluster
You now have admin access to the |prod| Kubernetes cluster.
#. Assign Kubernetes permissions to the user. See :ref:`Configure Users,
Groups, and Authorization <configure-users-groups-and-authorization>` for
more information.
#. Configure kubectl access. See :ref:`Configure Kubernetes Client Access
   <configure-kubernetes-client-access>` to set up the Kubernetes configuration
   file and get an authentication token.

View File

@ -1,84 +0,0 @@
.. fvd1581384193662
.. _obtain-the-authentication-token-using-the-browser:
=================================================
Obtain the Authentication Token Using the Browser
=================================================
You can obtain the authentication token using the **oidc-auth-apps** |OIDC|
client web interface.
.. rubric:: |context|
Use the following steps to obtain the ID token and refresh token using the
**oidc-auth-apps** |OIDC| client web interface.
.. rubric:: |proc|
#. Use the following URL to log in to the **oidc-auth-apps** |OIDC| client:
   ``https://<oam-floating-ip-address>:30555``
#. If the |prod| **oidc-auth-apps** has been configured for multiple
'**ldap**' connectors, select the Windows Active Directory or the |LDAP|
server for authentication.
#. Enter your Username and Password.
#. Click Login. The ID token and Refresh token are displayed as follows:
.. code-block:: none
ID Token:
eyJhbGciOiJSUzI1NiIsImtpZCI6IjQ4ZjZkYjcxNGI4ODQ5ZjZlNmExM2Y2ZTQzODVhMWE1MjM0YzE1NTQifQ.eyJpc3MiOiJodHRwczovLzEyOC4yMjQuMTUxLjE3MDozMDU1Ni9kZXgiLCJzdWIiOiJDZ2R3ZG5SbGMzUXhFZ1JzWkdGdyIsImF1ZCI6InN0eC1vaWRjLWNsaWVudC1hcHAiLCJleHAiOjE1ODI1NzczMTksImlhdCI6MTU4MjU3NzMwOSwiYXRfaGFzaCI6ImhzRG1kdTFIWGFCcXFNLXBpYWoyaXciLCJlbWFpbCI6InB2dGVzdDEiLCJlbWFpbF92ZXJpZmllZCI6dHJ1ZSwibmFtZSI6InB2dGVzdDEifQ.TEZ-YMd8kavTGCw_FUR4iGQWf16DWsmqxW89ZlKHxaqPzAJUjGnW5NRdRytiDtf1d9iNIxOT6cGSOJI694qiMVcb-nD856OgCvU58o-e3ZkLaLGDbTP2mmoaqqBYW2FDIJNcV0jt-yq5rc9cNQopGtFXbGr6ZV2idysHooa7rA1543EUpg2FNE4qZ297_WXU7x0Qk2yDNRq-ngNQRWkwsERM3INBktwQpRUg2na3eK_jHpC6AMiUxyyMu3o3FurTfvOp3F0eyjSVgLqhC2Rh4xMbK4LgbBTN35pvnMRwOpL7gJPgaZDd0ttC9L5dBnRs9uT-s2g4j2hjV9rh3KciHQ
Access Token:
wcgw4mhddrk7jd24whofclgmj
Claims:
{
"iss": "https://128.224.151.170:30556/dex",
"sub": "CgdwdnRlc3QxEgRsZGFw",
"aud": "stx-oidc-client-app",
"exp": 1582577319,
"iat": 1582577319,
"at_hash": "hsDmdu1HXaBqqM-piaj2iw",
"email": "testuser",
"email_verified": true,
"groups": [
"billingDeptGroup",
"managerGroup"
],
"name": "testuser"
}
Refresh Token:
ChljdmoybDZ0Y3BiYnR0cmp6N2xlejNmd3F5Ehlid290enR5enR1NWw1dWM2Y2V4dnVlcHli
#. Use the ID token to set the Kubernetes credentials in kubectl configs:
.. code-block:: none
~(keystone_admin)]$ TOKEN=<ID_token_string>
~(keystone_admin)]$ kubectl config set-credentials testuser --token $TOKEN
#. Switch to the Kubernetes context for the user, by using the following
command, for example:
.. code-block:: none
~(keystone_admin)]$ kubectl config use-context testuser@mywrcpcluster
#. Run the following command to test that the authentication token
validates correctly:
.. code-block:: none
~(keystone_admin)]$ kubectl get pods --all-namespaces

View File

@ -1,95 +0,0 @@
.. lrf1583447064969
.. _obtain-the-authentication-token-using-the-oidc-auth-shell-script:
================================================================
Obtain the Authentication Token Using the oidc-auth Shell Script
================================================================
You can obtain the authentication token using the **oidc-auth** shell script.
.. rubric:: |context|
You can use the **oidc-auth** script both locally on the active controller,
as well as on a remote workstation where you are running **kubectl** and
**helm** commands.
The **oidc-auth** script retrieves the ID token from Windows Active
Directory or |LDAP| server using the |OIDC| client, and **dex**, and updates the
Kubernetes credential for the user in the **kubectl** config file.
.. _obtain-the-authentication-token-using-the-oidc-auth-shell-script-ul-kxm-qnf-ykb:
- On controller-0, **oidc-auth** is installed as part of the base |prod|
installation, and ready to use.
- On remote hosts, **oidc-auth** must be installed from |dnload-loc|.
.. xbooklink
- On a remote workstation using remote-cli container, **oidc-auth** is
installed within the remote-cli container, and ready to use. For more
information on configuring remote CLI access, see |sysconf-doc|:
:ref:`Configure Remote CLI Access <configure-remote-cli-access>`.
- On a remote host, when using directly installed **kubectl** and **helm**,
the following setup is required:
- Install "Python Mechanize" module using the following command:
.. code-block:: none
sudo pip2 install mechanize
.. note::
**oidc-auth** script supports authenticating with a |prod|
**oidc-auth-apps** configured with single, or multiple **ldap**
connectors.
.. rubric:: |proc|
#. Run **oidc-auth** script in order to authenticate and update user
credentials in **kubectl** config file with the retrieved token.
- If **oidc-auth-apps** is deployed with a single backend **ldap**
connector, run the following command:
.. code-block:: none
~(keystone_admin)]$ oidc-auth -c <ip> -u <username>
For example,
.. code-block:: none
~(keystone_admin)]$ oidc-auth -c <OAM_ip_address> -u testuser
Password:
Login succeeded.
Updating kubectl config ...
User testuser set.
- If **oidc-auth-apps** is deployed with multiple backend **ldap**
connectors, run the following command:
.. code-block:: none
~(keystone_admin)]$ oidc-auth -b <connector-id> -c <ip> -u <username>
.. note::
If you are running **oidc-auth** within the |prod| containerized remote
CLI, you must use the ``-p <password>`` option to run the command
non-interactively.
When the parameter ``-c <ip>`` is omitted, the hostname
**oamcontroller** is used. This parameter can be omitted when
**oidc-auth** is executed on a |prod| active controller and
**oidc-auth-apps** is running on that controller.
When the parameter ``-u <username>`` is omitted, the Linux username of
the currently logged-in user is used.

View File

@ -24,7 +24,7 @@ A brief description of the system accounts available in a |prod| system.
These are local LDAP accounts that are centrally managed across all hosts
in the cluster. These accounts are intended to provide additional admin
level user accounts (in addition to sysadmin) that can SSH to the nodes
of the |prod|.
of the |prod| and/or access its Kubernetes cluster.
See :ref:`Local LDAP Linux User Accounts <local-ldap-linux-user-accounts>`
and :ref:`Manage Composite Local LDAP Accounts at Scale
@ -251,9 +251,7 @@ with read/write type access to a single private namespace
.. xbooklink
The tiller account of the user's namespace **must** be named
'tiller'. See |sysconf-doc|: :ref:`Configure Remote Helm Client
for Non-Admin Users
<configure-remote-helm-client-for-non-admin-users>`.
'tiller'.
.. code-block:: none
@ -9,4 +9,3 @@ Remote CLI Access
security-configure-container-backed-remote-clis-and-clients
using-container-backed-remote-clis-and-clients
security-install-kubectl-and-helm-clients-directly-on-a-host
configure-remote-helm-client-for-non-admin-users
@ -17,6 +17,11 @@ the remote CLI/client configuration scripts.
.. rubric:: |prereq|
You must have a |WAD| or Local |LDAP| username and password to get the
Kubernetes authentication token, a Keystone username and password to log
into Horizon, the |OAM| IP and, optionally, the Kubernetes |CA| certificate of
the target |prod| environment.
You must have Docker installed on the remote systems you connect from. For
more information on installing Docker, see
`https://docs.docker.com/install/ <https://docs.docker.com/install/>`__.
@ -31,12 +36,10 @@ For Windows remote clients, Docker is only supported on Windows 10.
- Adding the Linux user to the docker group
For more information, see,
`https://docs.docker.com/engine/install/linux-postinstall/
<https://docs.docker.com/engine/install/linux-postinstall/>`__
For Windows remote clients, you must run the following commands from a
Cygwin terminal. See `https://www.cygwin.com/ <https://www.cygwin.com/>`__
for more information about the Cygwin project.
@ -60,276 +63,258 @@ CLIs and Clients for an admin user with cluster-admin clusterrole.
.. _security-configure-container-backed-remote-clis-and-clients-d70e93:
#. On the Controller, configure a Kubernetes service account for users on the
remote client.
#. In the active controller, log in through SSH or local console using
**sysadmin** user and do the actions listed below.
You must configure a Kubernetes service account on the target system
and generate a configuration file based on that service account.
#. Configure Kubernetes permissions for users.
Run the following commands logged in as **sysadmin** on the local console
of the controller.
#. Source the platform environment
.. code-block:: none
$ source /etc/platform/openrc
~(keystone_admin)]$
#. Set environment variables.
You can customize the service account name and the output
configuration file by changing the <USER> and <OUTPUT_FILE>
variables shown in the following examples.
.. code-block:: none
~(keystone_admin)]$ USER="admin-user"
~(keystone_admin)]$ OUTPUT_FILE="admin-kubeconfig"
#. Create an account definition file.
.. code-block:: none
~(keystone_admin)]$ cat <<EOF > admin-login.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: ${USER}
namespace: kube-system
---
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
name: ${USER}-sa-token
namespace: kube-system
annotations:
kubernetes.io/service-account.name: ${USER}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: admin-user
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: ${USER}
namespace: kube-system
EOF
#. Apply the definition.
.. code-block:: none
~(keystone_admin)]$ kubectl apply -f admin-login.yaml
#. Store the token value.
.. code-block:: none
~(keystone_admin)]$ TOKEN_DATA=$(kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep ${USER} | awk '{print $1}') | grep "token:" | awk '{print $2}')
#. Store the |OAM| IP address.
#. .. code-block:: none
~(keystone_admin)]$ OAM_IP=$(system oam-show |grep oam_floating_ip| awk '{print $4}')
#. |AIO-SX| uses <oam_ip> instead of <oam_floating_ip>. The
following shell code ensures that <OAM_IP> is assigned the correct
IP address.
#. Source the platform environment
.. code-block:: none
~(keystone_admin)]$ if [ -z "$OAM_IP" ]; then
OAM_IP=$(system oam-show |grep oam_ip| awk '{print $4}')
fi
$ source /etc/platform/openrc
~(keystone_admin)]$
#. IPv6 addresses must be enclosed in square brackets. The following
shell code does this for you.
#. Create a user rolebinding file. You can customize the name of the
user. Alternatively, to use group rolebinding and user group
membership for authorization, see :ref:`Configure Users, Groups, and
Authorization <configure-users-groups-and-authorization>`
.. code-block:: none
~(keystone_admin)]$ if [[ $OAM_IP =~ .*:.* ]]; then
OAM_IP="[${OAM_IP}]"
fi
~(keystone_admin)]$ MYUSER="admin-user"
~(keystone_admin)]$ cat <<EOF > admin-user-rolebinding.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: ${MYUSER}-rolebinding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: ${MYUSER}
EOF
#. Change the permission to be readable.
#. Apply the rolebinding.
.. code-block:: none
~(keystone_admin)]$ kubectl apply -f admin-user-rolebinding.yaml
#. Note the |OAM| IP address to be used later in the creation of the
kubeconfig file.
.. code-block:: none
~(keystone_admin)]$ touch ${OUTPUT_FILE}
~(keystone_admin)]$ sudo chown sysadmin:sys_protected ${OUTPUT_FILE}
sudo chmod 644 ${OUTPUT_FILE}
~(keystone_admin)]$ system oam-show | grep oam_floating_ip | awk '{print $4}'
#. Generate the admin-kubeconfig file.
Use the command below in |AIO-SX| environments. |AIO-SX| uses <oam_ip>
instead of <oam_floating_ip>.
.. code-block:: none
~(keystone_admin)]$ sudo kubectl config --kubeconfig ${OUTPUT_FILE} set-cluster wrcp-cluster --server=https://${OAM_IP}:6443 --insecure-skip-tls-verify
~(keystone_admin)]$ sudo kubectl config --kubeconfig ${OUTPUT_FILE} set-credentials ${USER} --token=$TOKEN_DATA
~(keystone_admin)]$ sudo kubectl config --kubeconfig ${OUTPUT_FILE} set-context ${USER}@wrcp-cluster --cluster=wrcp-cluster --user ${USER} --namespace=default
~(keystone_admin)]$ sudo kubectl config --kubeconfig ${OUTPUT_FILE} use-context ${USER}@wrcp-cluster
~(keystone_admin)]$ system oam-show | grep oam_ip | awk '{print $4}'
#. Copy the remote client tarball file from the |prod| build servers
to the remote workstation, and extract its content.
- The tarball is available from the StarlingX Public build servers.
- You can extract the tarball's contents anywhere on your client system.
.. parsed-literal::
$ cd $HOME
$ tar xvf |prefix|-remote-clients-<version>.tgz
#. Download the user/tenant openrc file from the Horizon Web interface to the
remote workstation.
#. Log in to Horizon as the user and tenant that you want to configure remote access for.
In this example, the 'admin' user in the 'admin' tenant.
#. Navigate to **Project** \> **API Access** \> **Download Openstack RC file**.
#. Select **Openstack RC file**.
The file admin-openrc.sh downloads.
.. note::
For a Distributed Cloud system, navigate to **Project** \> **Central
Cloud Regions** \> **RegionOne** \> and download the **Openstack RC
file**.
#. If HTTPS has been enabled for the |prod| RESTAPI Endpoints on your
|prod| system, add the following line to the bottom of ``admin-openrc.sh``:
.. code-block:: none
OS_CACERT=<path_to_ca>
where ``<path_to_ca>`` is the full filename of the |PEM| file for the |CA|
Certificate that signed the |prod| REST APIs Endpoint Certificate.
Copy the file ``admin-openrc.sh`` to the remote workstation. This example
assumes it is copied to the location of the extracted tarball.
#. Copy the admin-kubeconfig file to the remote workstation.
You can copy the file to any location on the remote workstation. This
example assumes that it is copied to the location of the extracted tarball.
#. On the remote workstation, configure remote CLI/client access.
This step will also generate a remote CLI/client RC file.
#. Change to the location of the extracted tarball.
.. parsed-literal::
$ cd $HOME/|prefix|-remote-clients-<version>/
#. Create a working directory that will be mounted by the container
implementing the remote |CLIs|.
See the description of the :command:`configure_client.sh` -w option
:ref:`below
<security-configure-container-backed-remote-clis-and-clients>`
for more details.
#. If HTTPS has been enabled for the |prod| RESTAPI Endpoints on your
|prod| system, execute the following commands to create the |CA|
certificate and copy it to the remote workstation.
.. code-block:: none
$ mkdir -p $HOME/remote_cli_wd
~(keystone_admin)]$ kubectl get secret system-local-ca -n cert-manager -o=jsonpath='{.data.ca\.crt}' | base64 --decode > /home/sysadmin/stx.ca.crt
~(keystone_admin)]$ scp /home/sysadmin/stx.ca.crt <remote_workstation_user>@<remote_workstation_IP>:~/stx.ca.crt
#. Optional: copy the Kubernetes |CA| certificate
``/etc/kubernetes/pki/ca.crt`` from the active controller to the remote
workstation. This step is strongly recommended, but it is still possible
to connect to the Kubernetes cluster without this certificate.
#. Run the :command:`configure_client.sh` script.
.. code-block:: none
~(keystone_admin)]$ scp /etc/kubernetes/pki/ca.crt <remote_workstation_user>@<remote_workstation_IP>:~/k8s-ca.crt
#. In the remote workstation, do the actions listed below.
#. Copy the remote client tarball file from the |prod| build servers
to the remote workstation, and extract its content.
- The tarball is available from the StarlingX Public build servers.
- You can extract the tarball's contents anywhere on your client system.
.. parsed-literal::
$ ./configure_client.sh -t platform -r admin-openrc.sh -k admin-kubeconfig -w $HOME/remote_cli_wd -p |registry-url|/starlingx/stx-platformclients:|v_starlingx-stx-platformclients|
$ cd $HOME
$ tar xvf |prefix|-remote-clients-<version>.tgz
If you specify repositories that require authentication, as shown
above, you must first perform a :command:`docker login` to that
repository before using remote |CLIs|. WRS |AWS| ECR credentials or a
|CA| certificate is required.
#. Download the user/tenant openrc file from the Horizon Web interface to
the remote workstation.
The options for configure_client.sh are:
#. Log in to Horizon as the user and tenant that you want to configure
remote access for.
``-t``
The type of client configuration. The options are platform (for
|prod-long| |CLI| and clients) and openstack (for |prod-os| application
|CLI| and clients).
In this example, the 'admin' user in the 'admin' tenant.
The default value is platform.
#. Navigate to **Project** \> **API Access** \> **Download Openstack RC
file**.
``-r``
The user/tenant RC file to use for :command:`openstack` CLI commands.
#. Select **Openstack RC file**.
The default value is admin-openrc.sh.
The file ``admin-openrc.sh`` downloads. Copy this file to the
location of the extracted tarball.
``-k``
The kubernetes configuration file to use for :command:`kubectl` and :command:`helm` CLI commands.
.. note::
The default value is temp-kubeconfig.
For a Distributed Cloud system, navigate to **Project** \> **Central
Cloud Regions** \> **RegionOne** \> and download the **Openstack RC
file**.
``-o``
The remote CLI/client RC file generated by this script.
#. If HTTPS has been enabled for the |prod| RESTAPI Endpoints on your
|prod| system, add the following line to the bottom of
``admin-openrc.sh``:
This RC file needs to be sourced in the shell, to setup required
environment variables and aliases, before running any remote |CLI|
commands.
.. code-block:: none
For the platform client setup, the default is
remote_client_platform.sh. For the openstack application client
setup, the default is remote_client_app.sh.
export OS_CACERT=<path_to_ca>
``-w``
The working directory that will be mounted by the container
implementing the remote |CLIs|. When using the remote |CLIs|, any files
passed as arguments to the remote |CLI| commands need to be in this
directory in order for the container to access the files. The default
value is the directory from which the :command:`configure_client.sh`
command was run.
where ``<path_to_ca>`` is the absolute path of the file ``stx.ca.crt``
acquired in the steps above.
``-p``
Override the container image for the platform |CLI| and clients.
#. Create an empty admin-kubeconfig file on the remote workstation using
the following command.
By default, the platform |CLIs| and clients container image is pulled
from docker.io/starlingx/stx-platformclients.
.. code-block:: none
For example, to use the container images from the WRS AWS ECR:
$ touch admin-kubeconfig
.. parsed-literal::
#. Configure remote CLI/client access.
$ ./configure_client.sh -t platform -r admin-openrc.sh -k admin-kubeconfig -w $HOME/remote_cli_wd -p |registry-url|/starlingx/stx-platformclients:|v_starlingx-stx-platformclients|
This step will also generate a remote CLI/client RC file.
If you specify repositories that require authentication, you must first
perform a :command:`docker login` to that repository before using
remote |CLIs|.
#. Change to the location of the extracted tarball.
``-a``
Override the OpenStack application image.
.. parsed-literal::
By default, the OpenStack |CLIs| and clients container image is pulled
from docker.io/starlingx/stx-openstackclients.
$ cd $HOME/|prefix|-remote-clients-<version>/
The :command:`configure-client.sh` command will generate a
``remote_client_platform.sh`` RC file. This RC file needs to be sourced in
the shell to set up required environment variables and aliases before any
remote CLI commands can be run.
#. Create a working directory that will be mounted by the container
implementing the remote |CLIs|.
#. Copy the file ``remote_client_platform.sh`` to ``$HOME/remote_cli_wd``
See the description of the :command:`configure_client.sh` -w option
below for more details.
.. code-block:: none
$ mkdir -p $HOME/remote_cli_wd
#. Run the :command:`configure_client.sh` script.
.. parsed-literal::
$ ./configure_client.sh -t platform -r admin-openrc.sh -k admin-kubeconfig -w $HOME/remote_cli_wd -p |registry-url|/starlingx/stx-platformclients:|v_starlingx-stx-platformclients|
If you specify repositories that require authentication, as shown
above, you must first perform a :command:`docker login` to that
repository before using remote |CLIs|. WRS |AWS| ECR credentials or
a |CA| certificate is required.
The options for configure_client.sh are:
``-t``
The type of client configuration. The options are platform (for
|prod-long| |CLI| and clients) and openstack (for |prod-os|
application |CLI| and clients).
The default value is platform.
``-r``
The user/tenant RC file to use for :command:`openstack` CLI
commands.
The default value is admin-openrc.sh.
``-k``
The kubernetes configuration file to use for :command:`kubectl`
and :command:`helm` CLI commands.
The default value is temp-kubeconfig.
``-o``
The remote CLI/client RC file generated by this script.
This RC file needs to be sourced in the shell, to setup required
environment variables and aliases, before running any remote
|CLI| commands.
For the platform client setup, the default is
remote_client_platform.sh. For the openstack application client
setup, the default is remote_client_app.sh.
``-w``
The working directory that will be mounted by the container
implementing the remote |CLIs|. When using the remote |CLIs|,
any files passed as arguments to the remote |CLI| commands need
to be in this directory in order for the container to access the
files. The default value is the directory from which the
:command:`configure_client.sh` command was run.
``-p``
Override the container image for the platform |CLI| and clients.
By default, the platform |CLIs| and clients container image is
pulled from docker.io/starlingx/stx-platformclients.
For example, to use the container images from the WRS AWS ECR:
.. parsed-literal::
$ ./configure_client.sh -t platform -r admin-openrc.sh -k admin-kubeconfig -w $HOME/remote_cli_wd -p |registry-url|/starlingx/stx-platformclients:|v_starlingx-stx-platformclients|
If you specify repositories that require authentication, you
must first perform a :command:`docker login` to that repository
before using remote |CLIs|.
``-a``
Override the OpenStack application image.
By default, the OpenStack |CLIs| and clients container image is
pulled from docker.io/starlingx/stx-openstackclients.
The :command:`configure-client.sh` command will generate a
``remote_client_platform.sh`` RC file. This RC file needs to be
sourced in the shell to set up required environment variables and
aliases before any remote CLI commands can be run.
#. Copy the file ``remote_client_platform.sh`` to ``$HOME/remote_cli_wd``
#. Update the contents of the admin-kubeconfig file using the
:command:`kubectl` command from the container. Use the |OAM| IP address
and the Kubernetes |CA| certificate acquired in the steps above. If the
|OAM| IP is IPv6, enclose the IP in brackets (example:
"[fd00::a14:803]").
.. code-block:: none
$ cd $HOME/remote_cli_wd
$ source remote_client_platform.sh
$ kubectl config set-cluster wrcpcluster --server=https://<OAM_IP>:6443
$ kubectl config set clusters.wrcpcluster.certificate-authority-data $(base64 -w0 k8s-ca.crt)
$ kubectl config set-context ${MYUSER}@wrcpcluster --cluster=wrcpcluster --user ${MYUSER}
$ kubectl config use-context ${MYUSER}@wrcpcluster
If you don't have the Kubernetes |CA| certificate, execute the following
commands instead.
.. code-block:: none
$ cd $HOME/remote_cli_wd
$ source remote_client_platform.sh
$ kubectl config set-cluster wrcpcluster --server=https://<OAM_IP>:6443 --insecure-skip-tls-verify
$ kubectl config set-context ${MYUSER}@wrcpcluster --cluster=wrcpcluster --user ${MYUSER}
$ kubectl config use-context ${MYUSER}@wrcpcluster
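An IPv6 |OAM| address must be bracketed before it is embedded in the ``--server`` URL. The following is a minimal sketch of that check, mirroring the bracketing logic used on the controller earlier in this procedure; the address shown is an example value, not a real deployment address.

```shell
# Sketch: bracket an IPv6 OAM address before building the API server URL.
# "fd00::a14:803" is an example value only.
OAM_IP="fd00::a14:803"
case "$OAM_IP" in
    *:*) OAM_IP="[${OAM_IP}]" ;;   # IPv6 addresses contain ':' and need brackets in URLs
esac
SERVER="https://${OAM_IP}:6443"
echo "$SERVER"   # prints https://[fd00::a14:803]:6443
```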
.. rubric:: |postreq|
@ -343,7 +328,8 @@ variables and aliases for the remote |CLI| commands.
that your shells will automatically be initialized with the environment
variables and aliases for the remote |CLI| commands.
See :ref:`Using Container-backed Remote CLIs and Clients <using-container-backed-remote-clis-and-clients>` for details.
See :ref:`Using Container-backed Remote CLIs and Clients
<using-container-backed-remote-clis-and-clients>` for details.
**Related information**
@ -354,7 +340,3 @@ See :ref:`Using Container-backed Remote CLIs and Clients <using-container-backed
:ref:`Install Kubectl and Helm Clients Directly on a Host
<security-install-kubectl-and-helm-clients-directly-on-a-host>`
:ref:`Configure Remote Helm v2 Client
<configure-remote-helm-client-for-non-admin-users>`
@ -24,6 +24,13 @@ If using a non-admin user such as one with only role privileges within a
private namespace, the procedure is the same, however, additional
configuration is required in order to use :command:`helm`.
.. rubric:: |prereq|
You must configure the **oidc-auth-apps** |OIDC| Identity Provider (dex)
on the target |prod| environment to get Kubernetes authentication tokens. See
:ref:`Set up OIDC Auth Applications <configure-oidc-auth-applications>` for more
information.
.. rubric:: |proc|
.. _security-install-kubectl-and-helm-clients-directly-on-a-host-steps-f54-qqd-tkb:
@ -39,7 +46,3 @@ configuration is required in order to use :command:`helm`.
:ref:`Using Container-backed Remote CLIs and Clients
<using-container-backed-remote-clis-and-clients>`
:ref:`Configure Remote Helm v2 Client
<configure-remote-helm-client-for-non-admin-users>`
@ -12,8 +12,6 @@ This Chapter describes the system accounts available in a |prod| system.
- :ref:`Linux User Accounts <overview-of-system-accounts>`
- :ref:`Kubernetes Service Accounts <kube-service-account>`
- :ref:`Keystone Accounts <keystone-accounts>`
- :ref:`Remote Windows Active Directory Accounts <remote-windows-active-directory-accounts>`
@ -14,6 +14,11 @@ variables and aliases for the remote |CLI| commands.
.. _using-container-backed-remote-clis-and-clients-ul-vcd-4rf-14b:
- You must have configured the **oidc-auth-apps** |OIDC| Identity Provider
(dex) on the target |prod| environment to get Kubernetes authentication
tokens. See :ref:`Set up OIDC Auth Applications
<configure-oidc-auth-applications>` for more information.
- Consider adding the following command to your .login or shell rc file, such
that your shells will automatically be initialized with the environment
variables and aliases for the remote |CLI| commands.
@ -23,8 +28,9 @@ variables and aliases for the remote |CLI| commands.
.. code-block:: none
root@myclient:/home/user/remote_cli_wd# source remote_client_platform.sh
Please enter your OpenStack Password for project admin as user admin-user:
- You must have completed the configuration steps described in
- You must complete the configuration steps described in
:ref:`Configuring Container-backed Remote CLIs and Clients
<security-configure-container-backed-remote-clis-and-clients>`
before proceeding.
@ -35,14 +41,23 @@ variables and aliases for the remote |CLI| commands.
.. rubric:: |proc|
- For simple StarlingX :command:`system` |CLI| and Kubernetes
:command:`kubectl` |CLI| commands:
- Before executing :command:`kubectl` |CLI| commands, you must obtain a
Kubernetes authentication token by running the command below. The token is
stored in the ``admin-kubeconfig`` file and is valid for up to 24 hours, so
a new token must be generated regularly. The |OAM| IP below is the IP of
the target |prod| environment.
.. note::
The first usage of a remote |CLI| command will be slow as it requires
that the docker image supporting the remote CLIs/clients be pulled from
the remote registry.
.. code-block:: none
root@myclient:/home/user/remote_cli_wd# oidc-auth -c <OAM_IP> -u ${MYUSER} -p <USER_PASSWORD>
- For :command:`system` |CLI| and Kubernetes :command:`kubectl` |CLI| commands:
.. code-block:: none
root@myclient:/home/user/remote_cli_wd# system host-list
@ -167,7 +182,3 @@ variables and aliases for the remote |CLI| commands.
:ref:`Installing Kubectl and Helm Clients Directly on a Host
<security-install-kubectl-and-helm-clients-directly-on-a-host>`
:ref:`Configure Remote Helm v2 Client
<configure-remote-helm-client-for-non-admin-users>`
@ -1,109 +1,45 @@
.. begin-install-proc
#. On the controller, if an **admin-user** service account is not already available, create one.
#. On the controller, create a rolebinding for the **admin-user** user.
Alternatively, to use group rolebinding and user group membership for
authorization, see :ref:`Configure Users, Groups, and Authorization
<configure-users-groups-and-authorization>` for more information.
#. Create the **admin-user** service account in **kube-system**
namespace and bind the **cluster-admin** ClusterRoleBinding to this user.
.. code-block:: none
.. code-block:: none
% MYUSER="admin-user"
% cat <<EOF > admin-user-rolebinding.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: ${MYUSER}-rolebinding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: ${MYUSER}
EOF
% kubectl apply -f admin-user-rolebinding.yaml
% cat <<EOF > admin-login.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: kubernetes-admin
namespace: kube-system
---
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
name: kubernetes-admin-sa-token
namespace: kube-system
annotations:
kubernetes.io/service-account.name: kubernetes-admin
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kubernetes-admin
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: kubernetes-admin
namespace: kube-system
EOF
% kubectl apply -f admin-login.yaml
#. On the remote workstation, install the :command:`kubectl` client, set up the
Kubernetes configuration and get a token. Follow the steps of section
`Kubernetes Remote Client Access using the Host Directly` at :ref:`Configure
Kubernetes Client Access <configure-kubernetes-client-access>`, then test
the :command:`kubectl` access with the command below.
#. Retrieve the secret token.
.. code-block:: none
.. code-block:: none
~(keystone_admin)]$ TOKEN_DATA=$(kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep kubernetes-admin | awk '{print $1}') | grep "token:" | awk '{print $2}')
#. On a remote workstation, install the :command:`kubectl` client. Go to the
following link: `https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/
<https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/>`__.
#. Install the :command:`kubectl` client CLI (for example, on an Ubuntu host).
.. code-block:: none
% sudo apt-get update
% sudo apt-get install -y apt-transport-https
% curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | \
sudo apt-key add
% echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | \
sudo tee -a /etc/apt/sources.list.d/kubernetes.list
% sudo apt-get update
% sudo apt-get install -y kubectl
#. Set up the local configuration and context.
.. note::
In order for your remote host to trust the certificate used by
the |prod-long| Kubernetes API, you must ensure that the
``k8s_root_ca_cert`` specified at install time is a |CA|
certificate trusted by your host. Follow the instructions for adding
a trusted |CA| certificate for the operating system distribution
of your particular host.
If you did not specify a ``k8s_root_ca_cert`` at install
time, then specify ``--insecure-skip-tls-verify``, as shown below.
The following example configures the default ~/.kube/config. See the
following reference:
`https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/
<https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/>`__.
You need to obtain a floating |OAM| IP.
.. code-block:: none
% kubectl config set-cluster mycluster --server=https://${OAM_IP}:6443 \
--insecure-skip-tls-verify
% kubectl config set-credentials kubernetes-admin@mycluster --token=$TOKEN_DATA
% kubectl config set-context kubernetes-admin@mycluster --cluster=mycluster \
--user kubernetes-admin@mycluster --namespace=default
% kubectl config use-context kubernetes-admin@mycluster
``$TOKEN_DATA`` is the token retrieved in step 1.
#. Test remote :command:`kubectl` access.
.. code-block:: none
% kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE ...
controller-0 Ready master 15h v1.12.3 192.168.204.3 <none> CentOS L ...
controller-1 Ready master 129m v1.12.3 192.168.204.4 <none> CentOS L ...
worker-0 Ready <none> 99m v1.12.3 192.168.204.201 <none> CentOS L ...
worker-1 Ready <none> 99m v1.12.3 192.168.204.202 <none> CentOS L ...
%
% kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE ...
controller-0 Ready master 15h v1.12.3 192.168.204.3 <none> CentOS L ...
controller-1 Ready master 129m v1.12.3 192.168.204.4 <none> CentOS L ...
worker-0 Ready <none> 99m v1.12.3 192.168.204.201 <none> CentOS L ...
worker-1 Ready <none> 99m v1.12.3 192.168.204.202 <none> CentOS L ...
%
#. Install the :command:`helm` client on the remote workstation (an Ubuntu
host in this example) by taking the following actions.
@ -139,8 +139,8 @@ You can now use the private Tiller server remotely or locally by specifying the
:ref:`Configuring Container-backed Remote CLIs
<kubernetes-user-tutorials-configuring-container-backed-remote-clis-and-clients>`
:ref:`Using Container-backed Remote CLIs
<usertask-using-container-backed-remote-clis-and-clients>`
:ref:`Use Container-backed Remote CLIs and Clients
<using-container-based-remote-clis-and-clients>`
:ref:`Installing Kubectl and Helm Clients Directly on a Host
<kubernetes-user-tutorials-installing-kubectl-and-helm-clients-directly-on-a-host>`
@ -27,10 +27,9 @@ Remote CLI access
remote-cli-access
kubernetes-user-tutorials-configuring-container-backed-remote-clis-and-clients
usertask-using-container-backed-remote-clis-and-clients
using-container-based-remote-clis-and-clients
kubernetes-user-tutorials-installing-kubectl-and-helm-clients-directly-on-a-host
configuring-remote-helm-client
using-container-based-remote-clis-and-clients
----------
GUI access
@ -48,9 +48,13 @@ by the remote CLI/client configuration scripts.
tarball, extract it to any location and change the Windows <PATH> variable
to include its bin folder from the extracted winpty folder.
- You will need a kubectl config file containing your user account and login
credentials from your |prod| administrator.
- You need the following from your |prod| administrator: your |WAD| or Local
|LDAP| username and password to get a Kubernetes authentication token, your
Keystone username and password to log into Horizon, the |OAM| IP address
and, optionally, the Kubernetes |CA| certificate of the target |prod|
environment. If HTTPS has been enabled for the |prod| RESTAPI Endpoints on
the target |prod| system, you also need the |CA| certificate that signed
the |prod| REST APIs Endpoint Certificate.
The following procedure helps you configure the Container-backed remote |CLIs|
and clients for a non-admin user.
@ -84,7 +88,8 @@ and clients for a non-admin user.
#. Select :guilabel:`Openstack RC file`.
The file ``my-openrc.sh`` downloads.
The file ``admin-openrc.sh`` downloads. Copy this file to the location of
the extracted tarball.
.. note::
@ -92,17 +97,23 @@ and clients for a non-admin user.
--> Central Cloud Regions --> RegionOne` and download the **Openstack
RC file**.
#. Copy the user-kubeconfig file received from your administrator containing
your user account and credentials to the remote workstation.
#. If HTTPS has been enabled for the |prod| RESTAPI Endpoints on the target
|prod| system, add the following line to the bottom of ``admin-openrc.sh``:
You can copy the file to any location on the remote workstation. For
convenience, this example assumes that it is copied to the location of the
extracted tarball.
.. code-block:: none
.. note::
Confirm that the user-kubeconfig file has 666 permissions after copying
the file to the remote workstation. If necessary, use the following
command to change permissions, :command:`chmod 666 user-kubeconfig`.
export OS_CACERT=<path_to_ca>
where ``<path_to_ca>`` is the absolute path of the |CA| certificate that
signed the |prod| REST APIs Endpoint Certificate provided by your |prod|
administrator.
#. Create an empty user-kubeconfig file on the remote workstation. The
contents will be set later.
.. code-block:: none
$ touch user-kubeconfig
#. On the remote workstation, configure the client access.
@ -116,9 +127,7 @@ and clients for a non-admin user.
implementing the remote |CLIs|.
See the description of the :command:`configure_client.sh` ``-w``
option :ref:`below
<kubernetes-user-tutorials-configuring-container-backed-remote-clis-and-clients-w-option>`
for more details.
option below for more details.
.. code-block:: none
@ -130,7 +139,7 @@ and clients for a non-admin user.
.. code-block:: none
$ ./configure_client.sh -t platform -r my_openrc.sh -k user-kubeconfig -w $HOME/remote_cli_wd
$ ./configure_client.sh -t platform -r admin-openrc.sh -k user-kubeconfig -w $HOME/remote_cli_wd
.. only:: partner
@ -192,7 +201,7 @@ and clients for a non-admin user.
.. parsed-literal::
$ ./configure_client.sh -t platform -r my-openrc.sh -k user-kubeconfig -w $HOME/remote_cli_wd -p |registry-url|/starlingx/stx-platformclients:|v_starlingx-stx-platformclients|
$ ./configure_client.sh -t platform -r admin-openrc.sh -k user-kubeconfig -w $HOME/remote_cli_wd -p |registry-url|/starlingx/stx-platformclients:|v_starlingx-stx-platformclients|
If you specify repositories that require authentication, you must
perform a :command:`docker login` to that repository before using
@ -209,6 +218,35 @@ and clients for a non-admin user.
in the shell to set up required environment variables and aliases
before any remote |CLI| commands can be run.
#. Copy the file ``remote_client_platform.sh`` to ``$HOME/remote_cli_wd``
#. Update the contents of the user-kubeconfig file using the
:command:`kubectl` command from the container. Use the |OAM| IP address
and the Kubernetes |CA| certificate (named ``k8s-ca.crt`` in the commands
below) obtained from the |prod| administrator. In the example below, the
user is called "user1"; replace it with your username. If the |OAM| IP is
IPv6, enclose the IP in brackets (for example: "[fd00::a14:803]").
.. code-block:: none
$ cd $HOME/remote_cli_wd
$ source remote_client_platform.sh
$ kubectl config set-cluster wrcpcluster --server=https://<OAM_IP>:6443
$ kubectl config set clusters.wrcpcluster.certificate-authority-data $(base64 -w0 k8s-ca.crt)
$ kubectl config set-context user1@wrcpcluster --cluster=wrcpcluster --user user1
$ kubectl config use-context user1@wrcpcluster
If you don't have the Kubernetes |CA| certificate, execute the following
commands instead.
.. code-block:: none
$ cd $HOME/remote_cli_wd
$ source remote_client_platform.sh
$ kubectl config set-cluster wrcpcluster --server=https://<OAM_IP>:6443 --insecure-skip-tls-verify
$ kubectl config set-context user1@wrcpcluster --cluster=wrcpcluster --user user1
$ kubectl config use-context user1@wrcpcluster
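The ``certificate-authority-data`` value must be a single-line base64 encoding of the |CA| certificate, which is why the ``-w0`` (disable line wrapping) option is passed to :command:`base64`. A minimal, self-contained sketch of that encoding step, assuming ``k8s-ca.crt`` is in the current directory:

```shell
# Encode the Kubernetes CA certificate as one unwrapped base64 line,
# the format a kubeconfig expects for certificate-authority-data.
CA_DATA="$(base64 -w0 k8s-ca.crt)"

# Sanity check: the encoded value must decode back to the original
# certificate byte-for-byte.
printf '%s' "${CA_DATA}" | base64 -d | cmp - k8s-ca.crt && echo "encoding OK"
```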
.. rubric:: |postreq|
After configuring the platform's container-backed remote CLIs/clients, the
@ -24,73 +24,35 @@ order to use :command:`helm`.
.. rubric:: |prereq|
You will need the following information from your |prod| administrator:
You will need the following from your |prod| administrator: a |WAD| or Local
|LDAP| username and password (used to obtain the Kubernetes authentication
token), the |OAM| IP address and, optionally, the Kubernetes |CA| certificate
of the target |prod| environment.
.. _kubernetes-user-tutorials-installing-kubectl-and-helm-clients-directly-on-a-host-ul-nlr-1pq-nlb:
- the floating |OAM| IP address of the |prod|
- login credential information; in this example, it is the "TOKEN" for a
local Kubernetes ServiceAccount.
You must have the **oidc-auth-apps** |OIDC| Identity Provider (dex) configured
on the target |prod| environment to get Kubernetes authentication tokens.
.. xreflink For a Windows Active Directory user, see,
|sec-doc|: :ref:`Overview of LDAP Servers <overview-of-ldap-servers>`.
- your kubernetes namespace
.. rubric:: |proc|
.. _kubernetes-user-tutorials-installing-kubectl-and-helm-clients-directly-on-a-host-steps-f54-qqd-tkb:
#. On the workstation, install the :command:`kubectl` client on an Ubuntu
host by performing the following actions on the remote Ubuntu system.
#. On the workstation, install the :command:`kubectl` client, set up the
Kubernetes configuration and get a token. Follow the steps in the section
`Kubernetes Remote Client Access using the Host Directly` at :ref:`Configure
Kubernetes Client Access <configure-kubernetes-client-access>`, then test
:command:`kubectl` access with the command below.
#. Install the :command:`kubectl` client CLI.
.. code-block:: none
.. code-block:: none
% sudo apt-get update
% sudo apt-get install -y apt-transport-https
% curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add
% echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
% sudo apt-get update
% sudo apt-get install -y kubectl
.. _security-install-kubectl-and-helm-clients-directly-on-a-host-local-configuration-context:
#. Set up the local configuration and context.
.. note::
In order for your remote host to trust the certificate used by the
|prod-long| K8s API, you must ensure that the
**k8s_root_ca_cert** provided by your |prod| administrator is a
trusted CA certificate by your host. Follow the instructions for
adding a trusted CA certificate for the operating system
distribution of your particular host.
If your administrator does not provide a **k8s_root_ca_cert**
at the time of installation, then specify
insecure-skip-tls-verify, as shown below.
.. code-block:: none
% kubectl config set-cluster mycluster --server=https://<$CLUSTEROAMIP>:6443 --insecure-skip-tls-verify
% kubectl config set-credentials dave-user@mycluster --token=$MYTOKEN
% kubectl config set-context dave-user@mycluster --cluster=mycluster --user dave-user@mycluster --namespace=$MYNAMESPACE
% kubectl config use-context dave-user@mycluster
#. Test remote :command:`kubectl` access.
.. code-block:: none
% kubectl get pods -o wide
NAME            READY   STATUS    RESTARTS   AGE   IP              NODE       NOMINATED NODE   READINESS GATES
nodeinfo-648f.. 1/1     Running   0          62d   172.16.38.83    worker-4   <none>           <none>
nodeinfo-648f.. 1/1     Running   0          62d   172.16.97.207   worker-3   <none>           <none>
nodeinfo-648f.. 1/1     Running   0          62d   172.16.203.14   worker-5   <none>           <none>
tiller-deploy.. 1/1     Running   0          27d   172.16.97.219   worker-3   <none>           <none>
#. On the workstation, install the :command:`helm` client by performing
the following actions on the remote Ubuntu system.
@ -116,4 +78,4 @@ You will need the following information from your |prod| administrator:
:ref:`Using Container-backed Remote CLIs and Clients
<using-container-based-remote-clis-and-clients>`
:ref:`Configuring Remote Helm Client <configuring-remote-helm-client>`
:ref:`Configuring Remote Helm Client <configuring-remote-helm-client>`
@ -1,163 +0,0 @@
.. vja1605798752774
.. _usertask-using-container-backed-remote-clis-and-clients:
================================
Use Container-backed Remote CLIs
================================
Remote platform |CLIs| can be used in any shell after sourcing the generated
remote CLI/client RC file. This RC file sets up the required environment
variables and aliases for the remote CLI commands.
.. contents:: The following topics are discussed below:
:local:
:depth: 1
.. note::
Consider adding this command to your .login or shell rc file, such that
your shells will automatically be initialized with the environment
variables and aliases for the remote CLI commands.
.. rubric:: |prereq|
You must have completed the configuration steps described in
:ref:`Configuring Container-backed Remote CLIs
<kubernetes-user-tutorials-configuring-container-backed-remote-clis-and-clients>`
before proceeding.
.. rubric:: |proc|
*******************************
Kubernetes kubectl CLI commands
*******************************
.. note::
The first usage of a remote CLI command will be slow as it requires
that the docker image supporting the remote CLIs/clients be pulled from
the remote registry.
.. code-block:: none
root@myclient:/home/user/remote_wd# source remote_client_platform.sh
Please enter your OpenStack Password for project tenant1 as user user1:
root@myclient:/home/user/remote_wd# kubectl -n kube-system get pods
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-767467f9cf-wtvmr 1/1 Running 1 3d2h
calico-node-j544l 1/1 Running 1 3d
calico-node-ngmxt 1/1 Running 1 3d1h
calico-node-qtc99 1/1 Running 1 3d
calico-node-x7btl 1/1 Running 4 3d2h
ceph-pools-audit-1569848400-rrpjq 0/1 Completed 0 12m
ceph-pools-audit-1569848700-jhv5n 0/1 Completed 0 7m26s
ceph-pools-audit-1569849000-cb988 0/1 Completed 0 2m25s
coredns-7cf476b5c8-5x724 1/1 Running 1 3d2h
...
root@myclient:/home/user/remote_wd#
.. note::
Some CLI commands are designed to leave you in a shell prompt, for
example:
.. code-block:: none
root@myclient:/home/user/remote_wd# openstack
or
.. code-block:: none
root@myclient:/home/user/remote_wd# kubectl exec -ti <pod_name> -- /bin/bash
In most cases, the remote CLI will detect and handle these commands
correctly. If you encounter cases that are not handled correctly, you
can force-enable or disable the shell options using the <FORCE_SHELL>
or <FORCE_NO_SHELL> variables before the command.
For example:
.. code-block:: none
root@myclient:/home/user/remote_wd# FORCE_SHELL=true kubectl exec -ti <pod_name> -- /bin/bash
root@myclient:/home/user/remote_wd# FORCE_NO_SHELL=true kubectl exec <pod_name> -- ls
You cannot use both variables at the same time.
************************************
Remote CLI commands with local files
************************************
If you need to run a remote CLI command that references a local file, then
that file must be copied to or created in the working directory specified
in the ``-w`` option on the ``./config_client.sh`` command.
For example:
#. If you have not already done so, source the ``remote_client_platform.sh``
file.
.. code-block:: none
root@myclient:/home/user/remote_wd# source remote_client_platform.sh
#. Copy the local file and run the remote command.
.. code-block:: none
root@myclient:/home/user# cp /<someDir>/test.yml $HOME/remote_cli_wd/test.yml
root@myclient:/home/user# cd $HOME/remote_cli_wd
root@myclient:/home/user/remote_cli_wd# kubectl -n kube-system create -f test.yml
pod/test-pod created
root@myclient:/home/user/remote_cli_wd# kubectl -n kube-system delete -f test.yml
pod/test-pod deleted
****
Helm
****
Do the following to use helm.
.. note::
For non-admin users, additional configuration is required first as
discussed in |sec-doc|: :ref:`Configuring Remote Helm Client for
Non-Admin Users <configure-remote-helm-client-for-non-admin-users>`.
.. note::
When using helm, any command that requires access to a helm repository
(managed locally) will require that you be in the
``$HOME/remote_cli_wd`` directory and use the ``--home "./.helm"`` option.
#. Do the initial set-up of the helm client.
#. If you have not already done so, source the ``remote_client_platform.sh``
file.
.. code-block:: none
% source remote_client_platform.sh
#. Complete the initial set-up.
.. code-block:: none
% cd $HOME/remote_cli_wd
% helm init --client-only --home "./.helm"
#. Run a helm command.
#. If you have not already done so, source the ``remote_client_platform.sh``
file.
.. code-block:: none
% source remote_client_platform.sh
#. Run a helm command. This example installs WordPress.
.. code-block:: none
% cd $HOME/remote_cli_wd
% helm list
% helm install --name wordpress stable/wordpress --home "./.helm"
@ -22,19 +22,30 @@ variables and aliases for the remote |CLI| commands.
.. code-block:: none
root@myclient:/home/user/remote_cli_wd# source remote_client_platform.sh
Please enter your OpenStack Password for project admin as user admin-user:
If you specified repositories that require authentication when configuring
the container-backed remote |CLIs|, you must perform a :command:`docker
login` to that repository before using remote |CLIs| for the first time
You must have completed the configuration steps described in :ref:`Configuring
Container-backed Remote CLIs and Clients
<kubernetes-user-tutorials-configuring-container-backed-remote-clis-and-clients>`
before proceeding.
- You must complete the configuration steps described in
  :ref:`Configure Container-backed Remote CLIs
  <kubernetes-user-tutorials-configuring-container-backed-remote-clis-and-clients>`
  before proceeding.
- You must have the **oidc-auth-apps** |OIDC| Identity Provider
(dex) configured on the target |prod| environment to get Kubernetes
authentication tokens.
.. rubric:: |proc|
- For Kubernetes :command:`kubectl` |CLI| commands:
- To execute :command:`kubectl` |CLI| commands, you must first obtain a
  Kubernetes authentication token. Execute the command below to get one.
  In this example, the user is called "user1"; replace it with your
  username. The token is stored in the "user-kubeconfig" file and is valid
  for up to 24 hours, so a new token must be generated regularly. The
  |OAM| IP mentioned below is the IP of the target |prod| environment.
.. note::
The first usage of a remote |CLI| command will be slow as it requires
@ -43,7 +54,11 @@ before proceeding.
.. code-block:: none
Please enter your OpenStack Password for project tenant1 as user user1:
root@myclient:/home/user/remote_cli_wd# oidc-auth -c <OAM_IP> -u user1 -p <USER_PASSWORD>
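After :command:`oidc-auth` completes, the retrieved token is stored in the ``user-kubeconfig`` file. A quick sanity check that the token was actually written can be sketched as below; this is an illustrative check, not part of the product tooling:

```shell
# Check that an OIDC id-token entry is now present in the kubeconfig
# file produced by configure_client.sh (run from $HOME/remote_cli_wd).
if grep -q 'id-token' user-kubeconfig; then
    echo "token present"
else
    echo "token missing - run oidc-auth again" >&2
fi
```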
- For Kubernetes :command:`kubectl` |CLI| commands:
.. code-block:: none
root@myclient:/home/user/remote_cli_wd# kubectl -n kube-system get pods
NAME READY STATUS RESTARTS AGE