Install and Configure UMA for Kubernetes

You can use several methods to install and configure UMA for Kubernetes.
Configure the UMA through the Helm Chart
Helm helps you manage Kubernetes applications. Helm Charts help you define, install, and upgrade Kubernetes applications. You configure the UMA using the Helm chart values.yaml file in the helm-chart/uma directory.
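For orientation, a minimal values.yaml excerpt might look like the following sketch. All values here are illustrative placeholders, not defaults; the nesting mirrors the dotted property paths that the steps in this section describe.

```yaml
# Illustrative excerpt of helm-chart/uma/values.yaml; all values are placeholders.
agentManager:
  url: wss://apmgw.example.com:443      # agent to Enterprise Manager connection URL
  credential: <YOUR_AGENT_TOKEN>        # agent login credential
clusterName: my-cluster
monitor:
  application:
    autoattach:
      filterType: whitelist
      java:
        enabled: true
```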
Prerequisite:
  • Ensure that Helm 3 is installed and properly configured in your Kubernetes setup.
Follow these steps:
  1. Go to the helm-chart/uma directory and open the values.yaml file in a text editor.
  2. Configure UMA using these values.yaml properties:
    • agentManager.url
      The agent and Enterprise Manager connection URL.
    • agentManager.credential
      The agent login credentials.
    • agentManager.version
      If the agent connects to a 10.7-version Enterprise Manager, set this property to 10.7. Otherwise, ignore this property.
      Default: "" (empty value)
    • agentManager.httpProxy.host
      The agent and Enterprise Manager connection proxy host, if applicable.
    • agentManager.httpProxy.port
      The agent and Enterprise Manager connection proxy port, if applicable.
    • agentManager.httpProxy.username
      The agent and Enterprise Manager connection proxy username, if applicable.
    • agentManager.httpProxy.password
      The agent and Enterprise Manager connection proxy password, if applicable.
    • clusterName
      The name of the Kubernetes cluster.
    • monitor.application.autoattach.filterType
      This property controls the operation mode of the AutoAttach extension.
      Values: whitelist, blacklist
      • whitelist only attaches to processes that are marked with the environment variable CA_APM_MONITORING_ENABLED=true, or with the annotation ca.broadcom.com/autoattach.enabled=true at the Kubernetes Object level: Namespace, Deployment, Pod, and so on.
      • blacklist attaches to all processes except those that are marked with the environment variable CA_APM_MONITORING_ENABLED=false or with the annotation ca.broadcom.com/autoattach.enabled=false.
      Default: whitelist
    • monitor.application.autoattach.java.enabled
      Indicates whether the deep application monitoring AutoAttach extension is turned on or off for Java applications. Change this property to
      false
      if you want to disable this extension.
      Default:
      true
    • monitor.application.autoattach.java.propertiesOverride
      A comma-separated list of Java agent properties that you want to send as part of the AutoAttach extension configuration for Java applications. These properties are passed to the attached container as Java system properties. When configuring the agent, the environment variables take precedence, followed by the Java system properties, and lastly, the default agent profile properties.
      Alternatively, you can configure these properties using the ca.broadcom.com/autoattach.java.agent.overrides annotation, which can be set at the Kubernetes Object level: Namespace, Deployment, Pod, and so on. The ca.broadcom.com/autoattach.java.agent.overrides annotation takes precedence over this value.
    • monitor.application.autoattach.java.dynamicPropertyResolution.hostName
      Sets the host name for a Java Agent that is attached to a Java application. This property consists of a prioritized list of schemes that UMA uses to determine the agent host name. UMA Dynamic Property Resolution resolves the schemes. Once UMA successfully resolves a scheme in the list, that scheme value is used as the agent host name. UMA ignores the rest of the schemes in the list as possible values.
      More information: UMA Dynamic Property Resolution
      Values: "{k8s_deployment_name},{k8s_daemonset_name},{k8s_pod_name},ContainerHost"
      These are the Kubernetes object attributes for deployment, daemonset, and pod, and the literal string ContainerHost.
      Default: "{k8s_deployment_name},{k8s_daemonset_name},{k8s_pod_name},ContainerHost"
      Notes:
      • The agent host name value that is provided by the monitor.application.autoattach.java.propertiesOverride property or by the ca.broadcom.com/autoattach.java.agent.overrides annotation takes precedence over this value.
      • The quotation marks around the listed values are required.
    • monitor.application.autoattach.java.dynamicPropertyResolution.agentName
      Sets the name for a Java Agent that is attached to a Java application. This property consists of a prioritized list of schemes that UMA uses to determine the agent name. UMA Dynamic Property Resolution tests and resolves the schemes. Once UMA successfully resolves a scheme in the list, that scheme value is used as the agent name. UMA ignores the rest of the schemes in the list as possible values.
      More information: UMA Dynamic Property Resolution
      When the value is empty, UMA does not attempt to determine the agent name for the attached Java Agent.
      Default: "" (empty value)
      Notes:
      • The agent name value that is provided by the monitor.application.autoattach.java.propertiesOverride property or by the ca.broadcom.com/autoattach.java.agent.overrides annotation takes precedence over this value.
      • The quotation marks around the listed values are required.
    • monitor.application.autoattach.dotnet.enabled
      Indicates whether the deep application monitoring AutoAttach extension is turned on or off for .NET applications. Change this property to
      false
      when you want to disable this extension.
      Default:
      true
    • monitor.application.autoattach.dotnet.propertiesOverride
      A comma-separated list of .NET agent properties that you want to send as part of the AutoAttach extension configuration for .NET applications. These properties are passed to the attached container as environment variables. When configuring the agent, the environment variables take precedence, followed by the default agent profile properties.
      Alternatively, you can configure these agent properties using the ca.broadcom.com/autoattach.net.agent.overrides annotation, which can be set at the Kubernetes Object level: Namespace, Deployment, Pod, and so on. The ca.broadcom.com/autoattach.net.agent.overrides annotation takes precedence over this value.
    • monitor.application.jmx.enabled
      Indicates whether Kubernetes Remote JMX Agent monitoring is turned on or off. Change the value to
      false
      if you want to disable Remote JMX Agent monitoring.
      Default:
      true
    • monitor.container.dockerstats.enabled
      Indicates whether the Kubernetes node and cluster monitoring is turned on or off. This monitoring does not require any software to be pre-installed in the Kubernetes cluster. Change this property to
      false
      if you want to disable the Kubernetes node and cluster monitoring.
      Default:
      true
    • monitor.container.prometheus.exporter.enabled
      Indicates whether Kubernetes node and container monitoring through Prometheus exporters is turned on or off. To use this property, you must have the cAdvisor and the Node Prometheus exporters installed. Set the value to
      true
      for the Kubernetes node and container monitoring to occur through the Prometheus exporters only.
      Default:
      true
    • monitor.container.prometheus.backend.enabled
      Indicates whether Kubernetes Monitoring through the Prometheus server URL is turned on or off.
      Default: false
      More information: Prometheus Data Ingestion
    • monitor.container.prometheus.backend.endPoint.url
      The Prometheus Server endpoint URL string.
    • monitor.container.prometheus.backend.endPoint.username
      The username used to connect to the Prometheus endpoint URL, if applicable. Leave the value blank if not needed.
    • monitor.container.prometheus.backend.endPoint.password
      The password used to connect to the Prometheus endpoint URL, if applicable. Leave the value blank if not needed.
    • monitor.container.prometheus.backend.endPoint.token
      The token used to connect to the Prometheus endpoint URL, if applicable. Leave the value blank if not needed.
    • monitor.container.prometheus.backend.endPoint.metricAlias
      Sometimes different versions of Prometheus exporters use different label names. For example, one version of cAdvisor generates the container name as container_name and another version generates it as container. This property makes that adjustment automatically. The value is a comma-separated list that indicates the adjustments that are needed, if any. For example, passing a value of container_name=container,pod_name=pod enables UMA to support the version of the cAdvisor exporter that emits container as the metric label for the container name.
    • monitor.container.prometheus.backend.filter.name
      The key name of the label for the Prometheus query. For example, if you want to collect data for a specific Kubernetes object such as a namespace, use the label key name here. This name, along with the value of this key, is appended to all queries.
    • monitor.container.prometheus.backend.filter.value
      The value of the label for the Prometheus query. For example, if you want to collect data for a specific Kubernetes object such as a namespace, use the label value here. This value, along with the key, is appended to all queries.
      For example, if you want to monitor only the kube-system namespace, pass:
      monitor.container.prometheus.backend.filter.name: namespace
      monitor.container.prometheus.backend.filter.value: kube-system
    • monitor.clusterPerformance.enabled
      Indicates whether the Kubernetes Service Monitor through Prometheus exporters is turned on or off. To use this monitoring, you must have the Prometheus haproxy, coredns, or kube-state-metrics exporters installed.
      Default:
      true
    • type
      The type of deployment. For the Kubernetes cluster, this value must be
      Kubernetes
      .
      Values:
      Kubernetes, OpenShift
      Default:
      Kubernetes
  3. Save and close the file.
  4. Run Helm commands from the command line to install the UMA as a Helm Chart using one of these three methods.
    You can use caapm as the namespace name, or you can create your own namespace name. If you create your own name, substitute your name for caapm in these commands.
    • Pass values through the Helm command line by running this command, replacing the placeholders with your Enterprise Manager URL and credential:
      helm install uma ./helm-chart/uma --set agentManager.url=<YOUR_URL> --set agentManager.credential=<YOUR_TOKEN> --namespace caapm
    • Set the agentManager.url and agentManager.credential properties in values.yaml, then run this command:
      helm install uma ./helm-chart/uma --namespace caapm
    • Upgrade the release of the existing UMA Helm Chart (when it is already installed) by running this command:
      helm upgrade --set autoattach.enabled=true uma ./helm-chart/uma --namespace caapm
  5. Make sure that all the Kubernetes objects are properly installed. When you enable all the features, one DaemonSet and three Deployments are installed. Therefore, if you have an N-node cluster, the cluster contains N+3 UMA pods.
    The clusterinfo Deployment is always installed. Usually the same Kubernetes object serves multiple features.
  6. Based on any Kubernetes features that you select, ensure that the associated Kubernetes object is created.
    This table shows the list of Kubernetes objects that are created for each feature.
    Kubernetes Objects Created for Each Feature
    Enabled Feature | Kubernetes Object Type: Name
    monitor.application.autoattach.java.enabled | DaemonSet: app-container-monitor
    monitor.application.autoattach.dotnet.enabled | DaemonSet: app-container-monitor
    monitor.application.jmx.enabled | DaemonSet: app-container-monitor
    monitor.container.prometheus.exporter.enabled | DaemonSet: app-container-monitor
    monitor.container.prometheus.backend.enabled | Deployment: cluster-performance-prometheus
    monitor.container.dockerstats.enabled | DaemonSet: app-container-monitor; Deployment: container-monitor
    monitor.clusterPerformance.enabled | Deployment: cluster-performance-prometheus
  7. Examine metrics in the metric browser to be sure that all the UMA agents are properly connected to the Enterprise Manager. Use the information in this table to check the agent connection.
Information to Check the Agent Connection
• DaemonSet: app-container-monitor
  Agent Name: <NodeName>|<ClusterName>|Kubernetes Agent
  This name can be overridden by changing the host, process, and agent entries under agentNaming.daemonset.apmia in values.yaml.
• Deployment: container-monitor
  Agent Name: <ClusterName>|ClusterDeployment|Infrastructure Agent
  This name can be overridden by changing the host, process, and agent entries under agentNaming.deployment.apmia in values.yaml.
• Deployment: cluster-performance-prometheus
  Agent Name: <ClusterName>|ClusterPerformanceMonitor|Prometheus Agent
  This name can be overridden by changing the host, process, and agent entries under agentNaming.deployment.prometheus in values.yaml.
Uninstall the UMA through the Helm Chart
Use the following Helm 3 command to uninstall the UMA through the Helm Chart:
helm uninstall uma --namespace caapm
Install the Agent through the Kubernetes Operator File
If Helm is not installed in your Kubernetes setup, you can install the agent through the UMA Operator package. After you download the UMA Operator package, you install and configure the operator and the custom resource definition.
The Kubernetes version must be 1.11.3 or higher. We recommend that you use the latest version.
Follow these steps:
  1. Download the UMA Kubernetes Operator package from here.
  2. Untar the package on a Kubernetes node by running this command:
    $ tar -xvf uma-operator.tar.gz
    Extracting the package creates these files:
    • uma-operator/setup/uma_crd.yaml
    • uma-operator/setup/operator.yaml
    • uma-operator/setup/role.yaml
    • uma-operator/setup/role_binding.yaml
    • uma-operator/setup/service_account.yaml
    • uma-operator/uma_cr.yaml
  3. Run this command to create the caapm namespace:
    kubectl create ns caapm
  4. Install the UMA operator and service account in either the caapm namespace or a different namespace.
    1. Run this command to install in the caapm namespace:
      kubectl create -f ./uma-operator/setup/ -n caapm
    2. (Optional) Run this command to install the UMA operator and service account in a different namespace:
      kubectl create -f ./uma-operator/setup/ -n <new_namespace_name>
  5. Verify that the UMA operator and service account are installed successfully.
    • Ensure that a pod is created and is in the Running state.
  6. Edit these uma-operator/uma_cr.yaml properties as shown:
    agentManager.url: wss://apmgw.dxi-na1.saas.broadcom.com:443
    agentManager.credential: eyJ0eXAiOiJKV1QiLCJhbGciOiJFUzI1NiJ9.eyJhZ2VudCI6dHJ1ZSwiZXhwIjo5MjIzMzcyMDM2ODU0Nzc1LCJ0aWQiOjI5LCJqdGkiOiJkZjk5MGExYS1lZjYzLTRkOGMtOTViOC1kYjAzYWU5Yjk5N2MifQ.5x8evO0j6NKSnBr2XdAoQRQDSxPWxAYxBGcCCwP2OIUENkzCH0I1nzdK0kKjaLSkTo8eA2c5XvJSSDyK7-Z5ag
  7. Create the UMA operator custom resource definition in the caapm namespace or a different namespace.
    1. Run this command to create the UMA operator custom resource definition in the caapm namespace:
      kubectl create -f ./uma-operator/uma_cr.yaml -n caapm
    2. (Optional) To use a different namespace, replace the caapm namespace with the new namespace name in the role_binding.yaml and operator.yaml files, then run this command:
      kubectl create -f ./uma-operator/uma_cr.yaml -n <new_namespace_name>
  8. Verify that the UMA operator custom resource definition was created based on what you selected in the uma_cr.yaml file.
    • Ensure that DX APM created multiple pods.
Uninstall the Kubernetes Operator
Uninstall the operator only in the given order.
Follow these steps:
  1. At a command prompt, run these uninstall commands only in this order:
    1. kubectl delete -f ./uma-operator/uma_cr.yaml -n caapm
    2. kubectl delete -f ./uma-operator/setup/ -n caapm
    3. kubectl delete ns caapm
  2. If any custom resources are blocked from deletion, patch the custom resources with null finalizers.
    1. At a command prompt, run this command:
      kubectl patch universalmonitoringagent/ca-universalmonitoringagent -p '{"metadata":{"finalizers":[]}}' --type=merge -n caapm
    2. After you patch the custom resources, run this command to uninstall everything:
      kubectl delete -f
Configure the UMA Operator Connections and Credentials
You first configure the agentManager URL and credential details in the first section of the custom resource file. You can also configure other proxy resources.
Follow these steps:
  1. Navigate to the
    uma-operator
    directory and open the
    uma_cr.yaml
    file in a text editor.
  2. If necessary, add any of the following UMA Operator resource properties to the uma_cr.yaml file, then configure them as appropriate for your environment.
    • .agentManager.credential
      The agent and Enterprise Manager connection credentials.
    • .agentManager.httpProxy.host
      The agent and Enterprise Manager connection proxy host, if applicable.
    • .agentManager.httpProxy.password
      The agent and Enterprise Manager connection proxy password, if applicable.
    • .agentManager.httpProxy.port
      The agent and Enterprise Manager connection proxy port, if applicable.
    • .agentManager.httpProxy.username
      The agent and Enterprise Manager connection proxy username, if applicable.
    • .agentManager.url
      The Agent to Enterprise Manager connection URL.
    • .clusterName
      Name of the cluster.
    • .imageName
      Name of the image.
  3. Save and close the file.
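As a sketch, the connection-related properties above might appear in uma_cr.yaml as follows. All values are placeholders, and depending on your operator package, these keys may sit under the custom resource's spec section.

```yaml
# Illustrative excerpt of uma-operator/uma_cr.yaml; values are placeholders.
agentManager:
  url: wss://apmgw.example.com:443
  credential: <YOUR_AGENT_TOKEN>
  httpProxy:                     # only needed when a proxy is in use
    host: proxy.example.com
    port: 3128
    username: ""
    password: ""
clusterName: my-cluster
imageName: caapm/universalmonitoragent:latest
```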
Configure UMA Operator Agent Naming
You configure custom agent naming in the custom resource file.
Follow these steps:
  1. Navigate to the
    uma-operator
    directory and open the
    uma_cr.yaml
    file in a text editor.
  2. Configure the UMA Operator resources using the
    uma_cr.yaml properties
    file:
    • .agentNaming.deployment.apmia.host
      The customized agent name to display in the deployment host.
    • .agentNaming.deployment.apmia.process
      The customized agent name to display in the deployment process.
    • .agentNaming.deployment.apmia.agent
      The customized agent name to display as the deployment agent.
    • .agentNaming.deployment.prometheus.host
      The customized agent name to display in the Prometheus deployment host.
    • .agentNaming.deployment.prometheus.process
      The customized agent name to display in the Prometheus deployment process.
    • .agentNaming.deployment.prometheus.agent
      The customized agent name to display as the Prometheus deployment agent.
    • .agentNaming.daemonset.apmia.host
      The customized agent name to display in the daemonset host.
    • .agentNaming.daemonset.apmia.process
      The customized agent name to display in the daemonset process.
    • .agentNaming.daemonset.apmia.agent
      The customized agent name to display as the daemonset agent.
  3. Save and close the file.
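For illustration, the naming properties above could be set like this. All names are placeholders, and the keys may sit under the custom resource's spec section in your file.

```yaml
# Illustrative sketch: custom agent naming in uma-operator/uma_cr.yaml.
agentNaming:
  deployment:
    apmia:
      host: my-cluster-host
      process: ClusterDeployment
      agent: Infrastructure Agent
    prometheus:
      host: my-cluster-host
      process: ClusterPerformanceMonitor
      agent: Prometheus Agent
  daemonset:
    apmia:
      host: my-node-host
      process: Kubernetes
      agent: Kubernetes Agent
```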
Configure HTTP Collector
You configure the HTTP Collector in the custom resource file.
Follow these steps:
  1. Navigate to the uma-operator directory and open the uma_cr.yaml file in a text editor.
  2. Configure monitor.httpCollector.replicas with the desired number of HTTP Collector Agent instances.
  3. Add this configuration for the HTTP Collector Agent host and process names:
    • For the host: agentNaming.deployment.httpCollector.host
    • For the process: agentNaming.deployment.httpCollector.process
    The Pod name is used for the Agent Name; it is not customizable because there can be multiple HTTP Collector Agent replicas.
  4. Ensure that the HTTP Collector monitor.httpCollector.enabled property is set to true, as provided in the YAML file.
  5. Ensure that the ingress monitor.httpCollector.ingress.enabled property is set to false, as provided in the YAML file.
  6. Save and close the file.
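A sketch of the HTTP Collector settings described above; the replica count and naming values are placeholders, and the keys may sit under the custom resource's spec section in your file.

```yaml
# Illustrative sketch: HTTP Collector settings in uma-operator/uma_cr.yaml.
monitor:
  httpCollector:
    enabled: true          # keep as provided in the YAML file
    replicas: 2            # desired number of HTTP Collector Agent instances
    ingress:
      enabled: false       # keep as provided in the YAML file
agentNaming:
  deployment:
    httpCollector:
      host: my-collector-host
      process: HttpCollectorProcess
```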
Install the Agent through the YAML File
If Helm is not installed in your Kubernetes setup, you can install the agent through the YAML file. The UMA for Kubernetes YAML file contains a central configuration section (ConfigMap) where you specify the agent property values.
Follow these steps:
  1. Install the agent into the caapm namespace by running this command in the Kubernetes cluster:
    kubectl apply -f ca-uma-agent.yaml -n caapm
  2. (Optional) Change the caapm namespace to a name of your choice.
    Important! Ensure that you have created the new namespace before changing the name. You must create the namespace before installing the ca-uma-agent.yaml file.
    In these substeps, we use new_namespace_name as an example namespace name.
    1. Replace all occurrences of caapm as the namespace name with the new namespace name in the ca-uma-agent.yaml file.
    2. Run this command in the Kubernetes cluster:
      kubectl apply -f ca-uma-agent.yaml -n new_namespace_name
  3. Log into
    DX APM
    .
  4. In the lower section of the left side bar, select the
    Agents
    icon.
    The
    Settings
    screen opens.
  5. Select
    Download Agent
    .
    The
    Select Agent to Download
    screen opens.
  6. In the
    Unix/Linux
    section, select
    Universal Monitoring Agent.
  7. Download the
    YAML
    file.
  8. Navigate to the application YAML file and open the file in a text editor.
  9. Configure these properties in the
    caaiops-config-common ConfigMap
    section based on your setup:
    • agentManager_url_1
      The agent and Enterprise Manager connection details.
    • agentManager_credential
      The login credentials.
    • cluster_name
      The Kubernetes cluster name.
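For orientation, the ConfigMap section might look like the following sketch. The metadata and any additional keys in your downloaded file may differ, and all values are placeholders.

```yaml
# Illustrative sketch of the caaiops-config-common ConfigMap section.
apiVersion: v1
kind: ConfigMap
metadata:
  name: caaiops-config-common
  namespace: caapm
data:
  agentManager_url_1: wss://apmgw.example.com:443
  agentManager_credential: <YOUR_AGENT_TOKEN>
  cluster_name: my-cluster
```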
Uninstall the Agent through the YAML File
If you installed the agent through the YAML file, you can uninstall the agent using this command:
kubectl delete -f ca-uma-agent.yaml -n caapm
Deploy UMA with an Existing Java Agent
UMA includes the Java Agent, and the AutoAttach capabilities apply to that Java Agent. You might have already deployed a Java Agent using another means, such as APM Command Center. In this case, you can deploy the existing Java Agent, or a .NET Core 3.1 or higher agent, with UMA. This deployment allows the existing Java or .NET agent to use the AutoAttach extension to attach the agent to running applications.
Prerequisite: An existing Java Agent not deployed using UMA.
Follow these steps:
  1. Navigate to the existing Java Agent location.
  2. Create a tar.gz file to package the existing Java Agent.
    For example, you name the file agent.tar.gz. Package the Java Agent so that the <Agent_Home> directory is the root. In this example, <Agent_Home> is the wily directory.
    Listing the archive contents with tar -tf agent.tar.gz shows ./wily/, ./wily/Agent.jar, and so on.
  3. Create a Dockerfile with these contents:
    FROM caapm/universalmonitoragent:latest
    RUN rm -fR /usr/local/openshift/apmia/java-agent/wily
    COPY agent.tar.gz /usr/local/openshift/apmia/java-agent
  4. Run this command to build the Docker image:
    docker build -t [registry_url]/caapm/universalmonitoragent:latest-custom .
  5. Run this command to push the Docker image to your internal registry.
    docker push [registry_url]/caapm/universalmonitoragent:latest-custom
  6. Use the new Docker image containing the existing Java Agent when you deploy UMA.
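The packaging step above can be sketched as follows. The wily directory contents here are stand-ins created only for illustration; with a real agent, you would run only the two tar commands from the parent directory of your wily directory.

```shell
# Sketch: package an existing Java Agent with <Agent_Home> (here, wily) as the root.
mkdir -p wily/core/config                               # stand-in agent layout
touch wily/Agent.jar wily/core/config/IntroscopeAgent.profile
tar -czf agent.tar.gz ./wily                            # create the package
tar -tf agent.tar.gz                                    # verify the contents list
```

The listing should show entries such as ./wily/ and ./wily/Agent.jar, matching the layout that the Dockerfile in step 3 expects.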
UMA Dynamic Property Resolution
UMA can dynamically determine the values of certain properties. These properties are named UMA dynamic properties.
You configure UMA dynamic property values in the UMA yaml file. These properties allow you to configure Java Agent properties that are based on the Kubernetes object metadata. The metadata includes attributes, labels, annotations, and environment variables.
You use UMA dynamic properties when configuring the AutoAttach extension for the Java Agent. UMA dynamic properties use a list of schemes containing literal strings and Kubernetes object attributes. UMA evaluates the schemes in list order to find and resolve the value. When UMA cannot resolve any of the schemes in the list, UMA does not set the value for the attached Java Agent property. Instead, the Java Agent uses its own default value.
UMA follows this process to dynamically resolve a property value:
  1. UMA attempts to resolve the first scheme in the property list.
  2. UMA dynamically tries to resolve each attribute specified in the scheme.
  3. When UMA can resolve all the attributes, a scheme is considered successful. UMA uses the resolved value for the property value. UMA ignores the rest of the schemes in the property list.
  4. When UMA cannot resolve all the attributes, the scheme is unresolved. UMA attempts to resolve the next scheme in the property list.
  5. UMA evaluates schemes in list order until a scheme is resolved.
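The resolution process above can be simulated with a small shell sketch. The resolve function is a hypothetical helper, not part of UMA; environment variables such as K8S_DEPLOYMENT_NAME stand in for the Kubernetes object attributes that UMA reads from cluster metadata.

```shell
# Hypothetical sketch of scheme resolution; attribute tokens such as
# {k8s_deployment_name} map to shell variables such as K8S_DEPLOYMENT_NAME.
resolve() {
  for scheme in "$@"; do
    result="$scheme"
    ok=true
    # Substitute every {attribute} token in the current scheme.
    while expr "$result" : '.*{[^}]*}' >/dev/null; do
      attr=$(printf '%s' "$result" | sed 's/.*{\([^}]*\)}.*/\1/')
      var=$(printf '%s' "$attr" | tr '[:lower:]' '[:upper:]')
      val=$(eval "printf '%s' \"\${$var:-}\"")
      if [ -z "$val" ]; then ok=false; break; fi       # attribute unresolved
      result=$(printf '%s' "$result" | sed "s/{$attr}/$val/")
    done
    # The first fully resolved scheme wins; later schemes are ignored.
    if [ "$ok" = true ]; then printf '%s\n' "$result"; return 0; fi
  done
  return 1   # no scheme resolved; the agent falls back to its own default
}

K8S_DEPLOYMENT_NAME=tixchange
resolve '{k8s_daemonset_name}' '{k8s_deployment_name}' 'ContainerHost'   # prints tixchange
```

The first scheme fails because K8S_DAEMONSET_NAME is unset, so the second scheme resolves and the literal fallback ContainerHost is never reached.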
UMA Dynamic Property Resolution Schemes
You set UMA dynamic property values as a comma-separated list of schemes in this format:
property="{scheme1},{scheme2},{scheme3},{schemeN}"
You compose each scheme from zero or more literal string values and Kubernetes object attributes. Each attribute is enclosed within braces. The entire property value is surrounded by quotation marks.
A Kubernetes object attribute is a piece of metadata about a container that is running in a Kubernetes cluster. An example is
k8s_namespace_name
. We explain more about Kubernetes objects attributes in the Kubernetes Object Attributes section.
Here are some scheme examples:
Example Scheme 1: One literal string
property="Foo"
Example Scheme 2: One Kubernetes object attribute
property="{k8s_deployment_name}"
Example Scheme 3: Three Kubernetes object attributes and one literal string
property="{k8s_deployment_name},{k8s_daemonset_name},{k8s_pod_name},ContainerHost"
UMA-Supported Kubernetes Object Attributes
A Kubernetes object attribute is a piece of metadata about a container that is running in a Kubernetes cluster. You use these attributes when configuring schemes in UMA dynamic properties. Here are descriptions of the supported Kubernetes object attributes and how DX APM resolves them.
UMA-Supported Kubernetes Object Attributes
Attribute
Description
k8s_container_name
Resolves the name of the container running the Java application.
k8s_container_env_key
Resolves to the value of the
key
environment variable for the container running the Java application.
k8s_replicaset_name
Resolves the name of the ReplicaSet associated with the container running the Java application.
k8s_replicaset_labels_key
Resolves to the value of the label named
key
for the ReplicaSet. The selected ReplicaSet is associated with the container running the Java application.
k8s_replicaset_annotations_key
Resolves to the value of the annotations named
key
for the ReplicaSet. The selected ReplicaSet is associated with the container running the Java application.
k8s_deployment_name
Resolves the name of the Deployment that is associated with the container running the Java application.
k8s_deployment_labels_key
Resolves to the value of the labels named
key
for the Deployment. The selected Deployment is associated with the container running the Java application.
k8s_deployment_annotations_key
Resolves to the value of the annotations named
key
for the Deployment. The selected Deployment is associated with the container running the Java application.
k8s_service_name
Resolves the name of the Service that is associated with the container running the Java application.
k8s_service_labels_key
Resolves to the value of the labels named
key
for the Service. The selected Service is associated with the container running the Java application.
k8s_service_annotations_key
Resolves to the value of the annotations named
key
for the Service. The selected Service is associated with the container running the Java application.
k8s_namespace_name
Resolves the name of the namespace that is associated with the container running the Java application.
k8s_namespace_labels_key
Resolves to the value of the labels named
key
for the namespace. The selected namespace is associated with the container running the Java application.
k8s_namespace_annotations_key
Resolves to the value of the annotations named
key
for the namespace. The selected namespace is associated with the container running the Java application.
UMA Dynamic Property Resolution Examples
When you configure the UMA dynamic resolution properties using UMA-supported Kubernetes objects, UMA resolves the Kubernetes objects. You can set these properties:
  • monitor.application.autoattach.java.dynamicPropertyResolution.hostName
  • monitor.application.autoattach.java.dynamicPropertyResolution.agentName
Use the following examples to help you configure these properties for your environment.
You have deployed the
tixchange
application in a Kubernetes container. Here is the Kubernetes metadata:
Deployment name: tixchange
Kubernetes labels:
  • zone: us-east-1
  • profile: prod
Container environment variable:
  • version: v123
Examples using monitor.application.autoattach.java.dynamicPropertyResolution.hostName
When you configure the property as shown:
monitor.application.autoattach.java.dynamicPropertyResolution.hostName="{k8s_deployment_name}"
UMA resolves the Deployment name as
tixchange
.
When you configure the property as shown:
monitor.application.autoattach.java.dynamicPropertyResolution.hostName="app-{k8s_deployment_name}-{k8s_deployment_labels_zone}"
UMA resolves the
app
literal string, Deployment name, and zone label as
app-tixchange-us-east-1
.
When you configure the property as shown:
monitor.application.autoattach.java.dynamicPropertyResolution.hostName="{k8s_deployment_name}-{k8s_container_env_version}"
UMA resolves the Deployment name and container environment variable for
version
as
tixchange-v123
.
Example using monitor.application.autoattach.java.dynamicPropertyResolution.agentName
When you configure the property as shown:
monitor.application.autoattach.java.dynamicPropertyResolution.agentName="{k8s_deployment_name}Agent"
UMA resolves the Deployment name and literal string
Agent
as
tixchangeAgent
.
UMA Dynamic Property Resolution FAQ
Q: What is the default host name value when none of the naming schemes can be resolved?
A: ContainerHost