Blog Feed

How To Discover ExaCC/ExaCS in Enterprise Manager – Detailed

In this post I want to share the required steps to discover ExaCC or ExaCS in Oracle Enterprise Manager (EM). I’ll provide as much detail as possible.

  1. Make sure your EM is at least on 13.5 RU 16. If not, please apply RU 16 to EM. Follow this link if you want to know how to patch EM. HERE!
  2. You need to designate a “monitoring” agent for the discovery. It is recommended that this agent sit outside your Exadata rack and have access to the OCI REST APIs. More information HERE!
  3. Make sure this agent has both the Database and the Exadata plugins deployed and is patched to the same RU level as the OMS. HERE!
  4. The agent must be able to reach the OCI REST APIs. There are 3 ways to achieve this.
    a) You can use a Proxy
    b) You can use the OCI Management Gateway
    c) You can have direct network connection

    Below is a list of APIs that you will need access to:
    Either grant access to (*.oci.oraclecloud.com) or to the individual URLs:
    https://query.<oci_region>.oci.oraclecloud.com
    https://identity.<oci_region>.oci.oraclecloud.com
    https://database.<oci_region>.oci.oraclecloud.com
    https://wss.exacc.<oci_region>.oci.oraclecloud.com
    https://management-agent.<oci_region>.oci.oraclecloud.com
    https://certificatesmanagement.<oci_region>.oci.oraclecloud.com
    https://certificates.<oci_region>.oci.oraclecloud.com
    https://telemetry-ingestion.<oci_region>.oci.oraclecloud.com
    https://auth.<oci_region>.oci.oraclecloud.com
    https://objectstorage.<oci_region>.oci.oraclecloud.com

    You may also test that connectivity by executing the command below. (For a way to probe all of the endpoints at once, see the Python sketch right after this list.)
    $ curl -v https://query.<oci_region>.oci.oraclecloud.com
  5. Create or use an OCI account that has access to read your Exadata racks. These are the policies I have used:
    allow group <domain/group name> to read database-family in compartment <compartment name>
    You can find more information about the required policies HERE!
  6. Set up API Keys for authentication. Please follow the instructions in the MOS note below:
    EM13.5: Manage OCI Connectivity Named Credentials Test Failed with Invalid Private Key. (Doc ID 2792126.1)
  7. Create an EM Named Credential using the API Keys created in the previous step. More details HERE!
  8. You also need the proper Storage Server credentials. If you don't specify them during the discovery process, Storage Server targets will not be discovered. Instructions on how to retrieve this credential can be found HERE!
    If you can’t retrieve these credentials, please open a Service Request with support.
  9. Discover your Exadata Infrastructure following the discovery wizard. More information HERE!
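
As mentioned in step 4, here is a minimal Python sketch that probes every REST endpoint in that list over TCP port 443. This is only a sanity check, assuming Python 3 and working name resolution; replace the <oci_region> placeholder with your OCI region (for example, us-ashburn-1).

#!/usr/bin/python3
# Minimal connectivity probe for the OCI REST endpoints listed in step 4.
# Assumption: direct (or transparently proxied) network access and Python 3.
import socket

region = "<oci_region>"  # replace with your region, e.g. us-ashburn-1

hosts = [
    "query.{r}.oci.oraclecloud.com",
    "identity.{r}.oci.oraclecloud.com",
    "database.{r}.oci.oraclecloud.com",
    "wss.exacc.{r}.oci.oraclecloud.com",
    "management-agent.{r}.oci.oraclecloud.com",
    "certificatesmanagement.{r}.oci.oraclecloud.com",
    "certificates.{r}.oci.oraclecloud.com",
    "telemetry-ingestion.{r}.oci.oraclecloud.com",
    "auth.{r}.oci.oraclecloud.com",
    "objectstorage.{r}.oci.oraclecloud.com",
]

for h in hosts:
    host = h.format(r=region)
    try:
        # A DNS failure or a timeout both surface here as exceptions
        with socket.create_connection((host, 443), timeout=5):
            print("OK     ", host)
    except OSError as err:
        print("FAILED ", host, "-", err)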

After finishing the discovery, you should have your Exadata Cloud target in EM. Please wait between 15 to 20 minutes for the target information to be populated.

If your network setup does not allow direct connectivity to the OCI REST APIs, you will have to use an internal proxy or the OCI Management Gateway. More information about the OCI Management Gateway can be found HERE!

There’s also a very good post from Simone about setting up the Management Gateway for the EM agents.
https://blogs.oracle.com/observability/post/setup-oem-agents-cloud-mgmt-gw

Update:
You will also have to modify two parameters: one at the OMS level and one at the agent level. Please follow the MOS notes below:

OEM 13c : SSH Key Credential Test On a Host Target Fails With “Remote operation timed out.” (Doc ID 2415262.1)
EM 13.2: Agent Patching From EM Console Fails With Error “concurrent job tasks limit (50) has been reached” (Doc ID 2476803.1)

Thanks,
Alfredo

Extract OCI Monitoring Metrics Using REST APIs

In my previous post I showed you how you can create custom metrics in order to monitor a Standby database. Those steps detailed how to ingest (post) metrics to the OCI Monitoring service.



In this post I’ll show you how you can list metric definitions and how you can extract metric data from the Monitoring service. This metric extraction is really useful when you need to integrate the Monitoring service with 3rd party tools.

Step 1 – Prerequisites

Please complete all the prerequisites from the previous blog post.

Step 2 – List Metrics

In this step we are going to create the Python script that lists our custom metric.

Copy the code below and paste it into a file named list_metric.py

#!/usr/bin/python3

# This is an automatically generated code sample.
# To make this code sample work in your Oracle Cloud tenancy,
# please replace the values for any parameters whose current values do not fit
# your use case (such as resource IDs, strings containing EXAMPLE or unique_id, and
# boolean, number, and enum parameters with values not fitting your use case).

import oci

# Create a default config using DEFAULT profile in default location
# Refer to
# https://docs.cloud.oracle.com/en-us/iaas/Content/API/Concepts/sdkconfig.htm#SDK_and_CLI_Configuration_File
# for more info
config = oci.config.from_file()


# Initialize service client with default config file
monitoring_client = oci.monitoring.MonitoringClient(config)


# Send the request to service, some parameters are not required, see API
# doc for more info
list_metrics_response = monitoring_client.list_metrics(
    compartment_id="<YOUR COMPARTMENT ID>",
    list_metrics_details=oci.monitoring.models.ListMetricsDetails(
        name="<YOUR METRIC NAME>",
        namespace="<YOUR CUSTOM NAMESPACE>"))


# Get the data from response
print(list_metrics_response.data)

Amend the following inputs as needed for your metric and OCI configuration:

  • <YOUR METRIC NAME>
  • <YOUR CUSTOM NAMESPACE>
  • <YOUR COMPARTMENT ID>
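
Note that we execute the script directly below, so you may need to make the file executable first:

$ chmod +x list_metric.py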

Let’s execute our script:

$ ./list_metric.py
[{
  "compartment_id": "<YOUR COMPARTMENT ID>",
  "dimensions": {
    "server_id": "<YOUR SERVER ID>"
  },
  "name": "<YOUR METRIC NAME>",
  "namespace": "<YOUR CUSTOM NAMESPACE>",
  "resource_group": null
}]

Step 3 – Query Metric Data

In this step we are going to create a Python script that queries metric data.

Copy the code below and paste it into a file named query_metric.py

#!/usr/bin/python3

# This is an automatically generated code sample.
# To make this code sample work in your Oracle Cloud tenancy,
# please replace the values for any parameters whose current values do not fit
# your use case (such as resource IDs, strings containing EXAMPLE or unique_id, and
# boolean, number, and enum parameters with values not fitting your use case).

import json  # needed to parse the response further below
import oci
from datetime import datetime

# Create a default config using DEFAULT profile in default location
# Refer to
# https://docs.cloud.oracle.com/en-us/iaas/Content/API/Concepts/sdkconfig.htm#SDK_and_CLI_Configuration_File
# for more info
config = oci.config.from_file()


# Initialize service client with default config file
monitoring_client = oci.monitoring.MonitoringClient(config)


# Send the request to service, some parameters are not required, see API
# doc for more info
summarize_metrics_data_response = monitoring_client.summarize_metrics_data(
    compartment_id="<YOUR COMPARTMENT ID>",
    summarize_metrics_data_details=oci.monitoring.models.SummarizeMetricsDataDetails(
        namespace="<YOUR CUSTOM NAMESPACE>",
        query="<YOUR METRIC NAME>[1m].mean()",
        start_time=datetime.strptime(
            "2023-10-11T14:38:22.574Z",
            "%Y-%m-%dT%H:%M:%S.%fZ"),
        end_time=datetime.strptime(
            "2023-10-11T17:27:08.471Z",
            "%Y-%m-%dT%H:%M:%S.%fZ"),
        resolution="5m"),
    compartment_id_in_subtree=False)

# Get the data from the response; each item's string form is JSON, so we
# join and parse it (this assumes a single metric stream in the response)
string = ' '.join([str(item) for item in summarize_metrics_data_response.data])
json_object = json.loads(string)
print(json_object["aggregated_datapoints"])

Amend the following inputs as needed for your metric and OCI configuration:

  • <YOUR CUSTOM NAMESPACE>
  • <YOUR COMPARTMENT ID>
  • <YOUR METRIC NAME>

Let’s execute our script:

$ ./query_metric.py
[{'timestamp': '2023-10-11T14:41:00+00:00', 'value': 0.0}, {'timestamp': '2023-10-11T14:46:00+00:00', 'value': 0.016666667}, {'timestamp': '2023-10-11T14:51:00+00:00', 'value': 0.0}, {'timestamp': '2023-10-11T14:56:00+00:00', 'value': 0.0}, {'timestamp': '2023-10-11T15:01:00+00:00', 'value': 0.0}, {'timestamp': '2023-10-11T15:06:00+00:00', 'value': 0.0}, {'timestamp': '2023-10-11T15:11:00+00:00', 'value': 0.0}, {'timestamp': '2023-10-11T15:16:00+00:00', 'value': 0.0}, {'timestamp': '2023-10-11T15:21:00+00:00', 'value': 0.016666667}, {'timestamp': '2023-10-11T15:26:00+00:00', 'value': 0.0}, {'timestamp': '2023-10-11T15:31:00+00:00', 'value': 0.0}, {'timestamp': '2023-10-11T15:36:00+00:00', 'value': 0.0}, {'timestamp': '2023-10-11T15:41:00+00:00', 'value': 0.0}, {'timestamp': '2023-10-11T15:46:00+00:00', 'value': 0.0}, {'timestamp': '2023-10-11T15:51:00+00:00', 'value': 0.016666667}, {'timestamp': '2023-10-11T15:56:00+00:00', 'value': 0.0}, {'timestamp': '2023-10-11T16:01:00+00:00', 'value': 0.0}, {'timestamp': '2023-10-11T16:06:00+00:00', 'value': 0.0}, {'timestamp': '2023-10-11T16:11:00+00:00', 'value': 0.0}, {'timestamp': '2023-10-11T16:16:00+00:00', 'value': 0.0}, {'timestamp': '2023-10-11T16:21:00+00:00', 'value': 0.0}, {'timestamp': '2023-10-11T16:26:00+00:00', 'value': 0.0}, {'timestamp': '2023-10-11T16:31:00+00:00', 'value': 0.0}, {'timestamp': '2023-10-11T16:36:00+00:00', 'value': 0.0}, {'timestamp': '2023-10-11T16:41:00+00:00', 'value': 0.0}, {'timestamp': '2023-10-11T16:46:00+00:00', 'value': 0.0}, {'timestamp': '2023-10-11T16:51:00+00:00', 'value': 0.033333333}, {'timestamp': '2023-10-11T16:56:00+00:00', 'value': 0.0}, {'timestamp': '2023-10-11T17:01:00+00:00', 'value': 0.016666667}, {'timestamp': '2023-10-11T17:06:00+00:00', 'value': 0.0}, {'timestamp': '2023-10-11T17:11:00+00:00', 'value': 0.0}, {'timestamp': '2023-10-11T17:16:00+00:00', 'value': 0.0}, {'timestamp': '2023-10-11T17:21:00+00:00', 'value': 0.016666667}]

Now that we have extracted our metric data set, we can spool it off to a local file or ingest it into a 3rd party tool.
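
For example, appending a few lines like these to the end of query_metric.py spools the extracted datapoints to a local CSV file. This is a minimal sketch; the output file name is arbitrary.

import csv

# Spool the aggregated datapoints to a local CSV file so a 3rd party tool
# can pick them up (json_object comes from the code above)
with open("metric_data.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "value"])
    for dp in json_object["aggregated_datapoints"]:
        writer.writerow([dp["timestamp"], dp["value"]])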

I also want to highlight another blog from Kay Singh about Monitoring metric extraction.



Hope this blog shows you how to extract Monitoring metric data from OCI.

Thanks,
Alfredo

Monitor Oracle Standby Databases In OCI With Custom Metrics

Monitoring Oracle Standby Databases (Data Guard) has always been a tricky task. By their very nature (the instance is not open in read-write mode), it is hard to gather information about them. Even specialized tools like Oracle Enterprise Manager require SYSDBA credentials to effectively monitor them. But what about when running them on OCI?

The Oracle OCI Monitoring service allows us to monitor cloud resources using metrics and alarms.

Oracle OCI Monitoring Service Architecture

In this post I want to show you how you can create a custom metric to monitor the “Apply Lag” on your Oracle Standby database, so you can create an alarm if it crosses a threshold.

I’m going to follow most of the steps detailed by Liu-Wei on this post “https://qiita.com/liu-wei/items/5e8e04f1e58cc6406ca9” .

Step 1 – Prerequisites

Add an API Key

First of all, you will have to designate an OCI user that has the proper permissions to access the Monitoring service metrics and post custom metrics. This could be your account or a service account. Once you have designated this user, log in to the OCI console and choose the region where the Standby DB resides. Then click the Profile icon and click the account name.

Once there, scroll down and click on API Keys from the left menu.

Then click on the Add API Keys button.

Then generate the API Keys, store them in a secure place, and click Add.

This will allow the script to log in to the Monitoring service in order to post custom metric data.

Create an OCI configuration file

For this exercise we will use the API Keys we just generated and create a config file on the host where the Standby DB is running, using the oracle account.

I used the location /home/oracle/.oci in order to store the OCI config file and the private key. You may use another location depending on your internal standards.

Copy the contents of the Configuration File Preview and save them in the configuration file we are creating on the DB host.

This preview already has the correct settings for the user, fingerprint, tenancy and region. However, you should amend the key_file setting, which is the path where your private key file is stored.

For this exercise it will be:

key_file=/home/oracle/.oci/mykey_private.pem

At the end of this, you should have two files on the DB host: the config file and the private key file.

[oracle]$ ls
config mykey_private.pem
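
For reference, the config file follows the standard OCI SDK format. Below is a sketch with placeholder values; yours will carry the user, fingerprint, tenancy and region from the Configuration File Preview.

[DEFAULT]
user=ocid1.user.oc1..<unique_id>
fingerprint=<your_api_key_fingerprint>
tenancy=ocid1.tenancy.oc1..<unique_id>
region=<your_region>
key_file=/home/oracle/.oci/mykey_private.pem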

Setup Python on DB Host

For this exercise we are going to use Python in order to consume the required REST APIs to post the metric data to the Monitoring Service.

Verify the Python installation on the DB Host using the oracle account. Python3 was already installed on this host.

[oracle]$ which python3
/bin/python3

However, we need to install the oci Python module. Before installing it, we need to upgrade pip.

For this, log out from the oracle account and use the opc account. Execute the command below:

[opc]$ sudo pip3 install --upgrade pip

Login again with the oracle account and execute:

[oracle]$ pip3 install -U oci

This should install the oci module correctly.
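
You can quickly verify that the module is importable:

[oracle]$ python3 -c "import oci; print(oci.__version__)"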

Step 2 – Create the Python script

In this step we are going to create the Python script that connects to the Standby DB, gathers the Apply Lag and posts the data to the Monitoring service.

Copy the code below and paste it into a file named post_lag_value.py

#!/usr/bin/python3

# This is a sample python script that posts a custom metric (lag_value) to oci monitoring.
# Run this script on the client that you want to monitor.
# Command: python post_lag_value.py

import oci,subprocess,os,datetime
from pytz import timezone

# using default configuration file (~/.oci/config)
from oci.config import from_file
config = from_file()

# initialize service client with default config file
monitoring_client = oci.monitoring.MonitoringClient(config,service_endpoint="https://telemetry-ingestion.us-ashburn-1.oraclecloud.com")

os.environ['ORACLE_HOME'] = "<YOUR ORACLE HOME>"
os.environ['ORACLE_SID'] = "<YOUR SID>"

def run_sqlplus(sqlplus_script):

    """
    Run a sql command or group of commands against
    a database using sqlplus.
    """

    p = subprocess.Popen(['<YOUR ORACLE HOME>/sqlplus','-s','/nolog'],stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,stderr=subprocess.PIPE)
    (stdout,stderr) = p.communicate(sqlplus_script.encode('utf-8'))
    stdout_lines = stdout.decode('utf-8').split("\n")

    return stdout_lines
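
# The SQL below converts the 'apply lag' day-to-second interval reported in
# v$dataguard_stats into a single value expressed in minutes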

sqlplus_script="""
connect / as sysdba
set heading off
SELECT extract(day from p.val) *1440 + extract(hour from p.val)*60 +
extract(minute from p.val) + extract(second from p.val)/60 lag_minutes
from (SELECT name,to_dsinterval(value) val from v$dataguard_stats where name ='apply lag') p;
exit
"""

sqlplus_output = run_sqlplus(sqlplus_script)

for line in sqlplus_output:
    if line.strip():
        lag_value = float(line)

print(lag_value)

times_stamp = datetime.datetime.now(timezone('UTC'))

# post custom metric to oci monitoring
# replace the compartment_id string with your compartment OCID
post_metric_data_response = monitoring_client.post_metric_data(
    post_metric_data_details=oci.monitoring.models.PostMetricDataDetails(
        metric_data=[
            oci.monitoring.models.MetricDataDetails(
                namespace="<YOUR CUSTOM NAMESPACE>",
                compartment_id="<YOUR COMPARTMENT ID>",
                name="<YOUR METRIC NAME>",
                dimensions={'server_id': '<YOUR SERVER ID>'},
                datapoints=[
                    oci.monitoring.models.Datapoint(
                        timestamp=datetime.datetime.strftime(
                            times_stamp,"%Y-%m-%dT%H:%M:%S.%fZ"),
                        value=lag_value)]
                )]
    )
)

# Get the data from response
print(post_metric_data_response.data)

Amend the following inputs as needed for your DB and OCI configuration:

  • <YOUR ORACLE HOME>
  • <YOUR SID>
  • <YOUR CUSTOM NAMESPACE>
  • <YOUR COMPARTMENT ID>
  • <YOUR METRIC NAME>
  • <YOUR SERVER ID>

One important thing to mention is the ingestion service endpoint. I’m using Ashburn as my region, therefore my ingestion endpoint is “https://telemetry-ingestion.us-ashburn-1.oraclecloud.com”. Yours will differ depending on your region.
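
If you prefer not to hardcode the endpoint, one option is to build it from the region already present in your OCI config file. A sketch, assuming the config region is also the region you want to ingest into:

# Derive the ingestion endpoint from the region in the OCI config file
service_endpoint = "https://telemetry-ingestion.{}.oraclecloud.com".format(config["region"])
monitoring_client = oci.monitoring.MonitoringClient(config, service_endpoint=service_endpoint)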

https://docs.oracle.com/en-us/iaas/api/#/en/monitoring/20180401/

Next, let’s make the post_lag_value.py file executable.

[oracle]$ chmod +x post_lag_value.py

Let’s try our Python script.

./post_lag_value.py 
/home/oracle/.local/lib/python3.6/site-packages/oci/_vendor/httpsig_cffi/sign.py:10: CryptographyDeprecationWarning: Python 3.6 is no longer supported by the Python core team. Therefore, support for it is deprecated in cryptography. The next release of cryptography (40.0) will be the last to support Python 3.6.
  from cryptography.hazmat.backends import default_backend  # noqa: F401
0.0
{
  "failed_metrics": [],
  "failed_metrics_count": 0
}

As you can see from the output of the file, the current lag is “0.0” minutes and the failed_metrics_count is also “0”. This means that we successfully posted this data to the Monitoring service.

Let’s now find out if our custom metric is visible from the OCI console.

Using the hamburger menu navigate to “Observability & Management” and under the Monitoring Service click on Metrics Explorer.

Inside Metrics Explorer choose the correct Compartment, Namespace and metric. Remember that you provided them in the Python script. Verify you can see data in the graph.

The script is now posting Apply Lag data to the monitoring service.

Step 3 – Schedule

Now we need to schedule the execution of our Python script every “x” minutes. For this I’m using a cron job. Follow the instructions in the MOS note below to enable cron: How To Use Crontab In OCI DBCS? (Doc ID 2639985.1)

My Cron looks as follows:

[opc]$ sudo cat /etc/crontab
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root

# For details see man 4 crontabs

# Example of job definition:
# .---------------- minute (0 - 59)
# |  .------------- hour (0 - 23)
# |  |  .---------- day of month (1 - 31)
# |  |  |  .------- month (1 - 12) OR jan,feb,mar,apr ...
# |  |  |  |  .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat
# |  |  |  |  |
# *  *  *  *  * user-name  command to be executed


# System should configure AIDE for Periodic Execution 
05 4 * * * root /usr/sbin/aide --check

*/5 * * * * oracle /home/oracle/post_lag_value.py >> post_lag.log 2>&1

I scheduled this cron job to run every 5 minutes. You may adjust it to your desired frequency.

Step 4 – Create an Alarm

Go to the Monitoring service and create an Alarm using the Alarm Definitions option.

After this, we will have a notification when the Apply Lag is more than 60 minutes in our Standby DB.

This concludes this small exercise of monitoring the Apply Lag for a Standby Oracle Database using the OCI Monitoring service.

Hope this helps,
Alfredo

Maximize Your Exadata Cloud at Customer Efficiency Using Oracle Enterprise Manager 13.5

Oracle Enterprise Manager (EM) has been the go-to tool to monitor and manage on-premises Exadata deployments for quite some time. A quick search in this blog turns up presentations dating back to 2019 about choosing EM 12c or 13c to monitor and manage Exadata. Time has gone by quickly, and now it’s time to discuss a new generation of cloud services.

In recent years, Exadata appliances have evolved into Exadata Cloud Service (ExaCS) and Exadata Cloud @ Customer (ExaCC). Customers have the same monitoring requirements, though, so it is essential to have proper observability mechanisms in place for these cloud services.

This post is meant to guide you through the process of discovering your Exadata Cloud @ Customer service in Oracle Enterprise Manager 13.5.

First things first. You can find all the required information in the EM 13.5 documentation.



One of the questions that I get from customers is, what are the features in EM for your ExaCC deployment? Here are some of them:

  • Automatically identify and organize related targets
  • Visualize the database and related targets associated with ExaCC
  • Monitor performance for ExaCC components like Storage Grids, ExaCC DomU, GI and database targets on a single pane of glass

For more information about these features, refer to the Features section in the Oracle documentation.

Let’s now get into the discovery process. There are some prerequisites you will have to meet before attempting discovery. Please read Part II of the Oracle documentation carefully. Here are some highlights:

  • Create Named Credentials for OCI
    In this step you will generate the required OCI API Signing Keys in order to create an EM Named Credential. Follow the instructions in this section, but also look at the MOS note below:
    EM13.5: Manage OCI Connectivity Named Credentials Test Failed with Invalid Private Key. (Doc ID 2792126.1)
  • Discover Exadata C@C
    In this step we will discover ExaCC in EM. Starting with EM 13.5 RU14 you can discover it using the UI (console); before that, the only available option was the CLI. I strongly recommend applying RU14 before attempting the discovery process.
    Follow the steps in the documentation in order to discover your ExaCC service.

During the discovery process you will have to designate or install an EM agent that will be responsible for monitoring your ExaCC service. This agent must have access to Oracle Cloud Infrastructure (OCI).

Open the Discovery Wizard in EM

On the “Add Targets Manually” page choose the guided process.

Select “Oracle Exadata Infrastructure” and click the “Add” button.

On the “Exadata Cloud Discovery” wizard select the required inputs:

  • Management Agent
  • Backup Agent (optional)
  • Agent’s Named Credential
  • OCI Cloud Home Region
  • OCI Named Credential (created on the pre-reqs section)
  • OCI Subscribed Region (where your tenancy resides)

After this, click on the “Next” button.

In the “Discovery” section of the wizard, EM will discover all the ExaCC-related targets. On this page, set the “Storage Server Credentials” in order to monitor the Storage Servers. Click “Next”.

On the next page a summary will be displayed. Verify all the targets and then click “Submit”.

At this point you will have your ExaCC service discovered in EM. You can see all your ExaCC services by navigating to “Targets” -> “Exadata”.

The “Exadata” dashboard shows all your Exadata appliances and services. Click on the recently discovered ExaCC service.

The “Home Page” shows all ExaCC components along with resource utilization.

Use the “Overview” tab to navigate through all the available monitoring sections. The screenshot below provides insight into “CPU” utilization via the “Resource” section.

Follow the Oracle documentation in order to understand all the available options and sections in the ExaCC Home Page.

At this point your ExaCC is discovered and ready to be managed and monitored.

The next steps are to set up your monitoring metrics, thresholds and rules in order to receive notifications. Also look at the OAS reports available for your ExaCC.

We will cover the integration of this EM monitoring and management of ExaCC with OCI Observability & Management (O&M) services in future posts. O&M services like Operations Insights help you forecast utilization and provide advanced capacity planning insights using machine learning algorithms.

Update!!!: Royce Fu just created a wonderful post about EM and O&M integration for ExaCC. Please find the link below.



Thanks,
Alfredo

How To Monitor Oracle DBs On Docker Containers With EM 13c

This post continues the thread of DevOps and automation for the Oracle Database. During the last couple of months I’ve seen more interest in deploying Oracle Databases on Docker containers, even more so now that Oracle has released full support for RAC databases running on Docker.



After you deploy your first Oracle Database on Docker, several questions come to mind, like:

  • How do I monitor the status of the Oracle Database?
  • How do I manage the performance of the Oracle Database and the SQL’s being executed on it?
  • How do I alert on issues in a timely manner?

Well, if you have the same questions or concerns… you are not alone!

In this post, I explain the process I followed to deploy an Oracle Enterprise Manager agent on a Docker container, and how I monitor and manage the Oracle Database running on it.

First things first. Just a quick recap of my environment.

  • Oracle Linux 7 host running Docker 19.03
  • Create a MACVLAN network on Docker. My network adapter is ens3 on the host running Docker.
sudo docker network create -d macvlan --subnet=192.168.56.0/24 --gateway=192.168.56.1 -o parent=ens3 orcl_nw
  • Create a Docker volume type “nfs” using an NFS share named /NFSSHARE
sudo docker volume create --driver local --opt type=nfs --opt o=addr=<nfs server>,rw,bg,hard,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 --opt device=:/NFSSHARE agentstorage
  • Pull the 19.3 Docker image from container-registry.oracle.com
sudo docker pull container-registry.oracle.com/database/enterprise:19.3.0.0
  • Create the Docker container using the recently pulled image and mounting the volume “agentstorage” into “/u01” inside the container
sudo docker create -t -i \
--name ol7-orcl --hostname ol7-orcl --user oracle --ip 192.168.56.101 \
-e ORACLE_SID=ORCL \
-e ORACLE_PDB=PDB1 \
-v agentstorage:/u01 \
container-registry.oracle.com/database/enterprise:19.3.0.0
  • Disconnect the bridged Docker network from the container
sudo docker network disconnect bridge ol7-orcl
  • Connect the container to the MACVLAN network
sudo docker network connect orcl_nw --ip 192.168.56.101 ol7-orcl
  • Start our Docker container
sudo docker start ol7-orcl
  • Review the startup process and the creation of the database
sudo docker logs -f ol7-orcl

At this point we have our Docker container and the ORCL database up and running. But now I need a way to connect my ORCL database to the external world. Because I’m running a MACVLAN network, I can create another Docker container on the same network and install Oracle Enterprise Manager (EM) there to monitor it. Or I can install Oracle EM on the host running Docker and set up a network link and a route to my Docker container. Let’s go with the second option.

sudo ip link add mydocker-net link ens3 type macvlan mode bridge
sudo ip addr add 192.168.56.1/32 dev mydocker-net
sudo ip link set mydocker-net up
sudo ip route add 192.168.56.0/24 dev mydocker-net

I now have connectivity between my host and the Docker container.

$ ping 192.168.56.101
PING 192.168.56.101 (192.168.56.101) 56(84) bytes of data.
64 bytes from 192.168.56.101: icmp_seq=1 ttl=64 time=0.056 ms
64 bytes from 192.168.56.101: icmp_seq=2 ttl=64 time=0.070 ms
64 bytes from 192.168.56.101: icmp_seq=3 ttl=64 time=0.062 ms
64 bytes from 192.168.56.101: icmp_seq=4 ttl=64 time=0.073 ms
64 bytes from 192.168.56.101: icmp_seq=5 ttl=64 time=0.070 ms

sh-4.2$ ping 192.168.56.1
PING 192.168.56.1 (192.168.56.1) 56(84) bytes of data.
64 bytes from 192.168.56.1: icmp_seq=1 ttl=64 time=0.064 ms
64 bytes from 192.168.56.1: icmp_seq=2 ttl=64 time=0.079 ms
64 bytes from 192.168.56.1: icmp_seq=3 ttl=64 time=0.043 ms
64 bytes from 192.168.56.1: icmp_seq=4 ttl=64 time=0.082 ms
64 bytes from 192.168.56.1: icmp_seq=5 ttl=64 time=0.077 ms

The next step is to set up name resolution between both hosts. For the Docker container it was easy to add the full host name to the external DNS service. For host resolution inside the Docker container you can use static entries in the /etc/hosts file or use the Docker embedded DNS server. For my exercise, I set up an entry in the /etc/hosts file providing the host name and the IP address of the host where EM is running.
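
For illustration, the static entry inside the container would look something like this (the hostname is hypothetical; use the real name and IP of your EM host):

192.168.56.1   emhost.example.com   emhost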

The rest of the process is very straightforward. I use the silent install option for the Oracle EM agent. The link below has the steps to download the agent binaries for your specific platform and version.



Once you have the ZIP file with the agent binaries, copy it into the NFS share and connect to the Docker container.

sudo docker exec -it --user oracle ol7-orcl /bin/sh

After you log in to the container, follow the steps in the provided agent install guide. Update the response file with the required information and run agentDeploy.sh, as sketched below. Make sure the Agent Home resides in the Docker volume we created; for this exercise it will be inside the /u01 directory.
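
The deployment command itself boils down to something like the following (paths are illustrative; check the agent install guide for the exact response file parameters):

$ ./agentDeploy.sh AGENT_BASE_DIR=/u01/agent RESPONSE_FILE=/u01/agent.rsp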

After the agent deployment succeeds, disconnect from the Docker session and reconnect as root.

sudo docker exec -it --user root ol7-orcl /bin/sh

Execute root.sh inside the Agent Home directory.

Let’s now verify that EM has a new host target named ol7-orcl and that it is up and running.

Docker Container ol7-orcl on Oracle Enterprise Manager

The next step is to discover the ORCL database in EM. You can change the DBSNMP password by logging in to the Docker container and then using SQL*Plus to change the password and unlock the account.

alter user dbsnmp identified by <password> account unlock;

After the database discovery, verify the database target in the EM console.

ORCL database home page

Now that you have both the host target and the database target you can set metrics, thresholds and notifications on them.

Another important point: if you have the Diagnostics and Tuning packs, you can also monitor performance, get AWR/ASH reports, and use EM to graphically analyze performance issues.

ORCL performance information

Hope this post helps you understand the options you have to set up the required monitoring for Oracle databases running on Docker containers.

Thanks,
Alfredo

Oracle Enterprise Manager 13c Series – Patching

I wanted to create a video series covering the basics of Oracle Enterprise Manager 13c. This series is designed to provide insights on how to install, configure and manage Oracle Enterprise Manager 13.5.

This week we are going to cover the patching of Oracle Enterprise Manager 13.5.

Oracle Enterprise Manager 13c – Patching

Hope this is useful.

Thanks,
Alfredo

Patch Oracle Databases With Ansible and Enterprise Manager 13c

In a previous post I showed you how to integrate DevOps automation and orchestration tools to provision Oracle databases by leveraging Enterprise Manager (EM) and the Cloud Management Pack (CMP). Once provisioned, databases need to be fully maintained, in terms of monitoring and, above all, patching.

Patching databases decreases the risk of a breach by mitigating vulnerabilities in a timely manner, but it can be a daunting task for organizations. Manual patching is time-consuming and error-prone. Home-grown scripts are difficult to maintain and increase maintenance costs. So the question is: how can I automate the patching process and, even better, integrate it with my current orchestration workflow?

Let me explain how you can achieve all this by making use of Oracle’s Database Lifecycle Management Pack (DBLM) and CMP. DBLM’s Fleet Maintenance helps us patch and upgrade Oracle databases at scale with minimum downtime. It makes use of Gold Images and out-of-place patching to achieve this.

Fleet Maintenance Benefits

All this functionality can be integrated with CMP’s DBaaS in order to provide an end-to-end automation solution. DBaaS exposes REST APIs that we can then call from the automation tool of choice. Database administrators, end users or 3rd party tools can then use these features to patch Oracle databases.

DBaaS Automation Diagram

Do you want to learn more about this and even be able to try it? We’ve created an Oracle LiveLabs workshop that covers all this functionality. The lab will guide you through requesting a PDB, setting up the DBaaS configuration, setting up Fleet Maintenance, and finally patching the PDB.

Follow the link below for the Oracle LiveLabs workshop.



If you are planning on attending Oracle CloudWorld this year and want to learn more about this, consider attending my session.

LRN3519: Deploy and Manage Oracle Databases with Ansible, Oracle Enterprise Manager 

See you in Vegas!!!

Thanks,
Alfredo

Oracle Enterprise Manager 13.5 RU7 Now Available!

Before I start this post I want to give a big shoutout to FeedSpot for ranking my blog in the Top 100 Best Oracle Blogs and Websites. It’s honestly an honor. Since the start of this blog the main goal was and still is to share knowledge with the community.

Being an Oracle ACE Alumni and able to share all this information with you is a privilege.

If you want to take a look at the list click below.



Going back to this post: last week RU7 for Oracle Enterprise Manager (EM) was released. In this post I’ll share information on new features that may make you want to apply this RU as soon as possible.

Oracle EM has a monthly release update model. With every RU there are bug fixes and new features released in the product.

Some of the features that I’m eager to use are:

  • Dynamic Runbooks. They allow you to add your SOPs to EM for consistent incident resolution.
  • Migration Workbench TTS support. Migration Workbench now supports the option to control the incremental backups and the final migration phase.
  • Fleet Maintenance scheduling enhancement. Now EM allows you to better control the Fleet Maintenance scheduling steps.
  • New STIG standard version. STIG version 2 Release 3 is now available in RU7.

For additional information on this release update go to below Oracle blog:



The patching procedure is very straightforward, and I will cover it in the next post.

In the meantime I strongly recommend reviewing the latest Oracle EM documentation and getting familiar with the process.

Happy patching,
Alfredo

Improve Oracle Database Security With Enterprise Manager 13c

IT security is a popular topic nowadays! We constantly hear news about data breaches, ransomware, malware, unauthorized access to IT systems, etc. IT organizations are constantly looking to keep their systems, networks and data safe and secure.

Today’s blog is about how Oracle Enterprise Manager (EM) can help Database Administrators to secure and harden the Oracle Databases they manage along with the hosts those databases are running on.

First things first. I strongly recommend reviewing the Oracle Database 19c Security Guide. It provides guidelines on how to secure the Oracle Database, harden DB access, secure and encrypt DB data, and so on.



Now let’s discuss some areas that database administrators should also look at in order to improve their security posture:

  • Timely apply security patches
  • Monitor database configuration and detect misconfigurations
  • Use industry and regulatory standards like STIG and CIS for the Oracle Database

All the features that we will be discussing today are part of the Oracle Database Lifecycle Management pack. This pack requires an additional license.

Timely apply security patches

Fleet Maintenance (FM) enables administrators to automate lifecycle activities such as database patching and upgrades. FM uses a gold image subscription-based model that allows you to patch databases with minimum downtime through out-of-place patching. In-place patching is also available if you need to apply an emergency one-off patch.

Administrators have the ability to customize the patching process by adding custom pre/post scripts to patching operations. FM supports single instance, RAC databases, Grid Infrastructure, Multitenant and Data Guard configurations.

One thing to mention is the ability to get security patch recommendations as soon as they are published. EM connects to My Oracle Support (MOS) and checks for the availability of new security patches. As soon as a new security patch is released, EM will let you know whether your DB estate is compliant with respect to these patches.



Monitor database configuration and detect misconfigurations

Configuration and Drift Management helps you monitor the configuration of your DB estate, the hosts those DBs are running on, and the Oracle Homes (OH) of those installations. EM allows you to create your own configuration templates based on the configuration settings you need to enforce. Any misconfiguration or drift away from your template will be automatically reported via the Drift Management dashboard, and you can also receive alerts if you choose to.

Corrective Actions (CA) can also be created to automatically fix these misconfigurations in order to comply with the templates and reduce security risks.

How many times have administrators issued an ALTER SYSTEM command with SPFILE scope and forgotten about it? Well, you will find out the next time you bring your DB up after maintenance. EM helps you detect these changes before they become a production issue. It also helps you track the history of configuration changes, save configuration information at a given point in time, and compare that configuration information between targets.
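
For example, a change like the following (the parameter and value here are hypothetical) sits silently in the SPFILE until the next restart:

ALTER SYSTEM SET processes=600 SCOPE=SPFILE;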

Have you wondered how many OHs have a specific one-off patch applied?

How many DBs are running on a specific OS version?

Well, EM can help you answer all these questions using this configuration data.

One thing worth mentioning is that EM comes with hundreds of configuration collections. If you need to gather a very specific configuration item that is not available out of the box, you can create your own configuration extension and collect it automatically.



Use industry and regulatory standards like STIG and CIS for the Oracle Database

EM provides compliance standards to help customers meet regulatory standards like STIG and CIS. Oracle’s best practices are also included within the compliance framework. There are two available options for analysis.

  • Rule based analysis
  • Real-time change

Each option allows administrators to understand where attention needs to be focused in order to harden the DB estate.

Using the compliance framework, EM will provide a score to each associated target along with all the violations that need to be remediated after each evaluation.



I also want to provide links to the Oracle LiveLabs workshops that cover the features discussed above.

Thanks,
Alfredo