Introducing Personal Oracle Database User Manager

Personal Oracle Database User Manager is a self-service application that allows users to reset their own passwords and unlock their own accounts. The application assumes that a user’s LDAP username matches their Oracle Database username, so it is able to verify the user’s identity by authenticating against LDAP. If authentication is successful, the application displays a list of database accounts the user can manage.

Detailed installation and configuration instructions are available on our Bitbucket Git repository site: https://bitbucket.org/shadandy/podbum

Personal Oracle Database User Manager is free to install and use.

If you do install and use PODBUM, please let us know how it is working for you!

Sample Screenshots

Login Page
Home Page
Password Reset
Account Unlock

Ansible for Oracle Database Clones

A frequent job requirement for all Oracle DBAs is cloning a database or an entire application environment for development or testing purposes. In previous years, many DBAs performed manual clones by cutting and pasting commands stored in text documents. Craftier DBAs learned that their time was better spent automating the task with shell scripts. While better than manual cloning, shell scripts were usually inconsistent or did not handle errors very well. These days, there are several configuration and orchestration tools that can be leveraged to perform environment clones or refreshes. One such tool is Ansible®, and using Ansible for Oracle Database clones is very intuitive for the typical DBA or sysadmin mindset.

Shadandy has been using Ansible to clone databases for years. It can be used to clone standalone on-premises databases as well as Amazon RDS for Oracle instances. It can also clone entire Oracle E-Business Suite environments spanning multiple nodes.

In one instance, Shadandy was able to help a customer who was having problems with the DBA resources provided by one of the huge offshore/outsourcing companies. We will not say which one, but our experience dealing with any of the offshore companies has been similar. The offshore DBAs were manually executing clone steps that had been spoon-fed to them, but the environment refreshes were never performed consistently. Why automate when you can bill more hours to sit there and cut-n-paste? Then get paid to do it again when things were not done correctly the first time.

Our solution was to remove the offshore DBAs from the equation by automating all the steps they were cutting and pasting into a single, cohesive Ansible playbook. Do the playbooks require upkeep and maintenance? Yes. Does maintaining the Ansible playbooks require as much time as watching a database clone for hours on end? No. Are environment refreshes performed in a more timely and consistent manner? Yes. Can they be scheduled to execute automatically? Yes. What’s not to like, unless you are an offshore company getting paid for time a butt is in a seat?

Our typical Ansible automation framework starts with a host inventory file for each environment. A separate inventory file for each environment avoids mistakenly running a command against all the environments at the same time. An example inventory file for an OADEV1 EBS environment is something like this:

[oa_all_web]
web1.yourdomain.net
web2.yourdomain.net

[oa_all_ext]
ext1.yourdomain.net
ext2.yourdomain.net

[oa_all_ccm]
ccm1.yourdomain.net
ccm2.yourdomain.net

[oa_all_db]
oadev1db.yourdomain.net

[oa_all:children]
oa_all_ext
oa_all_web
oa_all_ccm
oa_all_db

[oadev_all:children]
oa_all

[oadev1:children]
oa_all

The inventory allows the Ansible playbook to perform different operations for internal web nodes, external web nodes, concurrent manager nodes, and the database server. If your environment contains other third-party applications such as Kofax MarkView, the inventory can be crafted to handle them.
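As a hedged sketch of how those groups can drive different operations (the play and task content here is invented for illustration, not Shadandy's actual refresh playbooks):

```yaml
---
# Hypothetical sketch only: two plays keyed off the inventory groups above,
# so the database node and the application nodes get different tasks in one run.
- name: Database-tier steps run only on the database node.
  hosts: oa_all_db
  tasks:
    - name: Show which host the database play targets.
      debug:
        msg: "Database play running on {{ inventory_hostname }}"

- name: Application-tier steps run on the web, external, and CCM nodes.
  hosts: oa_all_web:oa_all_ext:oa_all_ccm
  tasks:
    - name: Show which hosts the application play targets.
      debug:
        msg: "Application play running on {{ inventory_hostname }}"
```

Because the inventory file is per-environment, the same playbook can be pointed at OADEV1 or any other environment just by swapping the `-i` argument.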

Additional group configuration files drive what Ansible does as it connects to each node. For an OADEV1 environment, the Ansible playbook reads variables from the following files:

group_vars/oadev1
group_vars/oadev_all
group_vars/oa_all/custom_top_dirs
group_vars/oa_all/dba_directories
group_vars/oa_all/oa_all_default
group_vars/oa_all/oa_all_vault
group_vars/oa_all/stage_dirs

Every environment will be a little different, but you can see the flexibility of storing configurations for each environment. An added perk is that Ansible provides the ansible-vault utility to encrypt sensitive files.
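To illustrate, a group_vars/oadev1 file might hold environment-specific values like these. The values are invented; only the variable names appl_os_user, appl_shared, and env_name come from the playbooks shown below:

```yaml
# Hypothetical group_vars/oadev1 content -- values are made up for illustration.
env_name: OADEV1
appl_os_user: applmgr
appl_shared: /u01/app/oadev1
```

Sensitive values such as the apps password would live in the vault file instead, encrypted with ansible-vault.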

While we are not going to provide the exact playbook code, an EBS environment refresh can be broken down into individual playbooks and/or tasks. Here is an example ebs_autoconfig_all.yml playbook that calls a task to run Autoconfig on all application nodes:

---
- include: ebs_autoconfig_ext.yml
- include: ebs_autoconfig_web.yml
- include: ebs_autoconfig_ccm.yml

Breaking down the playbook above, the ebs_autoconfig_web.yml playbook runs Autoconfig on all internal web nodes:

---
- name: Playbook used to run Autoconfig on the internal web nodes of an environment.
  hosts: "oa_all_web"
  any_errors_fatal: true
  user: "{{ appl_os_user }}"
  vars:
    apps_pass: "{{ vault_ebs_environment[env_name|lower]['database']['credentials']['apps'] }}"
  tasks:
    - include: tasks/ebs_app_run_autoconfig.yml

The actual tasks/ebs_app_run_autoconfig.yml Autoconfig task is:

---
- name: Run Autoconfig.
  shell: '. ~/.profile; . {{ appl_shared }}/apps/apps_st/appl/APPS{{ env_name }}_{{ ansible_hostname }}.env; perl ${AD_TOP}/bin/adconfig.pl contextfile=${CONTEXT_FILE} appspass={{ apps_pass }} -parallel'
  no_log: True

To execute the Autoconfig playbook, a DBA can simply execute a single command:

ansible-playbook -i hosts.oadev1 ebs_autoconfig_all.yml

Similar tasks and playbooks can be executed for SQL scripts, RMAN duplicates, SAN snapshots, and all other operations required to perform a database or application clone in your environment. Leveraging Ansible for automation, Shadandy was able to reduce a 10-node EBS environment refresh from one or more days (if done correctly the first time) to 3-4 hours. We even used Ansible to fully automate their database and EBS code deployments.
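As mentioned above, refreshes can also be scheduled to execute automatically. A hypothetical cron entry (the ebs_refresh_all.yml playbook name and the paths are invented for illustration) might look like:

```shell
# Hypothetical crontab entry: kick off the OADEV1 refresh every Friday at 6 PM.
0 18 * * 5 cd /home/applmgr/ansible && ansible-playbook -i hosts.oadev1 ebs_refresh_all.yml >> /var/log/oadev1_refresh.log 2>&1
```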

If your company requires database or EBS environment configuration or clone automation, please contact Shadandy to see how we can be of assistance.

How To Add dbForge Schema Compare and Data Compare Launcher Shortcuts to Oracle SQL Developer

I tested several Oracle database schema and data comparison tools before deciding to purchase the dbForge Compare Bundle for Oracle. One of the convenient features of the comparable Red Gate Deployment Suite for Oracle is a SQL Developer extension that adds toolbar buttons to launch the applications. After looking into the Oracle SQL Developer Tools menu, I found that adding launchers for external applications is fairly easy.

I recorded a YouTube video showing the configuration steps necessary to add dbForge Data and Schema Compare launchers to the SQL Developer toolbar. Enjoy.

Oracle Database Schema and Data Comparison Tools Review

Oracle databases are complex enough without active development, testing, and patching taking place. In most environments where custom applications are being developed or out-of-the-box database code is being extended, the database code and data between environments inevitably deviate or become corrupted over time.

Normally, whenever an environment is in an unknown state, I say, “Refresh the environment from production.” Sometimes that reply is not acceptable to the development or project team because code may not have been checked into source control or the refresh might wipe out too many other things, pushing back the project timeline. More often than not, it’s because the developers do not know what they’ve done or how to get the system back to where they want it. 🙂

Challenges such as these led me down the path of investigating schema comparison tools. Several vendors and open source projects provide such tools, and I tried out a few of them using the following criteria:

  • Reasonable price (less than $400 USD).
  • Easy to use.
  • Compare as many database schema object types as possible.
  • Perform accurate comparisons.
  • Perform quick comparisons.
  • Command line interface in case automation requirements ever arise.

After defining the requirements, I searched online for available tools to test. If a demo or trial product was unavailable online or the product cost more than $1000.00, I did not test it or include it in the list. From there, products dropped out of the list as they failed to meet the criteria. In the end, Red Gate Schema Compare for Oracle and dbForge Schema Compare for Oracle proved comparable. The accuracy and speed of the two finalists were measured by comparing the thousands of PL/SQL objects in an Oracle E-Business Suite database. The comparison took quite a while, but both products were able to do it. There were some discrepancies between the differences reported by each tool, but they were not material.

Product Name                               Price (USD)  Ease of Use  Object Types  Accurate  Fast  Command Line
Altova DatabaseSpy                         $161.25      Fail
Datanamic SchemaDiff for Oracle            $299.00      Fail
dbForge Schema Compare for Oracle          $229.95      Pass         Pass          Pass      Pass  Pass
DbSchema Database ER Diagram Design Tool   $127.00      Fail
DBSolo Schema Comparison Tool              $129.00      Fail
EMS DB Comparer for Oracle                 $133.00      Fail
Red Gate Schema Compare for Oracle         $495.00      Pass         Pass          Pass      Pass  Pass
TOra                                       Free         Pass         Fail
TOYS                                       Free         Pass         Fail

After narrowing down to the schema compare finalists, I also tested the data compare tools from Red Gate and Devart. Both data comparison tools performed comparably in my testing.

In the end, the choice came down to pricing between Red Gate and Devart. I contacted Red Gate to see if they could be competitive with the Devart dbForge pricing, but they were not able to match the price. I am guessing that Red Gate has a bigger marketing department and more overhead due to being in the UK, while Devart does not seem to have as big of a web presence and is based in Ukraine.

Both tools get the job done, but I opted for the dbForge Compare Bundle for Oracle costing $289.95 USD.

If you have a different opinion about any of the products, please let me know. I rapidly tested and dropped several of the products from the list because they were not intuitive to me or required too much setup for a quick test, but it could easily have been my own impatience or ignorance. 🙂

Rotate and Purge Oracle Log Files

Every running Oracle installation has several directories and files that need to be rotated and/or purged. Surprisingly, or not, Oracle has not included this basic maintenance in their software. I have come across the oraclean utility in the past, but the script does not do everything I need.

To achieve what I required, I recently hacked together a single script that does the following things:

  • Cleans audit_dump_dest.
  • Cleans background_dump_dest.
  • Cleans core_dump_dest.
  • Cleans user_dump_dest.
  • Cleans Oracle Clusterware log files.
  • Rotates and purges alert log files.
  • Rotates and purges listener log files.

The script has been tested on Solaris 9 and 10 with Oracle database versions 9i and 10g. It has also been tested with Oracle Clusterware and ASM 11g. The script can be scheduled on each server having one or more Oracle homes installed, and it will clean all of them up using the retention policy specified. The limitation is that log file retention is specified per server, not per instance. However, I find that placing a single crontab entry on each database server is easier than setting up separate log purge processes for each one.
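A minimal sketch of the per-directory retention logic, assuming cleanhouse.sh does something similar internally (the function name and file patterns are illustrative, not taken from the script):

```shell
# clean_dest: delete aged Oracle trace/audit files under one dump destination.
# Illustrative only -- the real cleanhouse.sh handles many more cases.
clean_dest() {
  dest="$1"   # e.g. a background_dump_dest or audit_dump_dest directory
  days="$2"   # retention in days; -mtime +N matches files older than N days
  find "$dest" -type f \( -name '*.trc' -o -name '*.trm' -o -name '*.aud' \) \
    -mtime +"$days" -exec rm -f {} \;
}
```

A wrapper would call this once per destination, e.g. `clean_dest /u01/app/oracle/admin/ORCL/bdump 14`.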

The script finds all unique Oracle Homes listed in the oratab file and retrieves the list of running Oracle instances and listeners. Once the script knows that information, it rotates and cleans the trace, dump, and log files.
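The oratab scan can be sketched roughly like this (a hedged approximation, not the script's actual code). Oratab entries have the form SID:ORACLE_HOME:Y|N, with comment lines starting with #:

```shell
# get_oracle_homes: print the unique ORACLE_HOME values from an oratab file.
# Sketch only; cleanhouse.sh also cross-references running instances/listeners.
get_oracle_homes() {
  oratab="$1"
  # Skip comments and malformed lines, print field 2 (the home), de-duplicate.
  awk -F: '$0 !~ /^[[:space:]]*#/ && NF >= 2 { print $2 }' "$oratab" | sort -u
}
```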

Download: cleanhouse.sh

Usage: [bash]cleanhouse.sh -d DAYS [-a DAYS] [-b DAYS] [-c DAYS] [-n DAYS] [-r DAYS] [-u DAYS] [-t] [-h]
-d = Mandatory default number of days to keep log files that are not explicitly passed as parameters.
-a = Optional number of days to keep audit logs.
-b = Optional number of days to keep background dumps.
-c = Optional number of days to keep core dumps.
-n = Optional number of days to keep network log files.
-r = Optional number of days to keep clusterware log files.
-u = Optional number of days to keep user dumps.
-t = Optional test mode. Does not delete any files.
-h = Optional help mode.[/bash]
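The script can then be scheduled as that single crontab entry. A hypothetical entry keeping most files 14 days but audit logs 90 days (the install path is invented) might be:

```shell
# Hypothetical crontab entry -- adjust the path and retention days to taste.
0 1 * * * /usr/local/scripts/cleanhouse.sh -d 14 -a 90 > /tmp/cleanhouse.log 2>&1
```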

How To Download Patches From Oracle E-Delivery

Have you ever wanted to automatically download all of the patches on an Oracle E-Delivery page? We have, and we thought this script might be useful to others.

Script: get_edelivery.sh

Prerequisites:

  • Firefox Browser on your desktop
  • Export Domain Cookies Firefox Plugin

Instructions:

  • Visit Oracle E-Delivery.
  • Select the software you would like to download and navigate to the patch download page.
  • Note the “egroup_aru_number” from the URL.
    • For example, the number is 11249872 for the URL below: [text]http://edelivery.oracle.com/EPD/Download/get_form?egroup_aru_number=11249872[/text]
  • Export your browser cookies using the “Export Domain Cookies” Firefox Plugin.
  • Copy your cookies file to the server.
  • Run the download script from the directory where you are going to download the patches.
    • [bash]get_edelivery.sh 11249872 cookies.txt[/bash]
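If you want to script the URL step as well, the number can be pulled out with a small helper. This is a hedged sketch; the actual get_edelivery.sh may parse things differently:

```shell
# get_aru_number: extract the egroup_aru_number value from an E-Delivery URL.
# Illustrative helper, not part of the published script.
get_aru_number() {
  printf '%s\n' "$1" | sed -n 's/.*egroup_aru_number=\([0-9]\{1,\}\).*/\1/p'
}
```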

The script will download all of the patches available via “Download” buttons on the E-Delivery page. At the very end, it will perform a checksum on each file. If a file fails, you can delete the corrupt file and run the script again.
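The checksum pass at the end can be sketched like this, assuming an md5 digest file sits alongside the downloads. The function name, digest file layout, and use of GNU md5sum are all assumptions for illustration; the real script's checksum handling may differ:

```shell
# verify_downloads: check every downloaded file against a digest file of
# "md5sum  filename" lines. Returns nonzero if any file fails, so a wrapper
# can tell the user which corrupt files to delete and re-download.
verify_downloads() {
  sumfile="$1"
  ( cd "$(dirname "$sumfile")" && md5sum -c "$(basename "$sumfile")" )
}
```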

How To Move Oracle Online Redo Log Files

Issue


You have online redo log files in the wrong place and want to move them.

Solution


  1. Run the following SQL to locate the current redo log groups and their members:
     [sql]select l.group# group_number
        , l.status group_status
        , f.member group_member
        , f.status file_status
     from v$log l
        , v$logfile f
     where l.group# = f.group#
     order by l.group#, f.member;[/sql]
  2. Find a location that can contain the new log files.
  3. Create the new logfile groups using the following SQL. Note that there must be at least two logfile groups remaining after dropping the unnecessary ones.
     [sql]sqlplus /nolog
     connect / as sysdba
     alter database add logfile ('/path/to/new/redo_01_01.log','/path/to/new/redo_01_02.log') size 150M;
     alter database add logfile ('/path/to/new/redo_02_01.log','/path/to/new/redo_02_02.log') size 150M;[/sql]
  4. Run the SQL from Step 1 to determine which logfile group is CURRENT.
  5. Run the following command to switch log files. After running the SWITCH command, run the SQL from Step 1 again. Repeat the process until one of the newly created logfile groups has a CURRENT status.
     [sql]alter system switch logfile;[/sql]
  6. After one of the newly created logfile groups is CURRENT, run the following SQL. You should only drop logfile groups that have an INACTIVE status.
     [sql]alter database drop logfile group GROUP#;[/sql]
  7. Repeat Step 6 for each undesired logfile group.
  8. Delete the old redo log files from the file system.

References

Oracle® Database Administrator’s Guide – Managing the Redo Log