Category Archives: Oracle Scripts and Commands

How To Purge E-Mail Notifications From The Workflow Queue So The E-Mail Is Not Sent

1) Verify the current status of each notification in the WF_NOTIFICATIONS table that could be sent once the Java Mailer is started.

SQL> select notification_id, recipient_role, message_type, message_name, status, mail_status
     from wf_notifications
     where status in ('OPEN', 'CANCELED')
     and mail_status in ('MAIL', 'INVALID')
     order by notification_id;

Normally, only records with STATUS = 'OPEN' and MAIL_STATUS = 'MAIL' would be sent, but some programs can also retry CANCELED or INVALID notifications, so those are included as well.
This query shows which notifications are waiting to be e-mailed.

2) Use BEGIN_DATE in the WHERE clause to narrow down, by date range, which e-mails should not be sent by the Mailer.

For example :

SQL> select notification_id, begin_date, recipient_role, message_type, message_name, status, mail_status
     from wf_notifications
     where status in ('OPEN', 'CANCELED')
     and mail_status in ('MAIL', 'INVALID')
     and begin_date < sysdate-30    -- list only e-mails older than 30 days
     order by notification_id;

3) To prevent a notification from being e-mailed, set MAIL_STATUS = 'SENT' and then rebuild the Mailer queue using wfntfqup.sql.
The Mailer will consider the e-mail already sent and will not send it again.
Note: users can still respond to all of these notifications from the Worklist page in the applications.

Example:

SQL> update wf_notifications set mail_status = 'SENT'
     where mail_status in ('MAIL', 'INVALID')
     and status in ('OPEN', 'CANCELED');

(Remember to include any other filters you need, such as begin_date < sysdate-30.)

This updates all notifications currently waiting to be sent by the Mailer to SENT, so they will not be e-mailed when the Mailer is restarted.

4) Run the script $FND_TOP/patch/115/sql/wfntfqup.sql to purge the WF_NOTIFICATION_OUT queue and rebuild it from the current data in the WF_NOTIFICATIONS table. This purges all notifications waiting in the queue to be sent and then re-enqueues the eligible rows.
Since you have changed MAIL_STATUS to 'SENT', those messages will not be enqueued again. Only rows where MAIL_STATUS = 'MAIL' and STATUS = 'OPEN' will be placed in the WF_NOTIFICATION_OUT queue and sent by the Mailer (or CANCELED and INVALID rows, if certain concurrent programs are run).

Example:

$ sqlplus apps/apps@db @$FND_TOP/patch/115/sql/wfntfqup.sql apps apps applsys
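After the rebuild completes, you can sanity-check what was re-enqueued. As a sketch, assuming the standard Advanced Queuing view naming for this queue table (APPLSYS.AQ$WF_NOTIFICATION_OUT; verify the exact name in your instance), a query like this shows the message counts by state:

```sql
-- Hedged sketch: count messages in the rebuilt WF_NOTIFICATION_OUT queue,
-- grouped by AQ message state. Notifications you flipped to SENT should
-- no longer appear here.
select msg_state, count(*)
from   applsys.aq$wf_notification_out
group  by msg_state;
```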


5) Now start the Workflow Java Mailer.

Reference : MOS Doc ID 372933.1


Auto Update Excel Sheet For DBAs

An Oracle DBA's job always involves daily health checks of the database and application, preparing a report of the results, and sending it to your supervisors.

I experimented with automating part of this and it worked out, so I am sharing it here. In this example I check the server mount point usage and update it automatically in Excel. The same approach can be replicated for the output of any SQL statement.

Below are the steps to automatically update an Excel file from text (spool) files.

1. Generate the spool file:

script -q -c 'df -h' db_space.txt (works on Linux; check the equivalent for other operating systems)

2. Move the spool file to your local machine.
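Steps 1 and 2 can be sketched as a small shell script. A plain redirect is a simpler alternative to script(1) when you only need the command output; the user and host names in the copy step are placeholders, not real values:

```shell
#!/bin/sh
# Capture mount point usage into a spool file (plain redirect instead of script(1)).
SPOOL=db_space.txt
df -h > "$SPOOL"

# Copy the spool file to the machine holding the Excel workbook.
# "dbauser" and "workstation" are placeholder names -- substitute your own.
# scp "$SPOOL" dbauser@workstation:/path/to/reports/
echo "Spool written to $SPOOL ($(wc -l < "$SPOOL") lines)"
```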

3. Open the Excel file and set up the import:
1. Click the "Data" menu option.
2. Click "From Text".
3. Browse to and select the spool file from above.
4. Choose the "Delimited" option and click Next.
5. Select space as the delimiter, click Next, then click Finish.
6. Click Properties; tick "Refresh data when opening file"; untick "Adjust column width"; tick "Preserve cell formatting"; select "Overwrite existing cells with new data, clear unused cells"; then click OK, OK.
7. Format the table as required and save the file.

Now, every time you replace the spool file with the latest values, the Excel sheet is updated automatically when you open it.

How to check from backend if frontend Logging is Enabled

You may at times face a rapidly growing udump directory. One possible reason is that debug logging has been enabled from the front end at the user or site level.

Apart from checking manually in the Forms interface, the following SQL queries will help list the same:

1. This displays the values of the profile options "FND: Debug Log Enabled" (AFLOG_ENABLED) and "Initialization SQL Statement - Custom" (FND_INIT_SQL) at each level. If the value is 'Y' and/or contains a SQL statement, then logging and/or tracing is enabled at that level (which could be site, user, etc.):

set pagesize 200 linesize 200
col NAME for a25
col LEV for a6
col CONTEXT for a25
col VALUE for a50
col USER_PROFILE_OPTION_NAME for a37
select po.profile_option_name "NAME",
       po.user_profile_option_name,
       decode(to_char(pov.level_id),
              '10001', 'SITE',
              '10002', 'APP',
              '10003', 'RESP',
              '10005', 'SERVER',
              '10006', 'ORG',
              '10004', 'USER', '???') "LEV",
       decode(to_char(pov.level_id),
              '10001', '',
              '10002', app.application_short_name,
              '10003', rsp.responsibility_key,
              '10005', svr.node_name,
              '10006', org.name,
              '10004', usr.user_name,
              '???') "CONTEXT",
       pov.profile_option_value "VALUE"
from   fnd_profile_options_vl po,
       fnd_profile_option_values pov,
       fnd_user usr,
       fnd_application app,
       fnd_responsibility rsp,
       fnd_nodes svr,
       hr_operating_units org
where  (po.profile_option_name like '%AFLOG_ENABLED%'
        or po.profile_option_name like '%FND_INIT_SQL%')
and    pov.application_id = po.application_id
and    pov.profile_option_id = po.profile_option_id
and    usr.user_id (+) = pov.level_value
and    rsp.application_id (+) = pov.level_value_application_id
and    rsp.responsibility_id (+) = pov.level_value
and    app.application_id (+) = pov.level_value
and    svr.node_id (+) = pov.level_value
and    org.organization_id (+) = pov.level_value
order  by "NAME", pov.level_id, "VALUE";

2. This displays all profile options, with their values and levels, where the word TRACE, LOG, or DEBUG appears in the PROFILE_OPTION_NAME:
set pagesize 200
col NAME for a25
col LEV for a6
col CONTEXT for a25
col VALUE for a50
col USER_PROFILE_OPTION_NAME for a37
select po.profile_option_name "NAME",
       po.user_profile_option_name,
       decode(to_char(pov.level_id),
              '10001', 'SITE',
              '10002', 'APP',
              '10003', 'RESP',
              '10005', 'SERVER',
              '10006', 'ORG',
              '10004', 'USER', '???') "LEV",
       decode(to_char(pov.level_id),
              '10001', '',
              '10002', app.application_short_name,
              '10003', rsp.responsibility_key,
              '10005', svr.node_name,
              '10006', org.name,
              '10004', usr.user_name,
              '???') "CONTEXT",
       pov.profile_option_value "VALUE"
from   fnd_profile_options_vl po,
       fnd_profile_option_values pov,
       fnd_user usr,
       fnd_application app,
       fnd_responsibility rsp,
       fnd_nodes svr,
       hr_operating_units org
where  (po.profile_option_name like '%TRACE%'
        or po.profile_option_name like '%DEBUG%'
        or po.profile_option_name like '%LOG%')
and    pov.application_id = po.application_id
and    pov.profile_option_id = po.profile_option_id
and    usr.user_id (+) = pov.level_value
and    rsp.application_id (+) = pov.level_value_application_id
and    rsp.responsibility_id (+) = pov.level_value
and    app.application_id (+) = pov.level_value
and    svr.node_id (+) = pov.level_value
and    org.organization_id (+) = pov.level_value
order  by "NAME", pov.level_id, "VALUE";
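For a quick check of the effective value in your own session (as opposed to the per-level breakdown above), the standard FND_PROFILE API can be used. Note that this resolves the profile for the current session context only, not for every level:

```sql
-- Effective value of "FND: Debug Log Enabled" for the current session.
-- 'Y' means debug logging is enabled somewhere in the profile hierarchy
-- that applies to this session.
select fnd_profile.value('AFLOG_ENABLED') from dual;
```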

Oracle Apps Purge Log/Out Commands

The following commands will help you regularly purge log and out files in Oracle EBS R12.

1. Reports cache logs: retention period 5 days
/usr/bin/find $LOG_HOME/ora/10.1.2/reports/cache/ -type f -mtime +5 -exec rm {} \;

2. Apache logs: retention period 7 days
/usr/bin/find $LOG_HOME/ora/10.1.3/Apache/ -type f -mtime +7 -exec rm {} \;

3. Concurrent manager log files: retention period 30 days
/usr/bin/find $LOG_HOME/appl/conc/log/ -type f -mtime +30 -exec rm {} \;

4. Concurrent manager out files: retention period 30 days
/usr/bin/find $LOG_HOME/appl/conc/out/ -type f -mtime +30 -exec rm {} \;

5. Appltmp files: retention period 30 days
/usr/bin/find $APPLTMP/ -type f -mtime +30 -exec rm {} \;

6. OPMN logs: retention period 7 days
/usr/bin/find $LOG_HOME/ora/10.1.3/opmn/ -type f -mtime +7 -exec rm {} \;

(-type f is added so that only plain files are passed to rm, never directories.)
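The six commands above can be collected into one housekeeping script, sketched below. The helper name purge_older_than is my own invention, the retention values mirror the list above, and you should dry-run with -print in place of -exec rm first:

```shell
#!/bin/sh
# Sketch of a housekeeping wrapper around the find commands above.
# purge_older_than <days> <directory> -- removes plain files older than <days>.
purge_older_than() {
    days=$1
    dir=$2
    [ -d "$dir" ] || return 0    # skip silently if the path does not exist
    /usr/bin/find "$dir" -type f -mtime +"$days" -exec rm {} \;
}

purge_older_than 5  "$LOG_HOME/ora/10.1.2/reports/cache"
purge_older_than 7  "$LOG_HOME/ora/10.1.3/Apache"
purge_older_than 30 "$LOG_HOME/appl/conc/log"
purge_older_than 30 "$LOG_HOME/appl/conc/out"
purge_older_than 30 "$APPLTMP"
purge_older_than 7  "$LOG_HOME/ora/10.1.3/opmn"
```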

 

You can also run the following concurrent program from the System Administrator responsibility:

Purge Concurrent Request and/or Manager Data Program