Oracle database health checks

Checking for any down data files:

select file_name, tablespace_name, status, bytes/1000000 as mb, maxbytes/1000000 as mb_max 
from dba_data_files 
where status<>'AVAILABLE';

Checking for any down control files:

select name, status
from v$controlfile 
where status is not null;

Checking for any down tablespaces:

select tablespace_name, status, contents, extent_management
from dba_tablespaces
where status<>'ONLINE';
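
If this check turns up an offline tablespace, bringing it back (assuming its data files are healthy) is a one-liner; USERS below is an example name:

alter tablespace users online;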

Checking to see if a tablespace is getting close to max size (less than 10% free):

select total.name as tblspace,
  nvl(free_space, 0) as free,
  nvl(total_space-free_space, 0) as used,
  total_space,
  round(nvl(free_space,0)/total_space,2)*100 as pct_free
from 
  (
    select tablespace_name, sum(bytes/1024/1024) as free_space 
    from sys.dba_free_space 
    group by tablespace_name
  ) free, 
  (
    select b.name, sum(bytes/1024/1024) as total_space
    from sys.v_$datafile a, sys.v_$tablespace b 
    where a.ts# = b.ts#
    group by b.name
  ) total 
where free.tablespace_name(+) = total.name 
and total.name='TABLESPACE_NAME_HERE' 
and round(nvl(free_space, 0)/total_space,2)<=0.10;
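
When a tablespace does run short on space, the usual remedies are resizing one of its data files or letting the file autoextend. Both statements below are sketches; the file path and sizes are placeholders:

alter database datafile 'd:\oradata\orcl\users01.dbf' resize 2048m;
alter database datafile 'd:\oradata\orcl\users01.dbf' autoextend on next 100m maxsize 4096m;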

Suspicious that someone has stolen a system password and is logging in remotely? Use this to check for SYS/SYSTEM logins from machines you do not expect: list your known, trusted machines, and any session from a machine not on that list is flagged.

select username, osuser, program, machine 
from sys.v_$session 
where upper(machine) not in ('LIST_VALID_MACHINES_HERE','DBA_PC','BOSS_PC') 
and upper(username) in('SYS','SYSTEM');

Identify database locks

This select statement can be run by someone with DBA privileges (as it makes use of v$ views) to list who is locking what.

select a.session_id,
  c.serial#,
  a.oracle_username,
  a.os_user_name,
  c.machine,
  a.locked_mode,
  b.object_name,
  b.object_type
from v$locked_object a, dba_objects b, v$session c
where a.object_id = b.object_id
  and a.session_id = c.sid;

The query below is more specific. With it, you will be able to identify exactly which session is blocking another session, if any such condition exists.

select s1.username || '@' || s1.machine || ' ( SID=' || s1.sid || ' ) is blocking ' || s2.username || '@' || s2.machine || ' ( SID=' || s2.sid || ' ) ' as blocking
from v$lock l1, v$session s1, v$lock l2, v$session s2
where s1.sid=l1.sid and s2.sid=l2.sid
and l1.block=1 and l2.request > 0
and l1.id1 = l2.id1 and l1.id2 = l2.id2;

If necessary, we can use the following example to kill a particular session. This may be useful when the locking session belongs to a hung process, for example, though typically we should have the user release his/her lock gracefully.

alter system kill session '123,5555' immediate;

Note that the “123” above should be replaced by the session ID (SID), and the “5555” should be replaced by the serial number. Those two values are the first two columns of the first lock query above.
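
If you would rather not copy those values by hand, a small helper query can build the kill statement for you from the same v$ views. This is a convenience sketch; the username filter is an example you would adapt to target the right session:

-- Generate a kill statement per matching session; the filter is an example.
select 'alter system kill session ''' || sid || ',' || serial# || ''' immediate;' as kill_cmd
from v$session
where username = 'BOB';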

Oracle user management

Create a new user:

create user bob
identified by str0nggpAsswd
default tablespace tblspace1
temporary tablespace temp;

Change user password:

alter user bob
identified by n3wPaswrd0;

Unlock a locked user account:

alter user bob
account unlock;

Assign privileges to user:

-- Allow user to connect to the database
grant create session to bob;

-- Allow user to create various objects;
-- usually reserved for DBAs and developers.
grant create table to bob;
grant create view to bob;
grant create sequence to bob;

-- Grant/revoke various rights on tables/views to/from a user.
grant select on address_table to bob;
grant insert, update, delete on address_table to bob;
grant all on address_table to bob;

revoke delete on address_table from bob;
revoke all on address_table from bob;

-- Grant/revoke execute rights on functions/procedures for a user.
grant execute on sp_validate_address to bob;
revoke execute on sp_validate_address from bob;

Lock/unlock a user:

alter user bob account lock;
alter user bob account unlock;

Remove/delete a user:

drop user bob cascade;

List Oracle users:

-- List all users
select username, user_id, account_status, default_tablespace, temporary_tablespace, profile 
from dba_users;

-- List active users only
select username, user_id, account_status, default_tablespace, temporary_tablespace, profile 
from dba_users
where account_status='OPEN';
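
Related to the unlock command earlier, dba_users also tells us which accounts are locked and when:

-- List locked accounts
select username, account_status, lock_date
from dba_users
where account_status like '%LOCKED%';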

Create an auto-increment field in Oracle

If we do not already have a table to work with, we will create a sample table for this purpose.

create table schema.test_table (
	row_id		number(5),
	name		varchar2(50),
	status		varchar2(1),
	entry_date	date
);

Let us say we want to make the “row_id” field an “auto-increment” field. To do so, we need a sequence object first. The code below gives us a sequence that starts at 1 and is capped at 99999.

create sequence schema.seq_test_table_row_id
minvalue 1
maxvalue 99999
start with 1
increment by 1
nocache;
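
To confirm the sequence works, we can pull a value from it directly. Note that this consumes a value, so the auto-increment will continue from wherever this leaves off:

select schema.seq_test_table_row_id.nextval from dual;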

All we have left is the actual “auto-increment” part. To do this, we build a very simple trigger on the table.

create or replace trigger schema.trg_test_table_row_id
before insert on schema.test_table
for each row
begin
	-- Only assign a value when the caller did not supply one.
	if :NEW.row_id is null then
		select schema.seq_test_table_row_id.nextval into :NEW.row_id from dual;
	end if;
end;
/

Now, when you insert into the table schema.test_table, if you wish to use the sequence object as an auto-incrementer, just leave out the row_id field. Two examples of using this auto-incrementer are below:

insert into schema.test_table (name, status, entry_date) values('John Doe','1',sysdate);
insert into schema.test_table values(null,'John Doe','1',sysdate);

What if we want to override this trigger and use a number of our choice for the row_id field? No problem, just insert the value as you would normally. Note the if-clause in the trigger; it only has an effect (i.e., auto-increments from the sequence) when the incoming row_id field is not populated.
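
For example, the insert below supplies row_id explicitly, so the trigger leaves it alone:

insert into schema.test_table values(500,'Jane Doe','1',sysdate);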

Cleaning up the listener log

Since we noticed the listener log has grown out of control, we presumably already know where it lives. Just in case you need it, by default that log file is located at “NETWORK/log/listener.log” under your Oracle home.

The listener log cannot be modified while the listener holds it open for logging, and more than likely we want to minimize interruption to users. To do so, we are going to put the three commands below into a batch file so they run in quick succession. Alternatively, you can run them during business off-hours, if that option is available to you. Whether run from a batch file or manually, the commands are the same.

lsnrctl set log_status off
rename listener.log listener.old
lsnrctl set log_status on

The first command turns listener logging off, which releases the file lock on listener.log so we can work with it; the listener itself keeps servicing connections throughout. The second command renames the file. Finally, the third command turns logging back on to resume normal operation. Immediately after the third command runs, a new listener.log file is created with 0 size.
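
As a sketch, the batch file might look like the following; the Oracle home path is an example and should be adjusted to your installation:

@echo off
rem Rotate the listener log with minimal interruption.
rem Adjust the path to match your Oracle home.
cd /d d:\oracle\product\10.2.0\db_1\NETWORK\log
lsnrctl set log_status off
rename listener.log listener.old
lsnrctl set log_status on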

What to do with the old “listener.old” file is up to each of us. Personally, I tend to zip up the file and store it in a safe place, just in case I ever need to review it.

Using Oracle Data Pump

To use Data Pump, a directory must be set up. In the SQL*Plus example below, we are creating a directory dedicated to Data Pump usage.

create or replace directory datapumpdir as 'd:\datadump';
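
If a user other than the directory's creator will run the export, that user may also need privileges on the directory object. A minimal example, with scott as a stand-in user:

grant read, write on directory datapumpdir to scott;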

With the Data Pump directory set, we can now export from the command line. There are two examples below; the first is a full database export (i.e., all schemas), and the second exports only one schema.

expdp username/password@tnsname directory=datapumpdir dumpfile=full%U.dmp filesize=2G logfile=full.log full=y

expdp username/password@tnsname directory=datapumpdir dumpfile=schema%U.dmp filesize=2G logfile=schema.log schemas=scott

Note the %U variable in the dump file name. Because we capped each dump file at 2 GB, we introduced a potential failure point: what happens if the dump exceeds 2 GB? The %U at the end of the file name means our first dump file is tagged with the number “01”, e.g. full01.dmp. If full01.dmp reaches 2 GB in size, full02.dmp is created automatically; full03.dmp is created when full02.dmp fills up, and so on.

To import, we need to first make sure the receiving database has the directory set, and then we can run the command:

impdp username/password@tnsname schemas=scott directory=datapumpdir dumpfile=full%U.dmp logfile=full.log
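
One import option worth knowing about is REMAP_SCHEMA (described in the impdp help below), which loads one schema's objects into another. A sketch reusing the schema-level dump from above, with scott_dev as an assumed target schema:

impdp username/password@tnsname directory=datapumpdir dumpfile=schema%U.dmp logfile=remap.log remap_schema=scott:scott_dev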

Below is the help text for the expdp command, for our reference.

Export: Release 10.1.0.4.0 - Production on Monday, 24 March, 2008 11:22

Copyright (c) 2003, Oracle.  All rights reserved.


The Data Pump export utility provides a mechanism for transferring data objects
between Oracle databases. The utility is invoked with the following command:

   Example: expdp scott/tiger DIRECTORY=dmpdir DUMPFILE=scott.dmp

You can control how Export runs by entering the 'expdp' command followed
by various parameters. To specify parameters, you use keywords:

   Format:  expdp KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
   Example: expdp scott/tiger DUMPFILE=scott.dmp DIRECTORY=dmpdir SCHEMAS=scott
               or TABLES=(T1:P1,T1:P2), if T1 is partitioned table

USERID must be the first parameter on the command line.

Keyword               Description (Default)
------------------------------------------------------------------------------
ATTACH                Attach to existing job, e.g. ATTACH [=job name].
CONTENT               Specifies data to unload where the valid keywords are:
                      (ALL), DATA_ONLY, and METADATA_ONLY.
DIRECTORY             Directory object to be used for dumpfiles and logfiles.
DUMPFILE              List of destination dump files (expdat.dmp),
                      e.g. DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.
ESTIMATE              Calculate job estimates where the valid keywords are:
                      (BLOCKS) and STATISTICS.
ESTIMATE_ONLY         Calculate job estimates without performing the export.
EXCLUDE               Exclude specific object types, e.g. EXCLUDE=TABLE:EMP.
FILESIZE              Specify the size of each dumpfile in units of bytes.
FLASHBACK_SCN         SCN used to set session snapshot back to.
FLASHBACK_TIME        Time used to get the SCN closest to the specified time.
FULL                  Export entire database (N).
HELP                  Display Help messages (N).
INCLUDE               Include specific object types, e.g. INCLUDE=TABLE_DATA.
JOB_NAME              Name of export job to create.
LOGFILE               Log file name (export.log).
NETWORK_LINK          Name of remote database link to the source system.
NOLOGFILE             Do not write logfile (N).
PARALLEL              Change the number of active workers for current job.
PARFILE               Specify parameter file.
QUERY                 Predicate clause used to export a subset of a table.
SCHEMAS               List of schemas to export (login schema).
STATUS                Frequency (secs) job status is to be monitored where
                      the default (0) will show new status when available.
TABLES                Identifies a list of tables to export - one schema only.
TABLESPACES           Identifies a list of tablespaces to export.
TRANSPORT_FULL_CHECK  Verify storage segments of all tables (N).
TRANSPORT_TABLESPACES List of tablespaces from which metadata will be unloaded.
VERSION               Version of objects to export where valid keywords are:
                      (COMPATIBLE), LATEST, or any valid database version.

The following commands are valid while in interactive mode.
Note: abbreviations are allowed

Command               Description
------------------------------------------------------------------------------
ADD_FILE              Add dumpfile to dumpfile set.
                      ADD_FILE=dumpfile-name
CONTINUE_CLIENT       Return to logging mode. Job will be re-started if idle.
EXIT_CLIENT           Quit client session and leave job running.
HELP                  Summarize interactive commands.
KILL_JOB              Detach and delete job.
PARALLEL              Change the number of active workers for current job.
                      PARALLEL=<number of workers>.
START_JOB             Start/resume current job.
STATUS                Frequency (secs) job status is to be monitored where
                      the default (0) will show new status when available.
                      STATUS=[interval]
STOP_JOB              Orderly shutdown of job execution and exits the client.
                      STOP_JOB=IMMEDIATE performs an immediate shutdown of the
                      Data Pump job.

Below is the help text for the impdp command.

Import: Release 10.1.0.4.0 - Production on Monday, 24 March, 2008 11:26

Copyright (c) 2003, Oracle.  All rights reserved.


The Data Pump Import utility provides a mechanism for transferring data objects
between Oracle databases. The utility is invoked with the following command:

     Example: impdp scott/tiger DIRECTORY=dmpdir DUMPFILE=scott.dmp

You can control how Import runs by entering the 'impdp' command followed
by various parameters. To specify parameters, you use keywords:

     Format:  impdp KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
     Example: impdp scott/tiger DIRECTORY=dmpdir DUMPFILE=scott.dmp

USERID must be the first parameter on the command line.

Keyword               Description (Default)
------------------------------------------------------------------------------
ATTACH                Attach to existing job, e.g. ATTACH [=job name].
CONTENT               Specifies data to load where the valid keywords are:
                      (ALL), DATA_ONLY, and METADATA_ONLY.
DIRECTORY             Directory object to be used for dump, log, and sql files.
DUMPFILE              List of dumpfiles to import from (expdat.dmp),
                      e.g. DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.
ESTIMATE              Calculate job estimates where the valid keywords are:
                      (BLOCKS) and STATISTICS.
EXCLUDE               Exclude specific object types, e.g. EXCLUDE=TABLE:EMP.
FLASHBACK_SCN         SCN used to set session snapshot back to.
FLASHBACK_TIME        Time used to get the SCN closest to the specified time.
FULL                  Import everything from source (Y).
HELP                  Display help messages (N).
INCLUDE               Include specific object types, e.g. INCLUDE=TABLE_DATA.
JOB_NAME              Name of import job to create.
LOGFILE               Log file name (import.log).
NETWORK_LINK          Name of remote database link to the source system.
NOLOGFILE             Do not write logfile.
PARALLEL              Change the number of active workers for current job.
PARFILE               Specify parameter file.
QUERY                 Predicate clause used to import a subset of a table.
REMAP_DATAFILE        Redefine datafile references in all DDL statements.
REMAP_SCHEMA          Objects from one schema are loaded into another schema.
REMAP_TABLESPACE      Tablespace object are remapped to another tablespace.
REUSE_DATAFILES       Tablespace will be initialized if it already exists (N).
SCHEMAS               List of schemas to import.
SKIP_UNUSABLE_INDEXES Skip indexes that were set to the Index Unusable state.
SQLFILE               Write all the SQL DDL to a specified file.
STATUS                Frequency (secs) job status is to be monitored where
                      the default (0) will show new status when available.
STREAMS_CONFIGURATION Enable the loading of Streams metadata
TABLE_EXISTS_ACTION   Action to take if imported object already exists.
                      Valid keywords: (SKIP), APPEND, REPLACE and TRUNCATE.
TABLES                Identifies a list of tables to import.
TABLESPACES           Identifies a list of tablespaces to import.
TRANSFORM             Metadata transform to apply (Y/N) to specific objects.
                      Valid transform keywords: SEGMENT_ATTRIBUTES and STORAGE.
                      ex. TRANSFORM=SEGMENT_ATTRIBUTES:N:TABLE.
TRANSPORT_DATAFILES   List of datafiles to be imported by transportable mode.
TRANSPORT_FULL_CHECK  Verify storage segments of all tables (N).
TRANSPORT_TABLESPACES List of tablespaces from which metadata will be loaded.
                      Only valid in NETWORK_LINK mode import operations.
VERSION               Version of objects to export where valid keywords are:
                      (COMPATIBLE), LATEST, or any valid database version.
                      Only valid for NETWORK_LINK and SQLFILE.

The following commands are valid while in interactive mode.
Note: abbreviations are allowed

Command               Description (Default)
------------------------------------------------------------------------------
CONTINUE_CLIENT       Return to logging mode. Job will be re-started if idle.
EXIT_CLIENT           Quit client session and leave job running.
HELP                  Summarize interactive commands.
KILL_JOB              Detach and delete job.
PARALLEL              Change the number of active workers for current job.
                      PARALLEL=<number of workers>.
START_JOB             Start/resume current job.
                      START_JOB=SKIP_CURRENT will start the job after skipping
                      any action which was in progress when job was stopped.
STATUS                Frequency (secs) job status is to be monitored where
                      the default (0) will show new status when available.
                      STATUS=[interval]
STOP_JOB              Orderly shutdown of job execution and exits the client.
                      STOP_JOB=IMMEDIATE performs an immediate shutdown of the
                      Data Pump job.

TNS Listener service missing

When installing Oracle, a starter database is created by default, but that step is sometimes bypassed because you might not need it. When this initial database is not created, the install sometimes fails to create the TNS Listener service. To remedy this problem, all you have to do is run this command:

lsnrctl start

Before you run this command, you should take a look at the tnsnames.ora, listener.ora, and sqlnet.ora files to make sure they are properly configured.
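
For reference, a minimal listener.ora entry looks something like the following; the host name and port are placeholders for your environment:

LISTENER =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = your-hostname)(PORT = 1521))
  )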