Monday, September 22, 2014

Gather Statistics Script

The following script backs up the current statistics and gathers new ones on a schema or table.
To download:
https://www.dropbox.com/s/fku4hf082xah1bm/gather_stats.sh?dl=0

This script is part of the database administration bundle. To learn more about this bundle, please visit:
http://dba-tips.blogspot.ae/2014/02/oracle-database-administration-scripts.html

How it works:
Once you execute the script it will ask you to select the database you want to run against (in case you have multiple databases on the same server):
Select the Instance You Want To Run this script against:
-------------------------------------------------------------------
1) orcl1
2) orcl2
3) salesdb
#? 

Next, it will ask you to provide the SCHEMA NAME (if you want to gather statistics on a whole schema) or the TABLE OWNER (if you just want to gather statistics on a single table):
Enter the SCHEMA NAME/TABLE OWNER:
==================================
sysadm

Next, it will ask you to provide the TABLE NAME. If you decide to gather statistics on the whole schema, just leave it BLANK and hit Enter; otherwise, provide the name of the table whose statistics you want to gather:
Enter the TABLE NAME: [BLANK VALUE MEANS GATHER THE WHOLE SCHEMA [sysadm] STATISTICS]
===================
statuslog

If you provide a TABLE NAME, the script will check whether the table exists, display the LAST STATISTICS DATE for that table, and ask whether you want to gather histograms along with the statistics [default is YES].
GATHER HISTOGRAMS ALONG WITH STATISTICS? [Y|N] [Y]
=========================================

The next question asks whether you want to gather statistics on the table's indexes as well [default is YES]:
GATHER STATISTICS ON ALL TABLES'S INDEXES? [Y|N] [Y]
=========================================

Now the script will start to do the following:
> It will back up the current statistics of the provided SCHEMA/TABLE to a statistics table called BACKUP_STATS under the SYS schema.
> It will gather statistics on the provided SCHEMA/TABLE as per your previous selections.
> Once it finishes, it will provide you with the SQL command you need to run to restore the original statistics in case you are not happy with the new ones.
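For reference, here is a minimal sketch of the DBMS_STATS calls equivalent to what the script automates (the owner/table names are just the ones from the example above, and the exact options the script passes may differ):

SQL> EXEC DBMS_STATS.CREATE_STAT_TABLE(ownname => 'SYS', stattab => 'BACKUP_STATS');
-- Backup the current statistics of SYSADM.STATUSLOG into SYS.BACKUP_STATS:
SQL> EXEC DBMS_STATS.EXPORT_TABLE_STATS(ownname => 'SYSADM', tabname => 'STATUSLOG', stattab => 'BACKUP_STATS', statown => 'SYS');
-- Gather fresh statistics (indexes and histograms included):
SQL> EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SYSADM', tabname => 'STATUSLOG', cascade => TRUE, method_opt => 'FOR ALL COLUMNS SIZE AUTO');
-- Restore the original statistics later if needed:
SQL> EXEC DBMS_STATS.IMPORT_TABLE_STATS(ownname => 'SYSADM', tabname => 'STATUSLOG', stattab => 'BACKUP_STATS', statown => 'SYS');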

You may ask: why didn't I include a gather-database option in this script?
Actually, I'm not in favor of gathering statistics on the whole database, unless you are doing so only for the tables that have stale statistics. Starting with Oracle 10g, Oracle introduced an automated maintenance job that gathers statistics on tables with missing or stale statistics.
You can check the status of that automated job using:
SQL> select status from dba_autotask_client where client_name = 'auto optimizer stats collection';

In case it is in DISABLED status, you can enable it with this command:
SQL> BEGIN
     DBMS_AUTO_TASK_ADMIN.ENABLE(
     client_name => 'auto optimizer stats collection', 
     operation => NULL, 
     window_name => NULL);
     END;
     /
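If you prefer to run such a targeted gathering yourself instead, here is a minimal sketch (the schema name is just the one from the example above):

SQL> EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'SYSADM', options => 'GATHER STALE');
-- Or, for stale objects across the whole database:
SQL> EXEC DBMS_STATS.GATHER_DATABASE_STATS(options => 'GATHER STALE');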

For more information about Statistics in Oracle you can read this post:
http://dba-tips.blogspot.ae/2012/11/all-about-statistics-in-oracle.html


Wrapping up:

To download the gather statistics shell script, click the following link:
https://www.dropbox.com/s/fku4hf082xah1bm/gather_stats.sh?dl=0

If you want to try the whole database administration bundle, please click this link:
http://dba-tips.blogspot.ae/2014/02/oracle-database-administration-scripts.html

Thursday, August 28, 2014

Recover Data from Damaged/Corrupted/Inconsistent/Offline/Truncated/Dropped Table While the Database is Offline

In catastrophic scenarios where the database has been corrupted, no backup is available, the flashback feature was not turned on, or the database cannot be opened due to errors or an inconsistent state, there is still a chance to recover and save the data as long as the datafiles still exist on the filesystem, by using third-party tools that can read the datafiles directly without needing to open the database.
In this post I'll cover one of these tools, Oracle Database Unloader (ODU).

Download ODU:
===============
http://www.oracleodu.com/en/download

Full version:  Restores all data, but requires a license file from support [not free].

Trial version: No license required, but it comes with limitations: it doesn't work on ASM, and it can only fully recover data in the SYSTEM datafiles;
                     for other datafiles it recovers the data only partially [less than 1% of all rows].
Note: In this demonstration I've used the TRIAL version to restore the data, which is why I only restored tables located in the SYSTEM tablespace, so that all of their data could be recovered.

Install ODU:
===========
Upload the downloaded file to the server and extract it:

# gunzip odu_433_linux_x86.tar.gz 
# tar xvf odu_433_linux_x86.tar 
# cd odu
# chmod u+x odu

Prepare ODU:
=============
Before you start using ODU to recover data, you first have to modify its configuration files to point to the correct paths of your datafiles.

If the datafiles are stored on ASM, add the ASM disk paths to the asmdisk.txt file:

# vi asmdisk.txt
0 /oradata/asm/disk1.dbf 
0 /oradata/asm/disk2.dbf 
0 /oradata/asm/disk3.dbf 

Note: The first column should be filled with 0

If the datafiles are not stored on ASM, add all datafile paths to the control.txt file.
This SQL can help you generate the lines to add to control.txt:
SQL> SELECT 0,0,0,NAME FROM V$DATAFILE ORDER BY FILE#;

# vi control.txt
0 0 0 /database/orcl/datafiles/system01.dbf
0 0 0 /database/orcl/datafiles/users01.dbf
0 0 0 /database/orcl/datafiles/sysaux01.dbf
0 0 0 /database/orcl/datafiles/undotbs01.dbf
0 0 0 /database/orcl/datafiles/example01.dbf

Note:  The first three columns should be filled with 0 and the first line must point to the system datafile.

The remaining ODU configuration files, such as config.txt, contain settings like block_size, which must match the block size of the database you are extracting data from.
In config.txt, the parameter output_format determines how the data is saved after being extracted (unloaded) from tables: the default value is text, where data is saved in SQL*Loader format, and the other value is dump, where data is saved in export dump format.

If you are using the full version, obtain the license by saving the configuration to a file and sending it to ODU support, who will send you back the license file:
Save the controls to oductl.txt:
# ./odu
ODU> save control 
ODU> exit 

Send the file oductl.txt to the ODU support mailbox at http://www.oracleodu.com/en/support; they will send you the license file named oductl.dat, which you should copy into the ODU installation directory.

* Note that the trial version doesn't need a license, but its data restoration capability is limited, as mentioned before.

Using ODU:
============

Explore the database:
>>>>>>>>>>>>>>>>>
# ./odu

ODU> load config [filename]  #Not mandatory; if the file name is omitted it will load the config.txt file
ODU> open [filename]  #Not mandatory; if the file name is omitted it will load the control.txt file

ODU> unload dict                 #Load dictionary data from the SYSTEM tablespace.
ODU> list user                     #List database users & roles.
ODU> desc scott.emp #Describe a table, plus provide extra data like object id, tablespace, ..etc.

Restore data of a truncated table: [TEST SCENARIO]
>>>>>>>>>>>>>>>>>>>>>>>>>>
*The control.txt file must be populated with the datafile locations as shown before.

SQL> CREATE TABLE SCOTT.TEST TABLESPACE USERS AS SELECT * FROM DBA_USERS;
SQL> SELECT COUNT(*) FROM SCOTT.TEST;
SQL> TRUNCATE TABLE SCOTT.TEST;
SQL> alter system checkpoint;
SQL> alter tablespace users offline; #Important (but not mandatory): take the truncated table's tablespace offline to avoid its space being overwritten.
SQL> alter system checkpoint; #Important: write the changes to disk, or ODU may not find the table.

# ./odu

ODU> load config [filename]  #Not mandatory; if the file name is omitted it will load the config.txt file
ODU> open [filename]  #Not mandatory; if the file name is omitted it will load the control.txt file

ODU> unload dict                 #Load dictionary data from the SYSTEM tablespace.
ODU> desc scott.test #Get information about the truncated table's location (check its tablespace number: TS#=<> )
Storage(Obj#=55125 DataObj#=55126 TS#=4 File#=4 Block#=403 Cluster=0)

ODU> scan extent tablespace 4 #Scan the extents of the tablespace # of the truncated table.
ODU> unload table scott.test object truncate #Restore all rows of truncated table in a SQL LOADER file.

SQL> alter tablespace users online; #Bring the tablespace of the truncated table back online to load the data back.

Use SQL LOADER to load the data back to the truncated table:
# cd /data
# sqlldr scott/tiger control=SCOTT_TEST.ctl 

Note: If you want to restore the full data of a table not located in the SYSTEM tablespace, you must use the fully licensed version of ODU.

Restore a Dropped Table:
>>>>>>>>>>>>>>>>>>>>>>
Let's create and drop a table for the following testing scenario:

SQL> create table scott.bb tablespace system as select * from dba_synonyms;
SQL> drop table scott.bb;
SQL> alter system checkpoint;

Before using ODU, the first step is to get the OBJECT ID of the dropped table using Oracle's LogMiner feature:
--Add the redo/archive log that contains the drop transaction:
SQL> EXEC sys.dbms_logmnr.add_logfile(logfilename=>'/u01/app/oracle/oradata/xty/redo03.log'); 
--Start mining the registered archives:
SQL> EXEC sys.dbms_logmnr.start_logmnr(options=>sys.dbms_logmnr.dict_from_online_catalog);    
--Get the execution time of the drop statement:
SQL> SELECT scn,timestamp,sql_redo from v$logmnr_contents where operation='DDL' and sql_redo like '%drop%' order by 2 ; 
--Use the previous execution time to get the internal delete statement of the dropped table from the dictionary, where the object id can be found:
SQL> SELECT scn,timestamp,sql_redo from v$logmnr_contents
     where timestamp=to_date('2014-05-21 10:33:33','yyyy-mm-dd hh24:mi:ss') and sql_redo like '%OBJ%' order by 1;
--End the Log Miner task:
SQL> EXEC sys.dbms_logmnr.end_logmnr; 

At this point we should have the object ID of the dropped table.

If you're just testing and don't want to go through this lengthy process, you can simply get the table's object ID from ODU before dropping it:
# ./odu
ODU> unload dict
ODU> desc scott.bb                                      #You'll find the object id in the output.

Start the table restoration job using ODU:
# ./odu
ODU> unload dict #Load dictionary data from SYSTEM tablespace.
ODU> scan extent tablespace 0 #scan extents of tablespace 0 (system)
ODU> unload object all tablespace 0 sample  #Provides you with the command you shall use to restore for EVERY object inside tablespace 0
ODU> unload user scott #Unload ALL tables owned by user SCOTT (each table will have 3 files located under data directory).
ODU> unload object 55150 sample  #Provides you with the command you shall use to restore, for SPECIFIC object inside tablespace 0.
ODU> unload object 55150 tablespace 0 column VARCHAR2 VARCHAR2 VARCHAR2 VARCHAR2 #Just copy & execute the command from the previous output.

Note: The unload command will generate three files for each unloaded table: ".txt", ".sql", and ".ctl".

From the OS side, edit the file /data/ODU_0000055150.sql by changing the object name from ODU_0000055150 to SCOTT.BB, then execute it:
SQL> @/data/ODU_0000055150.sql
Table created.

From the OS side, edit the file /data/ODU_0000055150.ctl by changing the object name from ODU_0000055150 to SCOTT.BB, then load the table with data using SQL*Loader:
# sqlldr scott/tiger control=ODU_0000055150.ctl


Export Data from an existing Table: [Database is down / Skipping Auditing/Permissions]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
# ./odu
ODU> unload dict #Load dictionary data from the SYSTEM tablespace.
ODU> desc scott.bb #Get the table's location information (check its tablespace number).
ODU> scan extent tablespace 0 #Scan the extents of that tablespace.
ODU> unload table scott.bb object scanned #Export the data of table SCOTT.BB into a SQL*Loader file. [object scanned] is not mandatory.

Open the .sql file and change the table owner & name if you wish, do the same for the .ctl file, then use SQL*Loader to import the data:
# sqlldr scott/tiger control=SCOTT_BB.ctl


Conclusion:
The ODU tool can help restore data in catastrophic scenarios where no backup is available, or the database is damaged or cannot be opened for any reason.
On the other hand, this tool proves that database security can easily be compromised by anyone who has access to the database files and basic knowledge of such tools. To avoid such a breach, the DBA should consider encrypting the data on disk using features such as TDE, and should also apply security hardening and limit access at the filesystem level to prevent unauthorized users from reaching the database files.

REFERENCES:
Full documentation: http://www.oracleodu.com/soft/ODUUserGuide_en.pdf
Real success story:     http://www.basecare.dk/6storage/983/7/recovering_lob-index_block_corruption.pdf

Sunday, July 27, 2014

All About ASM


In this post I'll try to provide a brief summary of a huge topic, Automatic Storage Management (ASM); hopefully the reader will find it useful in understanding ASM.

-----------------
What is ASM:
-----------------
Automatic Storage Management (ASM) is a feature introduced in Oracle 10g that provides state-of-the-art storage management for Oracle database files.

-------------
Why ASM:
-------------
I'll answer "Why ASM?" through a pros & cons approach, to show you the full picture and let you judge ASM for yourself...

ASM Pros:
> Provides shared filesystem for RAC.
> Provides MIRRORING & STRIPING at the same time for all data across all disks [inside the diskgroup], without any additional work by the DBA [the DBA only adds/removes disks and ASM takes care of re-balancing the data (fewer human errors)].

> Adding a new disk to the diskgroup will not only increase the space but will improve the throughput of that diskgroup.
> Unlike other vendors, hot spare disks are not required for mirroring, which allows more efficient use of the storage capacity.
> Mirroring is done at the Allocation Unit level (the AU is the smallest unit in ASM, 1MB by default), as opposed to other vendors that mirror at the disk level.
> Striping is also done at the Allocation Unit level, as opposed to other vendors that stripe at the disk level, which gives better performance and I/O latency; I/O is spread evenly across all available disks to maximize performance.
> When you add/remove disks from a disk group, ASM automatically redistributes the file contents, eliminating the need for downtime.
> Provides raw-device-like I/O performance.
> Ability to Dynamically Add and Remove Disk/Space while the database is running.
> Mirroring / triple mirroring & striping work across different storage vendors.
> ASM can coexist with other storage types.
> ASM can work with any storage (SAN, NAS, SSD, Direct Attached Storage).
> Flexible Redundancy Options.
> ASM can automatically handle persistent naming for partitions.
> Works with Oracle Managed Files (OMF).
> ASM files cannot be accessed from the OS, which is more secure and reduces human errors.
> The database can contain ASM and NON-ASM files.
> Safe: if a permanent disk I/O error is incurred during an Oracle write operation, the affected disk is removed from the diskgroup by ASM, preventing further application failures. If the loss of the disk results in data loss, ASM automatically dismounts the diskgroup to protect the integrity of its data.

> If a read error occurs because of a corrupted block, ASM will automatically read the good copy of the extent from other mirrored disks and then copy it to the disk that had read errors. (one of the biggest ASM advantages)
> Free of cost, NO license required.

ASM Cons:
> Hardware RAID 1 mirroring [same storage type] avoids the mirroring overhead on the server, unlike ASM mirroring.
> RMAN is the only way to back up the database (no cold OS backups); Data Pump can still be used for data transfer.
> OS Utilities like "dd" command are not recommended for backing up/restoring ASM diskgroups.
> ASM files are not accessible from OS, you need to use ASMCMD built-in tool for that purpose which still doesn't have all file/directory management commands.
> Each time you upgrade your kernel, you must upgrade the ASMLib packages along with it, as they are kernel-version-specific.
> ASM has its own instance that must start before the DB instance starts [it allocates memory on the server].
> Requires a separate ASM home installation up to 11gR1; from 11gR2 Grid Infrastructure onward, RAC & ASM are installed in the same home.
> Difficult to move files (e.g. archives, RMAN backups, dump files) between servers, as they need to be extracted out of ASM first.

> I usually consider ASM an additional point of failure & complexity for the system, due to the dependency on the ASM instance.

--------------
ASM Facts:
--------------
> ASM uses an ASM instance running on the server to mount the diskgroups and make them available to databases.
> The MAXIMUM SIZE of an ASM disk is 2TB [10g, 11g] or 32 petabytes (PB) [12c].

> The MINIMUM SIZE of an ASM disk is 4MB.
> ASM size limits: 63 disk groups | 10,000 ASM disks | 1 million files for each disk group.
> The ASM Allocation Unit (AU) size ranges from 1MB to 64MB; the larger the better for a data warehouse DB, and the smaller the better for an OLTP database.
> ASM disks inside a diskgroup can be of varied sizes (not recommended).
> Recommended to have 4 disks [same size/performance] for each diskgroup.
> Oracle Clusterware 12c no longer supports the use of raw (block) storage devices.
> ASM instance name starts with "+".
> Only one ASM instance is required per node regardless of the number of database instances on the node, and one ASM instance for each RAC node.
> An ASM instance is similar to a normal DB instance, except that the ASM instance's highest mode is MOUNT, where it mounts all diskgroups.
> Starting the ASM instance in restricted mode prevents DB instances from connecting to it.
> ASM instance doesn't have controlfiles like DB instance.
> ASM instance should start first before the DB instance to make the diskgroups available for the DB instance.
> ASM can be on a higher or lower version than its client databases.
> A shutdown/crash of standalone [NON RAC] ASM instance will cause all client-database instances to shut down as well.
> Both ASM & DB instance should be on the same server, DB instance cannot connect remotely to ASM instance.
> Most of DB instance parameters are not valid for ASM instance, ASM instance should have parameter INSTANCE_TYPE=ASM.
> When upgrading ASM instance (similar to DB) there are NO persistent changes to disk groups as part of the upgrade.
> ASM's SGA is 256M by default [11gR1 onwards], which is more than enough for ASM; there is no need to adjust the SGA regardless of the number of supported diskgroups or databases.
> ASM does not require that all disks have consistent names across servers, as it uses ASM disk names.
> ASM disks inside a diskgroup should have the same size, to avoid imbalances, and the same speed, since I/O distribution is constrained by the slowest drive.
> Oracle ASM discovers only disk partitions; it does not discover a partition that includes the partition table (i.e., the whole-disk device).
> Multiple databases can share the same diskgroup.


Redundancy in ASM: 
Data redundancy in ASM has three levels (High, Normal, and External):

High     => Data is mirrored 3 times & striped across all disks [requires at least 3 failure groups]. Failure groups store mirror copies of data inside the disk group.
Normal   => Data is mirrored 2 times & striped across all disks [requires at least 2 failure groups].
External => Data is neither mirrored nor striped. [No failure group]

Note: It's recommended that each failure group's disks be on a different disk controller to avoid a single point of failure.
Note: Extent distribution is either coarse or fine: [both are predefined and cannot be adjusted by the DBA]
COARSE: Used for datafiles, redologs [11gR2+] and archivelogs; each coarse-grained file extent is mapped to a single allocation unit.
FINE: Used for controlfiles; reduces latency by spreading the load more widely.

------------------------------
 ASM Implementation:
------------------------------

OS Prerequisites:
================

>At least 1GB of free available memory on server
>At least 1.8GB of free swap space
>The following Linux packages should be installed:
    rpm -Uvh libaio-0
    rpm -Uvh libaio-devel-0.3.105-2.i386.rpm
    rpm -Uvh elfutils-libelf-0
    rpm -Uvh elfutils-libelf-devel-0.91.1-3.i386.rpm
    rpm -Uvh unixODBC-2.2.11-1.RHEL4.1.i386.rpm

>Oracle user
>Oracle dba group
>Ensure the following lines exist in the /etc/security/limits.conf
    * hard nproc 16384
    * hard nofile 65536

>Ensure the limits are configured to set for new sessions by ensuring the following are in the /etc/pam.d/system-auth file.
    session required pam_limits.so
    session required pam_unix.so

>Ensure the following lines exist in the .bash_profile for the oracle user
    ulimit -n 65536
    ulimit -u 16384

>Ensure oracle primary group dba
    $ usermod -g dba oracle
>set the following in the /etc/sysctl.conf
    kernel.msgmnb = 65536
    kernel.msgmax = 65536
    kernel.shmmax = 4294967295
    kernel.shmall = 268435456
    kernel.sem=250 32000 100 128
    net.ipv4.ip_local_port_range = 1024 65000    # 11gR1 and earlier
    net.ipv4.ip_local_port_range = 9000 65500    # 11gR2 onwards
    net.core.rmem_default=4194304
    net.core.wmem_default=262144
    net.core.rmem_max=4194304
    net.core.wmem_max=1048576
    fs.file-max = 6815744
    fs.aio-max-nr = 1048576

>As a best practice, ensure to install the latest version of ASM even if DB version is lower.

Disk preparation:  [All steps are run by the ROOT user]
==============

Partition the disks with fdisk utility: Create RAW partitions:
---------------------------------
# fdisk /dev/sda
    n
    p
    1

    Specify the size of the partition e.g. +10000M or accept the default to allocate the whole disk
    p
    w


A new device with name sda1 will appear under /dev
Repeat the same fdisk steps for the other disks...

# partprobe     => To start using the new partitions without rebooting the system.

# fdisk -l
# chmod 660 /dev/sda
# chmod 660 /dev/sda1
# chown oracle:dba /dev/sda
# chown oracle:dba /dev/sda1

Add these lines (owner,permissions) to /etc/rc.d/rc.local
# vi /etc/rc.d/rc.local
chmod 660 /dev/sda
chmod 660 /dev/sda1
chown oracle:dba /dev/sda
chown oracle:dba /dev/sda1



    In case you are preparing a test machine [For Testing Purposes]: (Create files to act as dummy disks 100m size for each)
    --  ----------------------------------------------------
        # dd if=/dev/zero of=/u02/asmdisk1 bs=1024k count=100
        # dd if=/dev/zero of=/u02/asmdisk2 bs=1024k count=100
        # dd if=/dev/zero of=/u02/asmdisk3 bs=1024k count=100
        # dd if=/dev/zero of=/u02/asmdisk4 bs=1024k count=3000
        # /sbin/losetup /dev/loop1 /u02/asmdisk1
        # /sbin/losetup /dev/loop2 /u02/asmdisk2
        # /sbin/losetup /dev/loop3 /u02/asmdisk3
        # /sbin/losetup /dev/loop4 /u02/asmdisk4
        # chmod 660             /dev/loop1 /dev/loop2 /dev/loop3 /dev/loop4
        # chown oracle:oinstall /dev/loop1 /dev/loop2 /dev/loop3 /dev/loop4
            => The following is not mandatory:
            # raw /dev/raw/raw1  /dev/loop1
            # raw /dev/raw/raw2  /dev/loop2
            # raw /dev/raw/raw3  /dev/loop3
            # raw /dev/raw/raw4  /dev/loop4
            # chmod 660             /dev/raw/raw1 /dev/raw/raw2 /dev/raw/raw3 /dev/raw/raw4
            # chown oracle:oinstall /dev/raw/raw1 /dev/raw/raw2 /dev/raw/raw3 /dev/raw/raw4

        # vi /etc/rc.local
         # Add these lines
         /sbin/losetup /dev/loop1 /u02/asmdisk1
         /sbin/losetup /dev/loop2 /u02/asmdisk2
         /sbin/losetup /dev/loop3 /u02/asmdisk3
         /sbin/losetup /dev/loop4 /u02/asmdisk4
         chmod 660 /dev/loop1
         chmod 660 /dev/loop2
         chmod 660 /dev/loop3
         chmod 660 /dev/loop4
         chown oracle:oinstall /dev/loop1
         chown oracle:oinstall /dev/loop2
         chown oracle:oinstall /dev/loop3
         chown oracle:oinstall /dev/loop4
             raw /dev/raw/raw1 /dev/loop1
             raw /dev/raw/raw2 /dev/loop2
             raw /dev/raw/raw3 /dev/loop3
             raw /dev/raw/raw4 /dev/loop4
             chmod 660 /dev/raw/raw1
             chmod 660 /dev/raw/raw2
             chmod 660 /dev/raw/raw3
             chmod 660 /dev/raw/raw4
             chown oracle:oinstall /dev/raw/raw1
             chown oracle:oinstall /dev/raw/raw2
             chown oracle:oinstall /dev/raw/raw3
             chown oracle:oinstall /dev/raw/raw4

            => The following is not mandatory: [Disk Labeling]
            # /etc/init.d/oracleasm createdisk ASMDISK1 /dev/loop1

 

ASM with Multipathing: [Doc ID 294869.1]
-----------------------
ASM produces an error if it discovers multiple device paths to the same disk [two or more HBAs or initiators providing load-balancing & failover].
Because a single disk can appear multiple times in a multipath configuration, you must configure ASM to discover only the multipath disk, by setting ASM_DISKSTRING equal to the name of the pseudo device that represents the multipath disk.
E.g.
For EMC PowerPath    set ASM_DISKSTRING='/dev/rdsk/emcpower*'
[hp] EMC PowerPath   set ASM_DISKSTRING='/dev/rdsk/cxtydz*'
[SUN/LNX] EMC PowerPath    set ASM_DISKSTRING='/dev/rdsk/emcpowerx'
IBM                             set ASM_DISKSTRING='/dev/rhdiskx*'
Fujitsu                          set ASM_DISKSTRING='/dev/rhdisk*'
[SUN/LNX] Fujitsu     set ASM_DISKSTRING='/dev/FJSVmplb/[r]dsk/mplbI*1s*'
Hitachi HDLM             set ASM_DISKSTRING='/dev/rdsk/dlmfdrvx*'
[hp/SUN] Hitachi HDLM    set ASM_DISKSTRING='/dev/rdsk/cHtydz*'
Veritas DMP                 set ASM_DISKSTRING='/dev/vx/rdsk/[dg-name]/[rlvolname]*'
HP Native MP              set ASM_DISKSTRING='/dev/rdisk/disk*'
HP SecurePath              set ASM_DISKSTRING='/dev/rdsk/cxtydz*'
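To apply one of the above, set the parameter on the ASM instance (a minimal sketch, assuming the instance uses an SPFILE; the path is the EMC PowerPath example from the table above):

SQL> ALTER SYSTEM SET asm_diskstring='/dev/rdsk/emcpower*' SCOPE=BOTH;
SQL> SHOW PARAMETER asm_diskstring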


Install ASM OS packages:
=======================
Download oracleasm packages:
http://www.oracle.com/technetwork/server-storage/linux/asmlib/index-101839.html
[Select the Linux distribution]

Install the oracleasm utility packages: [use the packages matching your kernel version and architecture]
# rpm -ivh oracleasm-support-2.1.7-1.el5.i386.rpm


# rpm -ivh oracleasm-2.6.18-128.1.6.el5-2.0.5-1.el5.i686.rpm

# rpm -ivh oracleasmlib-2.0.4-1.el5.i386.rpm

# rpm -ivh oracleasm-2.6.18-128.1.6.el5debug-2.0.5-1.el5.i686.rpm

# rpm -ivh oracleasm-2.6.18-128.1.6.el5-debuginfo-2.0.5-1.el5.i686.rpm

# rpm -ivh oracleasm-2.6.18-128.1.6.el5PAE-2.0.5-1.el5.i686.rpm

# rpm -ivh oracleasm-2.6.18-128.1.6.el5xen-2.0.5-1.el5.i686.rpm

Or: Download & install [If the system registered with ULN]:
--
# up2date -i oracleasm-support oracleasmlib oracleasm-`uname -r`
Note: ASM-Lib API improves CPU utilization.

Configure oracleasm:
==================
The following command configures oracleasm (it starts the ASM library driver, loads the oracleasm module, fixes disk permissions, mounts the ASM filesystem, and scans ASM disk headers):

# /etc/init.d/oracleasm configure

    User: oracle
    Group:dba
start on boot: y
scan on boot:  y

 

Enable oracleasm when server boots up:
# /etc/init.d/oracleasm enable   

# $ORACLE_HOME/bin/localconfig add            #=> This step not required on RAC environment.

 

Disk labeling: [Optional]
# /etc/init.d/oracleasm createdisk DATA1 /dev/sda1    #=> Execute On One node only in RAC.

# /etc/init.d/oracleasm createdisk FRA1 /dev/sdb1   

 Note: For each created disk an entry will be added under:
/dev/oracleasm/disks


Troubleshooting oracleasm:
========================
Ensure the existence of oracleasm:

# lsmod |grep -i asm        #Check that oracleasm kernel module loaded.
oracleasm 48020 1

# insmod /lib/modules/`uname -r`/kernel/drivers/addon/oracleasm/oracleasm.ko    #Load the oracleasm kernel module if it is not loaded

# /etc/init.d/oracleasm status

# /etc/init.d/oracleasm enable    #Enable ASMLIB at boot time.

# cat /proc/filesystems |grep -i asm
nodev asmdisk
nodev oracleasmfs

# df -ha |grep asm
oracleasmfs 0 0 0 - /dev/oracleasm

Search for ASM disks:

# /etc/init.d/oracleasm scandisks            #=> You can use v$ASM_DISK if ASM instance available.
# /etc/init.d/oracleasm listdisks            #=> You can use v$ASM_DISK if ASM instance available.
# /etc/init.d/oracleasm querydisk DATA1         #=> You can use v$ASM_DISK if ASM instance available.


Rename Disk:
# /etc/init.d/oracleasm force-renamedisk DATA1 DATA2
Or:
# /etc/init.d/oracleasm force-renamedisk /dev/sda1 FRA1

Delete Disk Label: [Header]
# /etc/init.d/oracleasm deletedisk DATA1

Note: In RAC oracleasm createdisk command will be issued from one node only.
Note: The ASM configuration is recorded in the file: /etc/sysconfig/oracleasm
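Its contents typically look like the following (the values shown are illustrative and simply reflect the answers given to "oracleasm configure"):

# cat /etc/sysconfig/oracleasm
ORACLEASM_ENABLED=true          # load the oracleasm driver on boot
ORACLEASM_UID=oracle            # owner of the ASM disk devices
ORACLEASM_GID=dba               # group of the ASM disk devices
ORACLEASM_SCANBOOT=true         # scan for ASM disks on boot
ORACLEASM_SCANORDER=""          # patterns of devices to scan first (e.g. multipath devices)
ORACLEASM_SCANEXCLUDE=""        # patterns of devices to exclude from the scan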

-----------------------------------------------
 Create ASM Instance & Disk Groups:
-----------------------------------------------

 Software installation:
===================

Create OS asmadmin group:
----------------------------
[By ROOT user]
# groupadd asmadmin
# usermod -G oinstall,asmadmin -g dba oracle
Logout/Login Oracle user: [By Oracle user]
# id

Starting from 11gR2 Grid Infrastructure, the ASM software is installed automatically during the Grid Infrastructure installation under the GRID HOME.

Software installation: For [11gR1 and earlier]: (not required for 11gR2 +)
-------------------------
[By Oracle user]
Install ASM in a separate ORACLE HOME [/u01/oracle/11.1.0.6/asm].
# ./runInstaller
    => Select the Advanced Installation option.
    => Install Software Only.
    => Configure ASM.
    => Privileged OS Group: ASM Administrator [asmadmin]


Start up the listener.

ASM Instance Creation: [GUI Mode] For [11gr1 & Lower]  Recommended
=====================
Note: ASM in RAC or standalone requires that Cluster Synchronization Services (CSS) is installed and started before ASM becomes available.

Configure CSS service: [it connects the DB instance with the ASM instance]
# $ORACLE_HOME/bin/localconfig add    #=> This step not required on RAC environment.

[By Oracle]
Run DBCA:
# dbca
  Configure Automatic Storage Management ->
  SYS password & ASM instance parameters ->
  Create New -> create DATA disk group & FRA disk group


ASM Instance Creation: [GUI Mode] For [11gr2 +]  Recommended
=====================
Note: ASM in RAC or standalone requires that Cluster Synchronization Services (CSS) is installed and started before ASM becomes available.

[By ROOT]
# crsctl start resource ora.cssd    #=> For RAC setup.
# crsctl start has            #=> For RAC one Node setup only.

[By Oracle]
# cd $GRID_HOME/bin
# ./asmca
  Provide SYS & ASMSNMP password.
  Specify Disk Group:
    Disk Group Name: DATA1
    Redundancy:     Normal
    Change Disk Discovery Path:    /dev/*
    Select At least TWO disks to comply with the Normal Redundancy.
  Create ASM
  *ASM instance will be created automatically.

   
Note:   Oracle recommends having only two disk groups for each ASM instance:
       DATA: For DATA & INDEX datafiles, redologs, controlfiles and change tracking files.
       FRA:  For Archives, Flashback logs, Backups, multiplexed redologs & controlfiles.  [At least TWICE the size of DATA]

    Redundancy:
    High     => Data is mirrored 3 times & striped across all disks [requires at least 3 failure groups].

    *Failure groups store mirror copies of data inside the disk group.
    Normal   => Data is mirrored 2 times & striped across all disks [requires at least 2 failure groups].
    External => Data is neither mirrored nor striped.
    Note: It's recommended that each failure group's disks be on a different disk controller to avoid a single point of failure.

  Note: trace files location for ASM will be set under: $ORACLE_BASE/diag/asm/+asm/+ASM/trace


ASM Instance Creation: [Command Line Mode]
=====================
Note: ASM in RAC or standalone requires that Cluster Synchronization Services (CSS) is installed and started before ASM becomes available.

[By ROOT]
# $ORACLE_HOME/bin/localconfig add    #=> For RAC [11gr1 backward] only.
# crsctl start resource ora.cssd    #=> For RAC setup.
# crsctl start has            #=> For RAC one Node setup only.

[By Oracle]
Create ASM instance Password File:
    # orapwd file=$ORACLE_HOME/dbs/orapw+ASM password=changeIt entries=5

Create required directories:
    # mkdir -p $ORACLE_BASE/admin/+ASM
   
# cd $ORACLE_BASE/admin/+ASM
   
# mkdir cdump bdump udump pfile
   


Create the init+ASM.ora file:
    # vi $ORACLE_HOME/dbs/init+ASM.ora
        *.asm_diskgroups='PROD_DB_GRP'
        #*.asm_diskstring='/dev/loop*, /dev/rdsk/*'
        *.asm_diskstring='ORCL:*'
        *.background_dump_dest='/u01/app/admin/+ASM/bdump'
        *.user_dump_dest='/u01/app/admin/+ASM/udump'
        *.core_dump_dest='/u01/app/admin/+ASM/cdump'
        *.instance_type='asm'
        *.db_cache_size = 64M         #buffer cache area is used to cache metadata blocks.
        *.shared_pool_size=128M     #standard memory usage (control structures, etc.) to manage the instance.
        *.large_pool_size=64M         #store extent maps.
        *.remote_login_passwordfile='exclusive'
        *.asm_power_limit=1


Notes: 

> Memory parameters (db_cache_size, shared/large pool) are not required starting from 11gR1, as the SGA will be 256M by default, which is more than enough for ASM.
> ASM's SGA is not influenced or impacted by the number of databases being serviced; there is no need to adjust it.
> When using an SPFILE, ASM automatically adds a disk group to the ASM_DISKGROUPS parameter when the disk group is created or mounted.
> When using a PFILE, you have to add/remove diskgroups manually in the ASM_DISKGROUPS parameter.

> The "asm_diskstring" parameter tells ASM where to search for new ASM disks. The "asm_power_limit" parameter determines the speed of re-balancing operations when you add/remove/resize disks; valid values are [0 to 11] in 11.2.0.1 and earlier, and [0 to 1024] from 11.2.0.2 onwards, where 0 means no re-balance (assumed to be done later).

Startup ASM instance:
--------------------
    # export ORACLE_HOME=/u01/oracle/11.2.0.3/grid
   
# export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ORACLE_HOME/lib
   
# export ORACLE_SID=+ASM
   
# export PATH=$PATH:$ORACLE_HOME/bin
   
# cd $ORACLE_HOME/dbs
   
# sqlplus "/ as sysdba"    --10g
   
# sqlplus "/ as sysasm"    --11g
   
# sqlplus sys/oracle123@host3:1521/+ASM AS SYSASM    --Remotely using Ezconnect (Passwd file must exist).

    SQL> create spfile from pfile;
    SQL> startup mount


Diskgroups Creation:
==================
Note: Once you create a diskgroup you cannot change its REDUNDANCY level; you have to create a new diskgroup
      with the required redundancy and move the data to it. [Note: 438580.1]

    SQL> create diskgroup data_grp  normal redundancy failgroup data_grp_f1 disk 'ORCL:ASMDISK1'    NAME ASMDSK1 
failgroup data_grp_f2 disk 'ORCL:ASMDISK2' NAME ASMDSK2;
           -->such disks ORCL:ASMDISK* have been created by oracleasm (ASMLIB) in an early step

    SQL> create diskgroup PRDATA external redundancy
                    disk '/dev/sdc1', '/dev/sdc2',
                    '/dev/sdc3', '/dev/sdc4';


    SQL> ALTER DISKGROUP ALL MOUNT;

    SQL> select name, type, total_mb, free_mb from v$asm_diskgroup;
    SQL> select name, path, mode_status, state from v$asm_disk where name is not null;

Notes: 

> When using an SPFILE, ASM automatically adds a disk group to the ASM_DISKGROUPS parameter when the disk group is successfully created or mounted.
> When using a PFILE: you should add the diskgroup to the ASM_DISKGROUPS parameter so that diskgroups are mounted automatically at instance startup:
      SQL> ALTER SYSTEM SET ASM_DISKGROUPS='PROD_DB_GRP, data_grp, PRDATA';

Add oratab entry:
    # vi /etc/oratab
    +ASM:/u01/app/oracle/product/11.2.0/db_1:Y



---------------------------
 ASM Administration:
---------------------------
 WARNING: DISMOUNTING a disk group will SHUT DOWN any open database instance that is using that disk group.

Before you log in to the ASM instance, you must export ORACLE_HOME to point to the ASM/GRID HOME:

# export ORACLE_HOME=/u01/oracle/11.2.0.3/grid
# export ORACLE_SID=+ASM
# export PATH=$PATH:$ORACLE_HOME/bin
# sqlplus "/ as sysdba"    --[10g]
# sqlplus "/ as sysasm"    --[11g & higher]

Discover disks:
The ASM_DISKSTRING initialization parameter should point to the right path.
    SQL> col path for a40
         SELECT NVL(a.name, '[CANDIDATE]')disk_group_name, b.path, b.name disk_file_name,b.failgroup  FROM v$asm_diskgroup a RIGHT OUTER JOIN v$asm_disk b USING (group_number) ORDER BY a.name;


Create disk group: External redundancy
    SQL> create diskgroup UNDO external redundancy disk 'ORCL:UNDO1' ATTRIBUTE 'COMPATIBLE.RDBMS' = '11.2';
       
Note: COMPATIBLE.RDBMS = the minimum DB version allowed to use the disk group. COMPATIBLE.RDBMS must not be higher than COMPATIBLE.ASM.
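You can verify the compatibility attributes of each disk group from V$ASM_ATTRIBUTE, for example:

    SQL> select dg.name diskgroup, a.name attribute, a.value
         from v$asm_diskgroup dg, v$asm_attribute a
         where dg.group_number = a.group_number and a.name like 'compatible%';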

Create disk group: Normal redundancy
    SQL> create diskgroup test_failgroup normal redundancy
                    FAILGROUP fg1 DISK 'ORCL:ASMDSK1' NAME ASMDSK1
                    FAILGROUP fg2 DISK 'ORCL:ASMDSK2' NAME ASMDSK2 [ATTRIBUTE 'au_size'='4M']; 

                    --au_size from 1 to 64m default is 1m, Oracle recommend 4m.

Create disk group: High redundancy
    SQL> create diskgroup test_failgroup high redundancy
              FAILGROUP TEST_fg1 DISK '/dev/oracleasm/disks/ASMDSK1' NAME TEST_DSK1
              FAILGROUP TEST_fg2 DISK '/dev/oracleasm/disks/ASMDSK2' NAME TEST_DSK2
              FAILGROUP TEST_fg3 DISK '/dev/oracleasm/disks/ASMDSK3' NAME TEST_DSK3;

Note: Oracle recommends an AU size of 4M for better I/O throughput, increased file size limits, and reduced database open time.

Mount Diskgroup:    SQL> ALTER DISKGROUP DATA1 MOUNT [FORCE];    --[FORCE] will mount [Normal/High redundant] diskgroup in case there is missing/damaged disk.

Un-Mount Diskgroup:    SQL> ALTER DISKGROUP DATA1 DISMOUNT [FORCE];    --[FORCE] will lead to termination of DB instance associated with ASM.

Mount all Diskgroups:    SQL> ALTER DISKGROUP ALL MOUNT [FORCE];

Un-Mount all Diskgroups:SQL> ALTER DISKGROUP ALL DISMOUNT [FORCE];

Drop diskgroup:        SQL> ALTER DISKGROUP DATA1 dismount force;
                                  SQL> DROP DISKGROUP DATA1 FORCE INCLUDING CONTENTS;

Rename diskgroup: 11gR2    # renamedg dgname=data1 newdgname=fra verbose=true asm_diskstring='/dev/sdb'
                                               asm_diskstring => should point to the disk path, you can use /dev/sdb*

Drop disk from diskgroup: SQL>ALTER DISKGROUP DATA1 drop disk ASMDSK2;
                                            Note: Use the disk name as it appears in V$ASM_DISK, NOT the path.

Drop file:[Not accessed]    SQL> ALTER DISKGROUP DATA1 DROP FILE '+data/orcl/my_undo_ts';

Un-Drop disk [Pending]:    SQL> ALTER DISKGROUP DATA1 UNDROP DISKS;
Add disk to diskgroup:        SQL> ALTER DISKGROUP DATA1 add disk 'ORCL:ASMDSK5'; 

Note: ASM re-balancing operations will move data onto the new disk; for better performance, add all new disks at once.
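You can monitor the progress of the re-balance operation from V$ASM_OPERATION, for example:

    SQL> select GROUP_NUMBER,OPERATION,STATE,POWER,SOFAR,EST_WORK,EST_MINUTES from V$ASM_OPERATION;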

Resize all disks:       SQL> ALTER DISKGROUP data1 RESIZE DISKS IN FAILGROUP failgrp1 SIZE 100G;

Create tablespace under ASM diskgroup: 

                                 SQL> CREATE TABLESPACE my_ts DATAFILE '+DATA1' SIZE 100M AUTOEXTEND ON;

Re-balance Power:    SQL> ALTER DISKGROUP DATA1 REBALANCE POWER 5;
                                  SQL> alter diskgroup DATA1 add disk '/dev/rdsk/c3t19d39s4' rebalance power 11; 

Note: Manual rebalance is not required, as Oracle automatically rebalances disk groups when their configuration changes.

Check errors:        SQL> ALTER DISKGROUP data1 check all;

Repair errors:        SQL> ALTER DISKGROUP data1 check all repair;

Add an alias:         SQL> alter diskgroup data add alias '+DATA/orcl/controlfile/control01.ctl' for '+DATA/orcl/controlfile/current.263.852473145';

V$ASM_DISKGROUP:     SQL> select NAME,STATE,TYPE,TOTAL_MB,FREE_MB,USABLE_FILE_MB,OFFLINE_DISKS from V$ASM_DISKGROUP; 

Note: Querying V$ASM_DISKGROUP is an expensive operation, as it searches the disks each time; you can substitute it with V$ASM_DISKGROUP_STAT.
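For example, the same information can be read from the cached view without re-scanning the disks:

    SQL> select NAME,STATE,TYPE,TOTAL_MB,FREE_MB,USABLE_FILE_MB,OFFLINE_DISKS from V$ASM_DISKGROUP_STAT;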

V$ASM_DISK:         SQL> col path for a40
                    SQL> select FAILGROUP,NAME,PATH,MODE_STATUS,STATE,REDUNDANCY,OS_MB,TOTAL_MB,FREE_MB from v$asm_disk order by 1;
Note: Querying V$ASM_DISK is an expensive operation, as it searches the disks each time; you can substitute it with V$ASM_DISK_STAT.

V$ASM_FILE:         SQL> select GROUP_NUMBER,FILE_NUMBER,BYTES/1024/1024,TYPE,REDUNDANCY,STRIPED from V$ASM_FILE;
                                  Show all files paths under ASM instance:
                                  SQL> SELECT concat('+'||gname, sys_connect_by_path(aname, '/')) full_alias_path FROM (SELECT g.name gname, a.parent_index pindex, a.name aname, a.reference_index rindex FROM v$asm_alias a, v$asm_diskgroup g WHERE a.group_number = g.group_number) START WITH (mod(pindex, power(2, 24))) = 0 CONNECT BY PRIOR rindex = pindex;

V$ASM_OPERATION:    Displays long-running ASM operations.
V$ASM_ATTRIBUTE:    Displays diskgroup attributes.
V$ASM_CLIENT:       SQL> select INSTANCE,NAME,STATUS,SOFTWARE_VERSION,COMPATIBLE_VERSION from v$asm_client;
                    --Client DB instances are the instances connected to the ASM instance.

------------------------
 Database ON ASM:
------------------------
 Once the ASM instance is created, all databases on the same host can use it.

Create New Database on ASM:
==========================
GUI: DBCA
When creating a new DB with DBCA, select "ASM" as the storage mechanism; when asked for the DB files location, select OMF "+DATA",
and select "+FRA" for the Flash Recovery Area.

Command line:
Ensure that the parameter file includes the following parameters in the same format:

    *.control_files=(+DATA, +FRA)
    *.db_recovery_file_dest=+FRA
    *.db_recovery_file_dest_size=2147483648
    *.db_create_file_dest=+DATA
    *.db_create_online_log_dest_1=+FRA
    *.db_create_online_log_dest_2=+DATA    -- optional
    *.log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST'
    *.background_dump_dest=/u01/oracle/admin/orcl/bdump
    *.core_dump_dest=/u01/oracle/admin/orcl/cdump
    *.user_dump_dest=/u01/oracle/admin/orcl/udump



   SQL> create database DEVDB
    user sys identified by oracle123
    user system identified by oracle123
    maxdatafiles 1021
    maxinstances 4
    maxlogfiles 16
    character set WE8MSWIN1252
    national character set AL16UTF16
    datafile '+DATA' size 1000M autoextend off extent management local
    sysaux datafile '+DATA' size 1000m
    default temporary tablespace temp tempfile '+DATA' size 4000m uniform size 1m
    undo tablespace undo_rbs datafile '+DATA' size 4000m
    logfile
    ('+DATA','+FRA') size 300M,
    ('+DATA','+FRA') size 300M,
    ('+DATA','+FRA') size 300M,
    ('+DATA','+FRA') size 300M;

   => Run additional scripts such as catalog.sql, catproc.sql and catexp.sql from $ORACLE_HOME/rdbms/admin, as shown below.
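      For example, connected as SYS (the "?" shorthand expands to $ORACLE_HOME in SQL*Plus); this is just a sketch of the usual post-creation steps:
      SQL> @?/rdbms/admin/catalog.sql
      SQL> @?/rdbms/admin/catproc.sql
      SQL> @?/rdbms/admin/catexp.sql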

Add Tablespace|Datafile:
-----------------------
   SQL> alter tablespace [tablespace_name] add datafile '+data' size 1g;
Move Datafile to ASM Diskgroup: [From ASM/Non-ASM to ASM]
------------------------------
   SQL>  ALTER DATABASE DATAFILE '+DATA1/orcl/datafile/aa.256.852212097' offline;
   RMAN> COPY DATAFILE '+DATA1/orcl/datafile/aa.256.852212097' to '+loop';
   RMAN> switch  datafile '+DATA1/orcl/datafile/aa.256.852212097' to copy;
   SQL>  select file_name from dba_data_files;
   SQL>  RECOVER datafile '+LOOP/orcl/datafile/aa.257.852212479';
   SQL>  ALTER DATABASE DATAFILE '+LOOP/orcl/datafile/aa.257.852212479' online;


Migrate the database to ASM:
=========================
Migrate an already existing DB to use ASM: (Using RMAN)

    1- Disable Block change tracking if enabled:
        SQL> ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;
     - If the database is Physical Standby DB, Stop Managed Recovery :
        SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

    2- Generate PFILE:
        SQL> create pfile='/u01/oracle/11.2.0.3/db/dbs/initorcl.ora' from spfile;

    3- Edit the pfile to include the following parameters pointing to ASM diskgroups:
        *.control_files=(+DATA, +FRA)        
        *.db_recovery_file_dest=+FRA          
        *.db_recovery_file_dest_size=536870912000  #500GB
        *.db_create_file_dest=+DATA
             
        *.db_create_online_log_dest_1=+FRA
        *.db_create_online_log_dest_2=+DATA      

    4- Backup the database as COPY to +DATA:
        # rman target /
        RMAN> run{                                  
              allocate channel c1 device type disk;
              allocate channel c2 device type disk;
              allocate channel c3 device type disk;
              BACKUP AS COPY DATABASE FORMAT '+DATA';  
              }

    5- Shutdown & Migrate the DB using RMAN: [Downtime STEP]
        RMAN> shutdown immediate;
        RMAN> startup nomount pfile=/u01/oracle/11.2.0.3/db/dbs/initorcl.ora
        RMAN> restore controlfile from '/database/datafiles/orcl/control01.ctl';
        RMAN> alter database mount;
        RMAN> SWITCH DATABASE TO COPY;
        RMAN> run{                                  
              allocate channel c1 device type disk;
              allocate channel c2 device type disk;
              allocate channel c3 device type disk;
              recover database;
              }   

        RMAN> alter database open;

    6- Migrate TEMP FILES:
        SQL> alter tablespace TEMP add tempfile '+DATA' SIZE 10M;
        SQL> alter database tempfile '/database/datafiles/orcl/temp01.dbf' drop; 

    7- Migrate REDOLOGS:
        >Run the following procedure to automatically drop/re-create logfiles:
        SQL> declare
              cursor orlc is
              select lf.member, l.bytes from v$log l, v$logfile lf
              where l.group# = lf.group# and lf.type = 'ONLINE' order by l.thread#, l.sequence#;
              type numTab_t is table of number index by binary_integer;
              type charTab_t is table of varchar2(1024) index by binary_integer;
              byteslist numTab_t; namelist charTab_t;
              procedure migrateorlfile(name IN varchar2, bytes IN number) is
                 retry number;
                 stmt varchar2(1024);
                 als varchar2(1024) := 'alter system switch logfile';
              begin
                 select count(*) into retry from v$logfile;
                 stmt := 'alter database add logfile size ' || bytes;
                 execute immediate stmt;
                 stmt := 'alter database drop logfile ''' || name || '''';
                 for i in 1..retry loop
                    begin execute immediate stmt;
                    exit;
                    exception
                       when others then
                        if i > retry then raise;
                        end if;
                     execute immediate als;
                   end;
                 end loop;
               end;
               begin
               open orlc;
               fetch orlc bulk collect into namelist, byteslist;
               close orlc;
               for i in 1..namelist.count loop migrateorlfile(namelist(i), byteslist(i));
               end loop;
               end;
               /


        >In case there are still logfiles that didn't move, switch logfiles a few times and then re-run the procedure again:
            SQL> select member from v$logfile;
                 alter system switch logfile;
                 alter system switch logfile;

                 -- Run the above procedure again to move the rest files ... or:
                 -- alter database drop logfile group 3;
                 -- alter database add logfile group 3 ('+data', '+fra') size 100m;

        >Check if there are still files in the old location:
            SQL> select file_name from dba_data_files;
                            select file_name from dba_temp_files;
                            select member from v$logfile;

    8- Delete original old datafiles:
        RMAN> DELETE NOPROMPT COPY OF DATABASE;
    9- REMOVE old Redologs, TEMP, control files manually from FS.

    10-Enable back the block change tracking:
        SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING;
      -If the database is Physical Standby:
        SQL> RECOVER MANAGED STANDBY DATABASE DISCONNECT;
      -In case you want to migrate old archives/ RMAN backups/ Copy backups:
        RMAN> run{
             ALLOCATE CHANNEL c1 DEVICE TYPE DISK;
             ALLOCATE CHANNEL c2 DEVICE TYPE DISK;
             ALLOCATE CHANNEL c3 DEVICE TYPE DISK;
             ALLOCATE CHANNEL c4 DEVICE TYPE DISK;
             BACKUP AS COPY ARCHIVELOG ALL DELETE INPUT;
             BACKUP BACKUPSET ALL DELETE INPUT;
             BACKUP AS COPY DATAFILECOPY ALL DELETE INPUT;
            }

-------------------------------------
 Transfer Files FROM/TO ASM:
-------------------------------------

Transfer files from/to ASM using [RMAN]
-------------------------------------------
 Copy datafile From ASM to OS:
   SQL>  ALTER DATABASE DATAFILE '+LOOP/orcl/datafile/aa.257.852212479' offline;
   RMAN> COPY DATAFILE   '+LOOP/orcl/datafile/aa.257.852212479' to '/database/datafiles/orcl/bb.dbf';
   RMAN> switch datafile '+LOOP/orcl/datafile/aa.257.852212479' to copy;
   SQL>  RECOVER datafile '/database/datafiles/orcl/bb.dbf';
   SQL>  ALTER DATABASE DATAFILE '/database/datafiles/orcl/bb.dbf' online;

 Copy Controlfile & Archives from ASM to OS:
   RMAN> copy current controlfile to '/u01/controlfile.ctl';
   RMAN> copy archivelog '+diskgr/dbname/archivelog/archivelog.n' to '/u01/archivelog.n';


 Copy the SPFILE from/To ASM:
   SQL> CREATE pfile='/u01/oracle/11.2.0.3/db/dbs/init+ASM.ora' from spfile;
   SQL> CREATE SPFILE ='+DATA/asmspfile.ora' FROM PFILE ='/u01/oracle/11.2.0.3/db/dbs/init+ASM.ora';
 


Transfer files from/to ASM using [ASMCMD]
----------------------------------------------
   # export ORACLE_HOME=$GRID_HOME
   # asmcmd -p
     ASMCMD> cp +loop/orcl/datafile/AA.256.852292031 /u01/aa.dbf
     ASMCMD> cp /u01/aa.dbf +loop/orcl/datafile/AA.dbf


Note: ASM instance should be up.
Note: Use fully qualified names.

Transfer files from/to ASM using [DBMS_FILE_TRANSFER]
--------------------------------------------------------------
    [On Database Instance]
    > Create the source directory [Where file located]
      SQL> create or replace directory SOURCE_DIR as '+DATA/orcl/datafile/';
    > Create the destination directory [where file will be copied]:
      SQL> create or replace directory ORACLE_DEST as '/u02';
    > Execute the dbms_file_transfer package: [Tablespace of copied datafile should be offline]
      SQL>    BEGIN
                         dbms_file_transfer.copy_file(source_directory_object =>      'SOURCE_DIR',
                         source_file_name =>             'USERS.259.852463895',
                         destination_directory_object => 'ORACLE_DEST',
                         destination_file_name =>        'users.dbf');
                         END;
                         /


------------------------------------------------------------
Managing files inside ASM using ASMCMD utility:
 ------------------------------------------------------------

ASMCMD is a command-line utility that allows you to manage the files inside ASM disks much like any filesystem API.
*Before using the ASMCMD utility, the ASM instance must be started and the diskgroups mounted.

Note: Before using ASMCMD you must:
    # export ORACLE_SID=+ASM
    # export ORACLE_HOME=[ORACLE_HOME that house asmcmd command "GRID_HOME"]
    # asmcmd -p    => -p to show the current path in the prompt

    Or: in case the ASM entry already exists in /etc/oratab
    # . oraenv
    +ASM

    Or: you can use the NON-Interactive "script" mode: e.g.
    # asmcmd ls -l +data/orcl/datafile

File Management commands:

-----------------------------
cd     Change the current directory to the specified directory.
du     Display the total disk space occupied by ASM files in the specified ASM directory and all of its sub-directories, recursively.
find   List the paths of all occurrences of the specified name (with * wildcards) under the specified directory:
           ASMCMD [+] > find -t datafile + *sys*
           +data/DBADEV/DATAFILE/SYSAUX.257.655588275
           +data/DBADEV/DATAFILE/SYSTEM.258.655588211
           + sign is like / in linux, you can search in +DATA or +FRA
           -t option specifies the type of file; templates can be found in the TYPE column of V$ASM_FILE.

ls      List the contents of an ASM directory, the attributes of the specified file, or the names and attributes of all disk groups.
lsof   List the open files.
mkalias Create an alias for system-generated filenames:  ASMCMD> mkalias REGISTRY.253.851615101 spfile.ora
mkdir   Create ASM directories.
pwd      Display the path of the current ASM directory.
rm        Delete the specified ASM files or directories.
rmalias  Delete the specified alias, retaining the file the alias points to.
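A short interactive session showing a few of these commands (the diskgroup and path names are just examples):

# asmcmd -p
ASMCMD [+] > cd +DATA/ORCL/DATAFILE
ASMCMD [+DATA/ORCL/DATAFILE] > ls -l        #List the datafiles with their attributes.
ASMCMD [+DATA/ORCL/DATAFILE] > du           #Show the space used under the current directory.
ASMCMD [+DATA/ORCL/DATAFILE] > pwd
ASMCMD [+DATA/ORCL/DATAFILE] > exit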

File Access commands:

----------------------
chgrp      Change the group of a file.     e.g. ASMCMD> chgrp -R backup_users +data/*
chmod    Change permissions of a file. e.g. ASMCMD> chmod -R 600 +data/orcl
chown    Change the owner of a file.    e.g. ASMCMD> chown -R db1:backup_user +data/orcl/datafile/*
grpmod  Add or removes users from an existing user group.
lsgrp       List ASM access control list (ACL) groups.
lsusr       List users in a disk group.
mkgrp   Create a new user group.    e.g. asmcmd> mkgrp dg1 mygroup ;
mkusr    Add a user to disk group.    e.g. asmcmd> mkusr dg1 myuser ;
passwd  Change the password of a user in a disk group.
rmgrp    Delete a user group.            e.g. asmcmd> rmgrp dg1 mygroup ;
rmusr     Delete a user from a disk group.    e.g. asmcmd> rmusr dg1 myuser ;

Create a Link/Alias for an ASM file:
--------------------------------------
[Create Alias From ASM instance]
SQL> alter diskgroup ERP_DATA add alias '+ERP_DATA/FZTEST/CONTROLFILE/controlfile.ctl' for '+ERP_DATA/FZTEST/CONTROLFILE/Current.457.992888495';

[Create AliasFrom ASMCMD]
ASMCMD> mkalias +ERP_DATA/FZTEST/CONTROLFILE/Current.457.992888495  +ERP_DATA/FZTEST/CONTROLFILE/controlfile.ctl 

[Remove an Alias]
[WARNING:] To remove an alias use "rmalias", NOT "rm". Deleting an alias with the "rm" command will delete the original file it refers to as well.

ASMCMD> rmalias +ERP_DATA/FZTEST/CONTROLFILE/controlfile.ctl 

Disk Group Management commands:
-------------------------------------
chdg      Change a disk group (add, drop, or rebalance).
chkdg    Check or repair a disk group.
cp          Copy files between ASM disk groups on local/remote instances, or between an ASM diskgroup and the OS filesystem.
dropdg  Drop a disk group.    e.g. asmcmd> dropdg -r dg1
lsattr      List the attribute and attribute settings of a disk group.
lsdg       List disk groups and their information.    e.g. asmcmd> lsdg -g
lsdsk     List disks visible to ASM and their groups.
             -k option displays the TOTAL_MB, NAME, FAILGROUP, and PATH info.
             -t option displays when the disk was added to the diskgroup, when it was mounted, and how long the repair timer is.
             -s option displays read/write information.
lsgrp      List ASM access control list (ACL) groups.
md_backup   Create a backup of all of the mounted disk groups.
md_restore    Restore disk groups from a backup.
mkdg     Create a disk group.
mount    Mount a disk group.    e.g. asmcmd> mount FRA ;
offline    Offline a disk or a failure group that belongs to a disk group.
online     Online a disk or a failure group that belongs to a disk group.
rebal       Re-balance a disk group.
remap     Repair a range of physical blocks on a disk.
renamedg  Rename diskgroup.(datafiles will need to be relocated manually) 

                  e.g. ASMCMD> renamedg dgname=[olddiskgroup] newdgname=[newdiskgroup]
setattr     Set attributes in an ASM disk group.
umount  Dismount a disk group.    e.g. asmcmd> umount FRA ;

Instance Management commands:
---------------------------------
iostat       Display statistics for mounted devices.
lsct          List information about current ASM clients.
lsop         List the current operations on a disk group, database, or ASM instance.
lspwusr   List the users from an Oracle PWFILE file.
orapwusr Add, drop, or change an Oracle PWFILE user.
shutdown Shut down an ASM instance.
spget       Locate the spfile on asm.
spcopy    Copy SPFILE.
startup     Start up an ASM instance.

Using ASMCMD in Script mode:
# asmcmd rm +DATA/DBATOOLS/archivelog/2008_03_10/thread_1*
 

REFERENCES:
Complete ASM documentation for 11.2.0.3: http://docs.oracle.com/cd/E11882_01/server.112/e18951.pdf
Master Note for Automatic Storage Management (ASM) [Doc ID 1187723.1]
ASM Best Practices. [Doc ID 265633.1]

ASM Tuning by Thiru Vadivelu: http://www.orafaq.com/papers/tuning_asm.pdf
Database High Availability Best Practices | Configuring Storage: 
https://docs.oracle.com/cd/E11882_01/server.112/e10803/config_storage.htm#HABPT4790