Wednesday, March 6, 2013

Manually create an ACFS filesystem in Oracle 11gR2

You can of course use asmca or Oracle Grid Control to create an ACFS filesystem.
Sometimes it's handy to know the commands to create one by hand.
First you have to know which disks are available.
There are different ways to find this out.
For instance, in asmcmd use lsdsk --candidate -p.
I use sqlplus for this.

SQL> col PATH format a80
SQL> set lines 300
SQL> select PATH, OS_MB, TOTAL_MB from V$ASM_DISK;
PATH                                                                                  OS_MB   TOTAL_MB
-------------------------------------------------------------------------------- ---------- ----------
/dev/mapper/BESSVC02_16_00033bp2                                                      16833          0
/dev/mapper/BOXSVC02_16_0002a3p2                                                      16833          0
/dev/mapper/BESSVC02_17_00033cp1                                                      17804          0
/dev/mapper/BESSVC02_17_00033ep1                                                      17804          0
/dev/mapper/BESSVC02_17_00033dp1                                                      17804          0
/dev/mapper/BOXSVC02_17_0002a5p1                                                      17804          0
/dev/mapper/BOXSVC02_17_0002a6p1                                                      17804          0
/dev/mapper/BOXSVC02_17_0002a4p1                                                      17804          0
/dev/mapper/BESSVC02_1_00033bp1                                                         980        980
/dev/mapper/BOXSVC02_1_0002a3p1                                                         980        980
/votediskseco2/odttcl21/nfs_votedisk_odttcl21                                          1024       1024

The disks with TOTAL_MB = 0 are candidates that are not yet part of a diskgroup; the last 3 disks are already in use for the voting disks.
$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   9bf48f500c1b4fd5bf2b6dd35650081a (/dev/mapper/BOXSVC02_1_0002a3p1) [DGGRID]
 2. ONLINE   6673b45feb564f9fbfca2772012089b9 (/dev/mapper/BESSVC02_1_00033bp1) [DGGRID]
 3. ONLINE   35fc10a70e534fc7bf7931f65cf309f7 (/votediskseco2/odttcl21/nfs_votedisk_odttcl21) [DGGRID]
Located 3 voting disk(s).


First we have to create a diskgroup, and as this is a stretched cluster we
explicitly put each site's disks in its own failgroup.
SQL> CREATE DISKGROUP DGGACFS1 NORMAL REDUNDANCY
       FAILGROUP BEST DISK
        '/dev/mapper/BESSVC02_16_00033bp2' SIZE 16833 M,
        '/dev/mapper/BESSVC02_17_00033cp1' SIZE 17804 M,
        '/dev/mapper/BESSVC02_17_00033dp1' SIZE 17804 M,
        '/dev/mapper/BESSVC02_17_00033ep1' SIZE 17804 M
       FAILGROUP BOXT DISK
        '/dev/mapper/BOXSVC02_16_0002a3p2' SIZE 16833 M,
        '/dev/mapper/BOXSVC02_17_0002a4p1' SIZE 17804 M,
        '/dev/mapper/BOXSVC02_17_0002a5p1' SIZE 17804 M,
        '/dev/mapper/BOXSVC02_17_0002a6p1' SIZE 17804 M
       ATTRIBUTE 'compatible.rdbms' = '11.2.0.0.0', 'compatible.asm' = '11.2.0.0.0', 'compatible.advm' = '11.2.0.0.0'
/
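A quick sanity check on the numbers (just a sketch, using the disk sizes from the CREATE DISKGROUP statement above): with NORMAL redundancy each extent is mirrored across the two failgroups, so the usable space is roughly half the raw total, i.e. about one failgroup's worth.

```shell
#!/bin/sh
# Rough capacity of DGGACFS1 (sizes in MB, taken from the statement above).
# NORMAL redundancy mirrors every extent across the two failgroups,
# so usable space is about half of the raw total.
FG_BEST=$((16833 + 17804 + 17804 + 17804))   # failgroup BEST
FG_BOXT=$((16833 + 17804 + 17804 + 17804))   # failgroup BOXT
RAW=$((FG_BEST + FG_BOXT))
USABLE=$((RAW / 2))
echo "raw=${RAW}MB usable~${USABLE}MB"   # prints raw=140490MB usable~70245MB
```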
Now we have to create a volume. I'm going to make one of 28GB.

SQL>alter diskgroup DGGACFS1 ADD VOLUME V_ODTTCL21 size 28G;

Before we make the ACFS we need to know the name of the automatically created ADVM
volume device in /dev/asm. The device is named after the volume (lower-cased), with a number appended.

ls -altr /dev/asm
brwxrwx---  1 root asmadmin 252,      0 Mar  1 15:01 .asm_ctl_spec
brwxrwx---  1 root asmadmin 252,     10 Mar  1 15:01 .asm_ctl_vbg0
brwxrwx---  1 root asmadmin 252,     11 Mar  1 15:01 .asm_ctl_vbg1
brwxrwx---  1 root asmadmin 252,     12 Mar  1 15:01 .asm_ctl_vbg2
brwxrwx---  1 root asmadmin 252,     13 Mar  1 15:01 .asm_ctl_vbg3
brwxrwx---  1 root asmadmin 252,     14 Mar  1 15:01 .asm_ctl_vbg4
brwxrwx---  1 root asmadmin 252,     15 Mar  1 15:01 .asm_ctl_vbg5
brwxrwx---  1 root asmadmin 252,     16 Mar  1 15:01 .asm_ctl_vbg6
brwxrwx---  1 root asmadmin 252,     17 Mar  1 15:01 .asm_ctl_vbg7
brwxrwx---  1 root asmadmin 252,     18 Mar  1 15:01 .asm_ctl_vbg8
brwxrwx---  1 root asmadmin 252,      1 Mar  1 15:01 .asm_ctl_vdbg
brwxrwx---  1 root asmadmin 252,      2 Mar  1 15:01 .asm_ctl_vmb
brwxrwx---  1 root asmadmin 252, 192001 Mar  5 15:11 v_odttcl21-375

Now we create the ACFS, and after that we register the mount point.

$ /sbin/mkfs -t acfs -b 4k /dev/asm/v_odttcl21-375 -n "V_ODTTCL21"
mkfs.acfs: version                   = 11.2.0.2.0
mkfs.acfs: on-disk version           = 39.0
mkfs.acfs: volume                    = /dev/asm/v_odttcl21-375
mkfs.acfs: volume size               = 30064771072
mkfs.acfs: Format complete.

-t acfs  Specifies the file system type on Linux; acfs designates the Oracle ACFS type.
-b       Specifies the block size. The default is 4K, which is the only size supported in 11g Release 2 (11.2).
-n       Specifies the name for the file system.
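As a cross-check, the volume size that mkfs.acfs reported (30064771072 bytes) is exactly the 28G we gave to ALTER DISKGROUP ... ADD VOLUME:

```shell
#!/bin/sh
# 28G expressed in bytes matches the mkfs.acfs "volume size" line above.
echo $((28 * 1024 * 1024 * 1024))   # prints 30064771072
```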

Now we register the mount point.

$ /sbin/acfsutil registry -f -a /dev/asm/v_odttcl21-375 /u01/app/oracle/user_projects
acfsutil registry: mount point /u01/app/oracle/user_projects successfully added to Oracle Registry

-a  Adds the device, mount point, and associated mount options to the Oracle ACFS mount registry.
-f  Used in combination with -a when the specified device might already exist in the registry and the administrator wants to replace the registration.

So we have now created and registered the mount point /u01/app/oracle/user_projects.
oracle@test2008:CRS:/u01/app/oracle/user_projects
$ df -h .
Filesystem            Size  Used Avail Use% Mounted on
/dev/asm/v_odttcl21-375
                       28G  257M   28G   1% /u01/app/oracle/user_projects

And there you have it: an ACFS filesystem, created manually without the GUI.

That's it.

Silent install Oracle client 11gR2

I was working from home, busy with building a BI cluster.
This also involved installing an Oracle client.
I couldn't use the GUI because my Reflection X session wasn't working,
so I used the silent installer instead.

First, copy client_install.rsp to /tmp:

 $ cp /install/install_sw/client/64bit/11.2.0.3/client/response/client_install.rsp /tmp

Now we edit the client_install.rsp.
In our client_install.rsp I edited the following parameters:

UNIX_GROUP_NAME=dba
INVENTORY_LOCATION=/u01/app/oracle/oraInventory
ORACLE_HOME=/u01/app/oracle/product/11.2.0.3/cl_100
ORACLE_BASE=/u01/app/oracle
oracle.install.client.installType=Custom
oracle.install.client.customComponents="oracle.sqlj:11.2.0.3.0",
                                       "oracle.rdbms.util:11.2.0.3.0",
                                       "oracle.sqlplus:11.2.0.3.0",
                                       "oracle.dbjava.jdbc:11.2.0.3.0",
                                       "oracle.rdbms.oci:11.2.0.3.0",
                                       "oracle.network.client:11.2.0.3.0",
                                       "oracle.has.client:11.2.0.3.0"
oracle.installer.autoupdates.option=SKIP_UPDATES

The other parameters I left untouched.

Then I started the installation of the Oracle client.

$ ./runInstaller -silent -ignorePrereq -responseFile /tmp/client_install.rsp
Starting Oracle Universal Installer...
Checking Temp space: must be greater than 120 MB.   Actual 3175 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 3071 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2013-02-25_01-27-35PM. Please wait ...
You can find the log of this install session at:
/u01/app/oracle/oraInventory/logs/installActions2013-02-25_01-27-35PM.log
The installation of Oracle Client was successful.
Please check '/u01/app/oracle/oraInventory/logs/silentInstall2013-02-25_01-27-35PM.log' for more details.
Successfully Setup Software.


And it was as easy as that.

That's it.

Guaranteed restore point Oracle 11gR2

As of 11g we can create a restore point in the database
without having to put the database in flashback mode.

This can be very useful during a change.
You create the restore point before starting the change.
If for some reason you must roll back, you can use
the restore point to flash back the database,
which is much, much quicker than a restore of the database.

You have two choices: a normal restore point or a guaranteed
restore point.
With a normal restore point you can rewind the database
within the window set by the parameter db_flashback_retention_target;
the default is 1 day.
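For reference, db_flashback_retention_target is specified in minutes, so the one-day default corresponds to a value of 1440:

```shell
#!/bin/sh
# db_flashback_retention_target is set in minutes; the default is 1440.
MINUTES=$((24 * 60))
echo "${MINUTES} minutes = 1 day"   # prints "1440 minutes = 1 day"
```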

A guaranteed restore point never ages out of the flashback area and
does not use the parameter db_flashback_retention_target.
It will never be deleted, even if there is space pressure in the flashback area.
You must manually drop a guaranteed restore point to reclaim its space in the
flashback area.

To create a normal restore point, you must have either
SELECT ANY DICTIONARY or FLASHBACK ANY TABLE privilege.
To create a guaranteed restore point, you must have the
SYSDBA system privilege.
To view or use a restore point, you must have the
SELECT ANY DICTIONARY or FLASHBACK ANY TABLE
system privilege or the SELECT_CATALOG_ROLE role.

OK, let's start with creating a guaranteed restore point.

SQL> conn / as sysdba
SQL> create restore point demo1 guarantee flashback database;
Restore point created.

SQL> select NAME, GUARANTEE_FLASHBACK_DATABASE, SCN from v$restore_point;
NAME       GUARANTEE                            SCN
---------- --------- ------------------------------
DEMO1      YES                          21093820677
SQL> @demo1.sql
select NAME,FLASHBACK_ON from v$database
NAME       FLASHBACK_ON
---------- ------------------------------------------------------
ODBA6      RESTORE POINT ONLY
select * from v$flashback_database_logfile
Session altered.

Log No Thread No Seq No NAME                                                                    Size (KB)             First Chg No FIRST_TIME
------ --------- ------ -------------------------------------------------- ------------------------------ ------------------------ --------------------------
     1         1      1 +DGOFRA1/odba6_01/flashback/log_1.1066.809360253                        268435456           21,093,820,661 06 MAR 2013 14:17:34
     2         1      1 +DGOFRA1/odba6_01/flashback/log_2.1005.809360257                        268435456                        0
     3         2      1 +DGOFRA1/odba6_01/flashback/log_3.1025.809360259                        268435456           21,093,820,671 06 MAR 2013 14:17:40
     4         2      1 +DGOFRA1/odba6_01/flashback/log_4.472.809360263                         268435456                        0


The flashback logs are now created by the background process RVWR
and are stored in the flashback area.

Now we make some changes to the database

 SQL> @demo3.sql
Table created.

Index created.
select count(*) from aap
                      COUNT(*)
------------------------------
                        100000

Now let's flash back the database to before the change.

 SQL> flashback database to restore point demo1;
flashback database to restore point demo1
*
ERROR at line 1:
ORA-38757: Database must be mounted and not open to FLASHBACK.
As you see, the database must be in the mount state,
so let's stop the database.
SQL> !srvctl stop database -d ODBA6_01

Start one instance in mount state:
SQL> !srvctl start instance -d ODBA6_01 -i ODBA61 -o mount
SQL> conn / as sysdba

Now flash back the database to restore point demo1:
SQL> flashback database to restore point demo1;
Flashback complete.
SQL> alter database open resetlogs;
Start the second instance.
SQL> !srvctl start instance -d ODBA6_01 -i ODBA62
SQL> !srvctl status database -d odba6_01
Instance ODBA61 is running on node test2001
Instance ODBA62 is running on node test2002

SQL> select count(*) from aap;
select count(*) from aap
                     *
ERROR at line 1:
ORA-00942: table or view does not exist

We flashed the database back to the state it was in before the change.

Now we have to drop the restore point: as this is a guaranteed restore point,
you must delete it yourself or it will exist forever.

SQL> drop restore point demo1;
Restore point dropped.

You can also use RMAN for the whole process if you want.

RMAN> connect target /
connected to target database: ODBA6 (DBID=459400142)
RMAN> create restore point demo1 guarantee flashback database;
using target database control file instead of recovery catalog
RMAN> list restore point all;
SCN              RSP Time            Type       Time                Name
---------------- ------------------- ---------- ------------------- ----
21093823902                          GUARANTEED 06-03-2013:14:37:46 DEMO1

We make the change again. Just as when using
sqlplus, to flash back the database it must be in the mount state.

$ srvctl stop database -d ODBA6_01

Start one instance in mount state:
$ srvctl start instance -d ODBA6_01 -i ODBA61 -o mount

RMAN> connect target /
RMAN> flashback database to restore point DEMO1;
Starting flashback at 04-MAR-13
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=11 instance=ODBA61 device type=DISK
starting media recovery
media recovery complete, elapsed time: 00:00:04
Finished flashback at  04-MAR-13
RMAN> alter database open resetlogs;
database opened
RMAN> list restore point all;
SCN              RSP Time            Type       Time                Name
---------------- ------------------- ---------- ------------------- ----
21093823902                          GUARANTEED 06-03-2013:14:37:46 DEMO1
OK, now we drop the restore point.

RMAN> drop restore point demo1;
That's it.

Add node to a RAC cluster

In the previous post I explained how to delete a node.
Now I'm going to add a node to an existing cluster.
First we have to check the configuration.
We do this by comparing the new node with an existing node,
as user oracle (it can also be user grid, if you installed the software as grid).

$ cluvfy comp peer -n test1001 -refnode test1002 -r 11gr2
Verifying peer compatibility
Checking peer compatibility...
Compatibility check: Physical memory [reference node: test1002]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  test1001      12.5GB (1.31072E7KB)      12.5GB (1.31072E7KB)      matched
Physical memory check passed
Compatibility check: Available memory [reference node: test1002]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  test1001      11.1693GB (1.17119E7KB)   11.153GB (1.1694784E7KB)  mismatched
Available memory check failed
Compatibility check: Swap space [reference node: test1002]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  test1001      3GB (3145720.0KB)         3GB (3145720.0KB)         matched
Swap space check passed
Compatibility check: Free disk space for "/tmp" [reference node: test1002]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  test1001      3.3984GB (3563520.0KB)    3.3867GB (3551232.0KB)    mismatched
Free disk space check failed
Compatibility check: User existence for "root" [reference node: test1002]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  test1001      root(0)                   root(0)                   matched
User existence for "root" check passed
Compatibility check: Group existence for "oinstall" [reference node: test1002]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  test1001      oinstall(54321)           oinstall(54321)           matched
Group existence for "oinstall" check passed
Compatibility check: Group existence for "dba" [reference node: test1002]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  test1001      dba(4006)                 dba(4006)                 matched
Group existence for "dba" check passed
Compatibility check: Group membership for "root" in "oinstall (Primary)" [reference node: test1002]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  test1001      no                        no                        matched
Group membership for "root" in "oinstall (Primary)" check passed
Compatibility check: Group membership for "root" in "dba" [reference node: test1002]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  test1001      no                        no                        matched
Group membership for "root" in "dba" check passed
Compatibility check: Run level [reference node: test1002]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  test1001      3                         3                         matched
Run level check passed
Compatibility check: System architecture [reference node: test1002]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  test1001      x86_64                    x86_64                    matched
System architecture check passed
Compatibility check: Kernel version [reference node: test1002]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  test1001      2.6.18-308.16.1.0.1.el5xen  2.6.18-308.16.1.0.1.el5xen  matched
Kernel version check passed
Compatibility check: Kernel param "semmsl" [reference node: test1002]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  test1001      250                       250                       matched
Kernel param "semmsl" check passed
Compatibility check: Kernel param "semmns" [reference node: test1002]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  test1001      32000                     32000                     matched
Kernel param "semmns" check passed
Compatibility check: Kernel param "semopm" [reference node: test1002]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  test1001      100                       100                       matched
Kernel param "semopm" check passed
Compatibility check: Kernel param "semmni" [reference node: test1002]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  test1001      142                       142                       matched
Kernel param "semmni" check passed
Compatibility check: Kernel param "shmmax" [reference node: test1002]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  test1001      4398046511104             4398046511104             matched
Kernel param "shmmax" check passed
Compatibility check: Kernel param "shmmni" [reference node: test1002]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  test1001      4096                      4096                      matched
Kernel param "shmmni" check passed
Compatibility check: Kernel param "shmall" [reference node: test1002]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  test1001      1073741824                1073741824                matched
Kernel param "shmall" check passed
Compatibility check: Kernel param "file-max" [reference node: test1002]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  test1001      6815744                   6815744                   matched
Kernel param "file-max" check passed
Compatibility check: Kernel param "ip_local_port_range" [reference node: test1002]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  test1001      9000 65500                9000 65500                matched
Kernel param "ip_local_port_range" check passed
Compatibility check: Kernel param "rmem_default" [reference node: test1002]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  test1001      262144                    262144                    matched
Kernel param "rmem_default" check passed
Compatibility check: Kernel param "rmem_max" [reference node: test1002]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  test1001      4194304                   4194304                   matched
Kernel param "rmem_max" check passed
Compatibility check: Kernel param "wmem_default" [reference node: test1002]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  test1001      262144                    262144                    matched
Kernel param "wmem_default" check passed
Compatibility check: Kernel param "wmem_max" [reference node: test1002]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  test1001      1048576                   1048576                   matched
Kernel param "wmem_max" check passed
Compatibility check: Kernel param "aio-max-nr" [reference node: test1002]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  test1001      3145728                   3145728                   matched
Kernel param "aio-max-nr" check passed
Compatibility check: Package existence for "make" [reference node: test1002]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  test1001      make-3.81-3.el5           make-3.81-3.el5           matched
Package existence for "make" check passed
Compatibility check: Package existence for "binutils" [reference node: test1002]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  test1001      binutils-2.17.50.0.6-20.el5_8.3  binutils-2.17.50.0.6-20.el5_8.3  matched
Package existence for "binutils" check passed

Compatibility check: Package existence for "gcc (x86_64)" [reference node: test1002]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  test1001      gcc-4.1.2-52.el5_8.1 (x86_64)  gcc-4.1.2-52.el5_8.1 (x86_64)  matched
Package existence for "gcc (x86_64)" check passed
Compatibility check: Package existence for "libaio (x86_64)" [reference node: test1002]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  test1001      libaio-0.3.106-5 (x86_64),libaio-0.3.106-5 (i386)  libaio-0.3.106-5 (x86_64),libaio-0.3.106-5 (i386)  matched
Package existence for "libaio (x86_64)" check passed
Compatibility check: Package existence for "glibc (x86_64)" [reference node: test1002]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  test1001      glibc-2.5-81.el5_8.7 (x86_64),glibc-2.5-81.el5_8.7 (i686)  glibc-2.5-81.el5_8.7 (x86_64),glibc-2.5-81.el5_8.7 (i686)  matched
Package existence for "glibc (x86_64)" check passed
Compatibility check: Package existence for "compat-libstdc++-33 (x86_64)" [reference node: test1002]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  test1001      compat-libstdc++-33-3.2.3-61 (x86_64),compat-libstdc++-33-3.2.3-61 (i386)  compat-libstdc++-33-3.2.3-61 (x86_64),compat-libstdc++-33-3.2.3-61 (i386)  matched
Package existence for "compat-libstdc++-33 (x86_64)" check passed
Compatibility check: Package existence for "elfutils-libelf (x86_64)" [reference node: test1002]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  test1001      elfutils-libelf-0.137-3.el5 (x86_64)  elfutils-libelf-0.137-3.el5 (x86_64)  matched
Package existence for "elfutils-libelf (x86_64)" check passed
Compatibility check: Package existence for "elfutils-libelf-devel" [reference node: test1002]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  test1001      elfutils-libelf-devel-0.137-3.el5  elfutils-libelf-devel-0.137-3.el5  matched
Package existence for "elfutils-libelf-devel" check passed
Compatibility check: Package existence for "glibc-common" [reference node: test1002]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  test1001      glibc-common-2.5-81.el5_8.7  glibc-common-2.5-81.el5_8.7  matched
Package existence for "glibc-common" check passed

Compatibility check: Package existence for "glibc-devel (x86_64)" [reference node: test1002]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  test1001      glibc-devel-2.5-81.el5_8.7 (x86_64),glibc-devel-2.5-81.el5_8.7 (i386)  glibc-devel-2.5-81.el5_8.7 (x86_64),glibc-devel-2.5-81.el5_8.7 (i386)  matched
Package existence for "glibc-devel (x86_64)" check passed
Compatibility check: Package existence for "glibc-headers" [reference node: test1002]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  test1001      glibc-headers-2.5-81.el5_8.7  glibc-headers-2.5-81.el5_8.7  matched
Package existence for "glibc-headers" check passed
Compatibility check: Package existence for "gcc-c++ (x86_64)" [reference node: test1002]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  test1001      gcc-c++-4.1.2-52.el5_8.1 (x86_64)  gcc-c++-4.1.2-52.el5_8.1 (x86_64)  matched
Package existence for "gcc-c++ (x86_64)" check passed
Compatibility check: Package existence for "libaio-devel (x86_64)" [reference node: test1002]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  test1001      libaio-devel-0.3.106-5 (i386),libaio-devel-0.3.106-5 (x86_64)  libaio-devel-0.3.106-5 (i386),libaio-devel-0.3.106-5 (x86_64)  matched
Package existence for "libaio-devel (x86_64)" check passed
Compatibility check: Package existence for "libgcc (x86_64)" [reference node: test1002]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  test1001      libgcc-4.1.2-52.el5_8.1 (x86_64),libgcc-4.1.2-52.el5_8.1 (i386)  libgcc-4.1.2-52.el5_8.1 (x86_64),libgcc-4.1.2-52.el5_8.1 (i386)  matched
Package existence for "libgcc (x86_64)" check passed
Compatibility check: Package existence for "libstdc++ (x86_64)" [reference node: test1002]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  test1001      libstdc++-4.1.2-52.el5_8.1 (x86_64),libstdc++-4.1.2-52.el5_8.1 (i386)  libstdc++-4.1.2-52.el5_8.1 (x86_64),libstdc++-4.1.2-52.el5_8.1 (i386)  matched
Package existence for "libstdc++ (x86_64)" check passed
Compatibility check: Package existence for "libstdc++-devel (x86_64)" [reference node: test1002]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  test1001      libstdc++-devel-4.1.2-52.el5_8.1 (x86_64)  libstdc++-devel-4.1.2-52.el5_8.1 (x86_64)  matched
Package existence for "libstdc++-devel (x86_64)" check passed

Compatibility check: Package existence for "sysstat" [reference node: test1002]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  test1001      sysstat-7.0.2-11.el5      sysstat-7.0.2-11.el5      matched
Package existence for "sysstat" check passed
Compatibility check: Package existence for "ksh" [reference node: test1002]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  test1001      ksh-20100621-5.el5_8.1    ksh-20100621-5.el5_8.1    matched
Package existence for "ksh" check passed
Verification of peer compatibility was unsuccessful.
Checks did not pass for the following node(s):
        test1001

You see that the validation was unsuccessful.
This was due to the available memory and free disk space checks:
server test1001 is up and running, so those values never match exactly.
Everything else matched, so we can go on.
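With output this long it's easy to miss a failed check. A quick filter over a saved copy of the cluvfy output helps; here a two-line sample (hypothetical file path) stands in for the real report:

```shell
#!/bin/sh
# Filter a saved cluvfy report for failed checks only.
# /tmp/cluvfy.out is a hypothetical file; a two-line sample stands in
# for the much longer output above.
cat > /tmp/cluvfy.out <<'EOF'
Available memory check failed
Swap space check passed
EOF
grep -c 'check failed' /tmp/cluvfy.out   # prints 1
```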

You can run the next check to see if we can add the node.

cluvfy stage -pre nodeadd -n test1001 -fixup -verbose

It must end like this:

Pre-check for node addition was successful on all the nodes

Now we're going to add the node to the cluster.
addNode.sh starts with its own pre-checks; if you want to skip these, you can set the following variable:
$ export IGNORE_PREADDNODE_CHECKS=Y
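One shell gotcha with the CLUSTER_NEW_NODES={...} syntax used by addNode.sh: with a single node the unquoted braces pass through untouched, but as soon as you list several nodes, bash brace expansion rewrites the argument before the script ever sees it, so quote it. A small demonstration (node names hypothetical):

```shell
#!/bin/bash
# Unquoted: bash brace expansion splits the argument into two words,
#   CLUSTER_NEW_NODES=test1001
#   CLUSTER_NEW_NODES=test1005
printf '%s\n' CLUSTER_NEW_NODES={test1001,test1005}
# Quoted: the argument reaches the installer intact.
printf '%s\n' "CLUSTER_NEW_NODES={test1001,test1005}"
```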

$ ./addNode.sh -silent CLUSTER_NEW_NODES={test1001} CLUSTER_NEW_VIRTUAL_HOSTNAMES={test1001-vip}
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB.   Actual 3071 MB    Passed
Oracle Universal Installer, Version 11.2.0.2.0 Production
Copyright (C) 1999, 2010, Oracle. All rights reserved.

Performing tests to see whether nodes test1002,test1003,test1004 are available
............................................................... 100% Done.
..
-----------------------------------------------------------------------------
Cluster Node Addition Summary
Global Settings
   Source: /u01/app/grid/11.2.0.2
   New Nodes
Space Requirements
   New Nodes
      test1001
         /u01: Required 13.08GB : Available 24.32GB
Installed Products
   Product Names
      Oracle Grid Infrastructure 11.2.0.2.0
      Sun JDK 1.5.0.24.08
      Installer SDK Component 11.2.0.2.0
      Oracle One-Off Patch Installer 11.2.0.0.2
      Oracle Universal Installer 11.2.0.2.0
      Oracle USM Deconfiguration 11.2.0.2.0
      Oracle Configuration Manager Deconfiguration 10.3.1.0.0
      Enterprise Manager Common Core Files 10.2.0.4.3
      Oracle DBCA Deconfiguration 11.2.0.2.0
      Oracle RAC Deconfiguration 11.2.0.2.0
      Oracle Quality of Service Management (Server) 11.2.0.2.0
      Installation Plugin Files 11.2.0.2.0
      Universal Storage Manager Files 11.2.0.2.0
      Oracle Text Required Support Files 11.2.0.2.0
      Automatic Storage Management Assistant 11.2.0.2.0
      Oracle Database 11g Multimedia Files 11.2.0.2.0
      Oracle Multimedia Java Advanced Imaging 11.2.0.2.0
      Oracle Globalization Support 11.2.0.2.0
      Oracle Multimedia Locator RDBMS Files 11.2.0.2.0
      Oracle Core Required Support Files 11.2.0.2.0
      Bali Share 1.1.18.0.0
      Oracle Database Deconfiguration 11.2.0.2.0
      Oracle Quality of Service Management (Client) 11.2.0.2.0
      Expat libraries 2.0.1.0.1
      Oracle Containers for Java 11.2.0.2.0
      Perl Modules 5.10.0.0.1
      Secure Socket Layer 11.2.0.2.0
      Oracle JDBC/OCI Instant Client 11.2.0.2.0
      Oracle Multimedia Client Option 11.2.0.2.0
      LDAP Required Support Files 11.2.0.2.0
      Character Set Migration Utility 11.2.0.2.0
      Perl Interpreter 5.10.0.0.1
      PL/SQL Embedded Gateway 11.2.0.2.0
      OLAP SQL Scripts 11.2.0.2.0
      Database SQL Scripts 11.2.0.2.0
      Oracle Extended Windowing Toolkit 3.4.47.0.0
      SSL Required Support Files for InstantClient 11.2.0.2.0
      SQL*Plus Files for Instant Client 11.2.0.2.0
      Oracle Net Required Support Files 11.2.0.2.0
      Oracle Database User Interface 2.2.13.0.0
      RDBMS Required Support Files for Instant Client 11.2.0.2.0
      RDBMS Required Support Files Runtime 11.2.0.2.0
      XML Parser for Java 11.2.0.2.0
      Oracle Security Developer Tools 11.2.0.2.0
      Oracle Wallet Manager 11.2.0.2.0
      Enterprise Manager plugin Common Files 11.2.0.2.0
      Platform Required Support Files 11.2.0.2.0
      Oracle JFC Extended Windowing Toolkit 4.2.36.0.0
      RDBMS Required Support Files 11.2.0.2.0
      Oracle Ice Browser 5.2.3.6.0
      Oracle Help For Java 4.2.9.0.0
      Enterprise Manager Common Files 10.2.0.4.3
      Deinstallation Tool 11.2.0.2.0
      Oracle Java Client 11.2.0.2.0
      Cluster Verification Utility Files 11.2.0.2.0
      Oracle Notification Service (eONS) 11.2.0.2.0
      Oracle LDAP administration 11.2.0.2.0
      Cluster Verification Utility Common Files 11.2.0.2.0
      Oracle Clusterware RDBMS Files 11.2.0.2.0
      Oracle Locale Builder 11.2.0.2.0
      Oracle Globalization Support 11.2.0.2.0
      Buildtools Common Files 11.2.0.2.0
      Oracle RAC Required Support Files-HAS 11.2.0.2.0
      SQL*Plus Required Support Files 11.2.0.2.0
      XDK Required Support Files 11.2.0.2.0
      Agent Required Support Files 10.2.0.4.3
      Parser Generator Required Support Files 11.2.0.2.0
      Precompiler Required Support Files 11.2.0.2.0
      Installation Common Files 11.2.0.2.0
      Required Support Files 11.2.0.2.0
      Oracle JDBC/THIN Interfaces 11.2.0.2.0
      Oracle Multimedia Locator 11.2.0.2.0
      Oracle Multimedia 11.2.0.2.0
      HAS Common Files 11.2.0.2.0
      Assistant Common Files 11.2.0.2.0
      PL/SQL 11.2.0.2.0
      HAS Files for DB 11.2.0.2.0
      Oracle Recovery Manager 11.2.0.2.0
      Oracle Database Utilities 11.2.0.2.0
      Oracle Notification Service 11.2.0.2.0
      SQL*Plus 11.2.0.2.0
      Oracle Netca Client 11.2.0.2.0
      Oracle Net 11.2.0.2.0
      Oracle JVM 11.2.0.2.0
      Oracle Internet Directory Client 11.2.0.2.0
      Oracle Net Listener 11.2.0.2.0
      Cluster Ready Services Files 11.2.0.2.0
      Oracle Database 11g 11.2.0.2.0
-----------------------------------------------------------------------------

Instantiating scripts for add node (Friday, March 1, 2013 2:27:47 PM CET)
.                                                                 1% Done.
Instantiation of add node scripts complete
Copying to remote nodes (Friday, March 1, 2013 2:27:55 PM CET)
..........................................................................
...............................................................................................                                 96% Done.
Home copied to new nodes
Saving inventory on nodes (Friday, March 1, 2013 2:38:18 PM CET)
.                                                               100% Done.
Save inventory complete
WARNING:
The following configuration scripts need to be executed as the "root" user in each cluster node.
/u01/app/grid/11.2.0.2/root.sh #On nodes test1001
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts in each cluster node
The Cluster Node Addition of /u01/app/grid/11.2.0.2 was successful.
Please check '/tmp/silentInstall.log' for more details.

After you run root.sh on test1001, you are finished.

$ crsstat
Name                                          Type            Target     State      Host
----------------------------------------------------------------------------------------------
ora.DGGACFS1.dg                               application     ONLINE     ONLINE     test1001
ora.DGGRID.dg                                 application     ONLINE     ONLINE     test1001
ora.LISTENER.lsnr                             application     ONLINE     ONLINE     test1001
ora.LISTENER_SCAN1.lsnr                       application     OFFLINE    OFFLINE
ora.asm                                       application     ONLINE     ONLINE     test1001
ora.cvu                                       application     ONLINE     ONLINE     test1004
ora.gsd                                       application     OFFLINE    OFFLINE
ora.net1.network                              application     ONLINE     ONLINE     test1001
ora.oc4j                                      application     OFFLINE    OFFLINE
ora.test1001.ASM1.asm                         application     ONLINE     ONLINE     test1001
ora.test1001.LISTENER_OESV7802.lsnr           application     ONLINE     ONLINE     test1001
ora.test1001.gsd                              application     OFFLINE    OFFLINE
ora.test1001.ons                              application     ONLINE     ONLINE     test1001
ora.test1001.vip                              application     ONLINE     ONLINE     test1001
ora.test1002.ASM2.asm                         application     ONLINE     ONLINE     test1002
ora.test1002.LISTENER_OESV7803.lsnr           application     ONLINE     ONLINE     test1002
ora.test1002.gsd                              application     OFFLINE    OFFLINE
ora.test1002.ons                              application     ONLINE     ONLINE     test1002
ora.test1002.vip                              application     ONLINE     ONLINE     test1002
ora.test1003.ASM3.asm                         application     ONLINE     ONLINE     test1003
ora.test1003.LISTENER_OESV7902.lsnr           application     ONLINE     ONLINE     test1003
ora.test1003.gsd                              application     OFFLINE    OFFLINE
ora.test1003.ons                              application     ONLINE     ONLINE     test1003
ora.test1003.vip                              application     ONLINE     ONLINE     test1003
ora.test1004.ASM4.asm                         application     ONLINE     ONLINE     test1004
ora.test1004.LISTENER_OESV7903.lsnr           application     ONLINE     ONLINE     test1004
ora.test1004.gsd                              application     OFFLINE    OFFLINE
ora.test1004.ons                              application     ONLINE     ONLINE     test1004
ora.test1004.vip                              application     ONLINE     ONLINE     test1004
ora.ons                                       application     ONLINE     ONLINE     test1001
ora.registry.acfs                             application     ONLINE     ONLINE     test1001
ora.scan1.vip                                 application     OFFLINE    OFFLINE

That's it.

Delete a node from a RAC cluster

I had created a four-node RAC cluster.
Everything was running fine until I got a call that one of the nodes
was down. After checking, it turned out the /u01 filesystem was corrupt.
There was no other solution than to delete this node from the cluster,
rebuild the server, and add the node back to the cluster.

First we have to delete the node from the cluster.

$ ./olsnodes -s -n
test1001         1       Inactive
test1002         2       Active
test1003         3       Active
test1004         4       Active

As you can see, test1001 is inactive, which is correct since it's down.
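If you need to detect the failed node in a script, the status column of that output is easy to parse. This is a sketch; the hard-coded sample stands in for real `olsnodes -s -n` output, as noted in the comment.

```shell
# Sketch: find inactive nodes from "olsnodes -s -n" style output.
# In a real cluster you would capture it with:
#   olsnodes_output=$(olsnodes -s -n)
olsnodes_output='test1001         1       Inactive
test1002         2       Active
test1003         3       Active
test1004         4       Active'

# Column 3 holds the status; print the node name (column 1) when inactive.
inactive_nodes=$(printf '%s\n' "$olsnodes_output" | awk '$3 == "Inactive" {print $1}')
echo "Inactive: $inactive_nodes"
```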

Now we delete the node from the clusterware configuration,
as user root, from one of the remaining nodes:

$ ./crsctl delete node -n test1001
CRS-4661: Node test1001 successfully deleted.

Now we have one more thing to do, and that is update the inventory.
As the oracle user:

./runInstaller -updateNodeList ORACLE_HOME=/u01/app/grid/11.2.0.2 "CLUSTER_NODES={test1002,test1003,test1004}" CRS=true
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB.   Actual 3071 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oracle/oraInventory
'UpdateNodeList' was successful.

The CLUSTER_NODES parameter contains the remaining nodes.
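Instead of typing the remaining node list by hand, you could derive it from the `olsnodes` output. A minimal sketch, again using sample data in place of the real `olsnodes -s -n` output:

```shell
# Sketch: build the CLUSTER_NODES value from the remaining (Active) nodes.
# Real capture would be: olsnodes_output=$(olsnodes -s -n)
olsnodes_output='test1002        2       Active
test1003        3       Active
test1004        4       Active'

# Join the active node names with commas.
remaining=$(printf '%s\n' "$olsnodes_output" |
  awk '$3 == "Active" { list = list ? list "," $1 : $1 } END { print list }')
echo "CLUSTER_NODES={$remaining}"

# Which you would then pass to the inventory update, e.g.:
# ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/grid/11.2.0.2 "CLUSTER_NODES={$remaining}" CRS=true
```

This avoids typos in the node list when the cluster has more than a handful of nodes.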

$ ./olsnodes -s -n
test1002        2       Active
test1003        3       Active
test1004        4       Active

That's it.

Reminder:
this is a short version of deleting a node from a cluster,
since in my case the node didn't exist anymore.
If the node still exists, you have to perform some more steps.
A good blog post about this is http://blog.grid-it.nl/index.php/2011/04/05/deleting-a-node-from-the-cluster-in-11gr2/