Posts tagged with exadata

No DISK_REPAIR_TIME on Exadata Cells

Starting with version 11.2.1.3.1, Exadata Cells use Pro-Active Disk Quarantine to override any setting of DISK_REPAIR_TIME. This and some other topics related to ASM mirroring on Exadata Storage Servers are explained in a recent posting by my dear colleague Joel Goodman. Even if you are familiar with ASM in non-Exadata environments, you may not have used ASM redundancy yet and can therefore benefit from his explanations about it.

Addendum: Maybe the headline is a little misleading, as I just became aware. DISK_REPAIR_TIME set on an ASM Diskgroup that is built upon Exadata Storage Cells is still in use and valid. It just refers not to the disk level (Griddisk on Exadata) but to the Cell level.

In other words: If a physical disk inside a Cell gets damaged, the Griddisks built upon this damaged disk are dropped from the ASM Diskgroups immediately, without waiting for DISK_REPAIR_TIME, due to Pro-Active Disk Quarantine. But if a whole Cell goes offline (because of a reboot of that Storage Server, for example), the dependent ASM disks are not dropped from the respective Diskgroups for the duration of DISK_REPAIR_TIME.
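For completeness, DISK_REPAIR_TIME is a Diskgroup attribute. A minimal sketch of checking it and raising it before a planned Cell maintenance follows; the Diskgroup name DATA and the value 8h are just examples:

SQL> select g.name diskgroup, a.value repair_time
  2  from v$asm_diskgroup g
  3  join v$asm_attribute a on g.group_number = a.group_number
  4  where a.name = 'disk_repair_time';

SQL> alter diskgroup data set attribute 'disk_repair_time' = '8h';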



CELL_PARTITION_LARGE_EXTENTS now obsolete

During the Exadata course that I am currently delivering in Munich, I noticed that the fairly new parameter CELL_PARTITION_LARGE_EXTENTS has already been made obsolete:

SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
PL/SQL Release 11.2.0.2.0 - Production
CORE    11.2.0.2.0      Production
TNS for Linux: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production

SQL> alter system set cell_partition_large_extents=true;
alter system set cell_partition_large_extents=true
*
ERROR at line 1:
ORA-25138: CELL_PARTITION_LARGE_EXTENTS initialization parameter has been made
obsolete

This parameter was introduced in 11.2.0.1 especially for the Exadata Database Machine, because the recommended Allocation Unit size (AU_SIZE) for Diskgroups built upon Exadata Cells is 4 MB. Large segments should therefore use a multiple of 4 MB already for their initial extents. Although the parameter was made obsolete, the effect that was achievable with it is still present:
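For reference, the 4 MB Allocation Unit size mentioned above is specified when the Diskgroup is created; a sketch, where the Diskgroup name and the disk string wildcard are just examples:

SQL> create diskgroup data normal redundancy
  2  disk 'o/*/data_CD_*'
  3  attribute 'au_size' = '4M',
  4            'compatible.asm' = '11.2.0.0.0';

The current value can afterwards be verified in V$ASM_DISKGROUP:

SQL> select name, allocation_unit_size from v$asm_diskgroup;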

SQL> create table t (n number) partition by range (n) (partition p1 values less than (2));

Table created.

SQL> insert into t values (1);

1 row created.

SQL> select bytes/1024/1024 as mb from user_extents where segment_name='T';

 MB
----------
 8

I inserted a row before checking USER_EXTENTS because of the 11g new feature deferred segment creation:

SQL> drop table t purge;

Table dropped.

SQL> create table t (n number);

Table created.

SQL> select bytes/1024/1024 as mb from user_extents where segment_name='T';

no rows selected

SQL> insert into t values (1);

1 row created.

SQL> select bytes/1024/1024 as mb from user_extents where segment_name='T';

 MB
----------
 .0625

Notice that only partitioned tables are affected by the 8 MB initial extent behavior. The new hidden parameter _PARTITION_LARGE_EXTENTS (which defaults to TRUE!) is now responsible for that:
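The current value and default state of the hidden parameter can be checked with a query on the x$ tables, connected as SYS; a sketch:

SQL> select i.ksppinm name, v.ksppstvl value, v.ksppstdf is_default
  2  from x$ksppi i join x$ksppcv v on i.indx = v.indx
  3  where i.ksppinm = '_partition_large_extents';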

SQL> alter session set "_partition_large_extents"=false;

Session altered.

SQL> drop table t purge;

Table dropped.

SQL> create table t (n number) partition by range (n) (partition p1 values less than (2));

Table created.

SQL> insert into t values (1);

1 row created.

SQL> select bytes/1024/1024 as mb from user_extents where segment_name='T';

 MB
----------
 .0625

Notice that setting CELL_PARTITION_LARGE_EXTENTS with ALTER SESSION is silently overridden by the underscore parameter:

SQL> drop table t purge;

Table dropped.

SQL> alter session set cell_partition_large_extents=true;

Session altered.

SQL> create table t (n number) partition by range (n) (partition p1 values less than (2));

Table created.

SQL> insert into t values (1);

1 row created.

SQL> select bytes/1024/1024 as mb from user_extents where segment_name='T';

 MB
----------
 .0625

The setting of the underscore parameter was still FALSE.

SQL> drop table t purge;

Table dropped.

SQL> alter session set "_partition_large_extents"=true;

Session altered.

SQL> alter session set cell_partition_large_extents=false;

Session altered.

SQL> create table t (n number) partition by range (n) (partition p1 values less than (2));

Table created.

SQL> insert into t values (1);

1 row created.

SQL> select bytes/1024/1024 as mb from user_extents where segment_name='T';

 MB
----------
 8

Conclusion: With 11.2.0.2, partitioned tables get initial extents of 8 MB in size, which is particularly useful in Exadata environments, where the ASM AU_SIZE will be 4 MB. But ordinary databases are affected as well, which is probably a good thing if we assume that partitioned tables will be large anyway and will therefore benefit from a large initial extent size, too.

Addendum: During my present Exadata course (03-JUN-2012), I also came across a similar parameter for partitioned indexes: _INDEX_PARTITION_LARGE_EXTENTS. It defaults to FALSE, though. A brief test:

SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
PL/SQL Release 11.2.0.2.0 - Production
CORE    11.2.0.2.0      Production
TNS for Linux: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production

SQL> create table parti (n number) partition by range (n) (partition p1 values less than (2));

Table created.

SQL> insert into parti values (1);

1 row created.

SQL> commit;

Commit complete.

SQL> create index parti_idx on parti(n) local;

Index created.

SQL> select bytes from user_extents where segment_name='PARTI_IDX';

     BYTES
----------
     65536

SQL> drop index parti_idx;

Index dropped.

SQL> alter session set "_index_partition_large_extents"=true;

Session altered.

SQL> create index parti_idx on parti(n) local;

Index created.

SQL> select bytes from user_extents where segment_name='PARTI_IDX';

     BYTES
----------
   8388608

So this parameter gives us 8 MB sized initial extents for partitioned indexes as well, but not by default.



Exadata Part VI: Cell Administration with dcli

An Exadata Database Machine X2-2 Full Rack comes with 8 Database Nodes and 14 Storage Servers (Cells). Cell administration can be done on the command line as the user celladmin with CellCLI, as shown in previous postings, but it would be tedious to repeat every command 14 times. Therefore, dcli was introduced, enabling us to control multiple Cells with a single command. It comes with a nice help switch:

[celladmin@cell1 ~]$ dcli -h

Distributed Shell for Oracle Storage

This script executes commands on multiple cells in parallel threads.
The cells are referenced by their domain name or ip address.
Local files can be copied to cells and executed on cells.
This tool does not support interactive sessions with host applications.
Use of this tool assumes ssh is running on local host and cells.
The -k option should be used initially to perform key exchange with
cells.  User may be prompted to acknowledge cell authenticity, and
may be prompted for the remote user password.  This -k step is serialized
to prevent overlayed prompts.  After -k option is used once, then
subsequent commands to the same cells do not require -k and will not require
passwords for that user from the host.
Command output (stdout and stderr) is collected and displayed after the
copy and command execution has finished on all cells.
Options allow this command output to be abbreviated.

Return values:
 0 -- file or command was copied and executed successfully on all cells
 1 -- one or more cells could not be reached or remote execution
 returned non-zero status.
 2 -- An error prevented any command execution

Examples:
 dcli -g mycells -k
 dcli -c stsd2s2,stsd2s3 vmstat
 dcli -g mycells cellcli -e alter iormplan active
 dcli -g mycells -x reConfig.scl

usage: dcli [options] [command]

options:
 --version           show program's version number and exit
 -c CELLS            comma-separated list of cells
 -d DESTFILE         destination directory or file
 -f FILE             file to be copied
 -g GROUPFILE        file containing list of cells
 -h, --help          show help message and exit
 -k                  push ssh key to cell's authorized_keys file
 -l USERID           user to login as on remote cells (default: celladmin)
 -n                  abbreviate non-error output
 -r REGEXP           abbreviate output lines matching a regular expression
 -s SSHOPTIONS       string of options passed through to ssh
 --scp=SCPOPTIONS    string of options passed through to scp if different
 from sshoptions
 --serial            serialize execution over the cells
 -t                  list target cells
 --unkey             drop keys from target cells' authorized_keys file
 -v                  print extra messages to stdout
 --vmstat=VMSTATOPS  vmstat command options
 -x EXECFILE         file to be copied and executed

We will look at some of the most useful commands (in my view) in the following. First, we need to set up user equivalence for the celladmin user to all Cells. If the directory .ssh does not exist yet, we need to create it first on all the Cells. On my demo machine, I have only 2 Cells:

[celladmin@cell1 ~]$ mkdir ~/.ssh
[celladmin@cell1 ~]$ chmod 700 ~/.ssh
[celladmin@cell2 ~]$ mkdir ~/.ssh
[celladmin@cell2 ~]$ chmod 700 ~/.ssh

Very useful is the -g switch, which points to a text file containing all Cells or a group of Cells (host name aliases or IP addresses):

[celladmin@cell1 ~]$ cat cells.txt
cell1
cell2

Establishing user equivalence is made easy by the -k switch:

[celladmin@cell1 ~]$ dcli -k -g cells.txt
Error: Neither RSA nor DSA keys have been generated for current user.
Run 'ssh-keygen -t dsa' to generate an ssh key pair.

Well, I just do what I’ve been told:

[celladmin@cell1 ~]$ ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/celladmin/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/celladmin/.ssh/id_dsa.
Your public key has been saved in /home/celladmin/.ssh/id_dsa.pub.
The key fingerprint is:
e6:25:1f:2f:22:a9:5c:ec:e4:98:64:67:91:60:ce:9d celladmin@cell1.example.com

Second try:

[celladmin@cell1 ~]$ dcli -k -g cells.txt
The authenticity of host 'cell1 (127.0.0.1)' can't be established.
RSA key fingerprint is 99:86:a5:3f:f1:98:75:53:e8:92:fc:7d:fd:4d:aa:45.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'cell1' (RSA) to the list of known hosts.
celladmin@cell1's password:
The authenticity of host 'cell2 (192.168.56.103)' can't be established.
RSA key fingerprint is 99:86:a5:3f:f1:98:75:53:e8:92:fc:7d:fd:4d:aa:45.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'cell2,192.168.56.103' (RSA) to the list of known hosts.
celladmin@cell2's password:
cell1: ssh key added
cell2: ssh key added

Now just testing the user equivalence:

[celladmin@cell1 ~]$ ssh cell1 date
Tue Mar 22 03:21:48 PDT 2011
[celladmin@cell1 ~]$ ssh cell2 date
Tue Mar 22 03:21:53 PDT 2011

Works! Now wasn’t that easy? Our first multi-Cell command is of the simplest form:

[celladmin@cell1 ~]$ dcli -g cells.txt "cellcli -e list cell"
cell1: cell1     online
cell2: cell2     online

The next command validates the configuration of all the Cells. Notice the success message (return code 0) each one returns:

[celladmin@cell1 ~]$ dcli -g cells.txt "cellcli -e alter cell validate configuration"
cell1: Cell cell1 successfully altered
cell2: Cell cell2 successfully altered

We can suppress those normal messages, because we may only want to see if something is not normal. The -n switch does that:

[celladmin@cell1 ~]$ dcli -g cells.txt -n "cellcli -e alter cell validate configuration"
OK: ['cell1', 'cell2']

The -x switch copies a text file to all the Cells and executes it. The suffix needs to be .scl (with a lowercase L) for it to be accepted as a CellCLI batch file; otherwise it is assumed to contain OS commands. For some odd reason, we need to make it OS executable in both cases:

[celladmin@cell1 ~]$ cat mycommands.scl
list cell
list flashcache
[celladmin@cell1 ~]$ dcli -g cells.txt -x mycommands.scl
Error: Exec file does not have owner execute permissions
[celladmin@cell1 ~]$ chmod 770 *.scl
[celladmin@cell1 ~]$ dcli -g cells.txt -x mycommands.scl
cell1: cell1     online
cell1: cell1_FLASHCACHE  normal
cell2: cell2     online
cell2: cell2_FLASHCACHE  normal
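As listed in the help text above, dcli can also just copy a file to all the Cells without executing it, using the -f and -d switches. An untested sketch; the destination directory is just an example:

[celladmin@cell1 ~]$ dcli -g cells.txt -f mycommands.scl -d /home/celladmin
[celladmin@cell1 ~]$ dcli -g cells.txt "ls mycommands.scl"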

The -r switch suppresses output lines that match a regular expression. Suppose I want to suppress the Griddisks that belong to the Diskgroup reco:

[celladmin@cell1 ~]$ dcli -g cells.txt -r "reco" "cellcli -e list griddisk"
reco: ['cell1', 'cell2']
cell1: data_CD_disk01_cell1      active
cell1: data_CD_disk02_cell1      active
cell1: data_CD_disk03_cell1      active
cell1: data_CD_disk04_cell1      active
cell1: data_CD_disk05_cell1      active
cell1: data_CD_disk06_cell1      active
cell1: data_CD_disk07_cell1      active
cell1: data_CD_disk08_cell1      active
cell1: data_CD_disk09_cell1      active
cell1: data_CD_disk10_cell1      active
cell1: data_CD_disk11_cell1      active
cell1: data_CD_disk12_cell1      active
cell2: data_CD_disk01_cell2      active
cell2: data_CD_disk02_cell2      active
cell2: data_CD_disk03_cell2      active
cell2: data_CD_disk04_cell2      active
cell2: data_CD_disk05_cell2      active
cell2: data_CD_disk06_cell2      active
cell2: data_CD_disk07_cell2      active
cell2: data_CD_disk08_cell2      active
cell2: data_CD_disk09_cell2      active
cell2: data_CD_disk10_cell2      active
cell2: data_CD_disk11_cell2      active
cell2: data_CD_disk12_cell2      active

The --vmstat switch lets us run vmstat on all the Cells with one command:

[celladmin@cell1 ~]$ dcli -g cells.txt --vmstat="-a 3 2"
 procs -----------memory---------- ---swap-- -----io----  --system-- -----cpu------
04:11:30: r  b   swpd   free  inact active   si   so    bi    bo    in    cs us sy id wa st
 cell1: 0  0      0 430620 326880 698244    0    0    73    50  1062   811  1 14 83  1  0
 cell2: 0  0      0 437336 326988 691956    0    0    65    48  1056   680  1 14 84  1  0
 Minimum: 0  0      0 430620 326880 691956    0    0    65    48  1056   680  1 14 83  1  0
 Maximum: 0  0      0 437336 326988 698244    0    0    73    50  1062   811  1 14 84  1  0
 Average: 0  0      0 433978 326934 695100    0    0    69    49  1059   745  1 14 83  1  0
 procs -----------memory---------- ---swap-- -----io----  --system-- -----cpu------
04:11:34: r  b   swpd   free  inact active   si   so    bi    bo    in    cs us sy id wa st
 cell1: 0  0      0 430620 326900 698252    0    0     0    19  1028 10452  0 16 83  0  0
 cell2: 0  0      0 437336 327008 691972    0    0     0    19  1027 11320  0  7 93  0  0
 Minimum: 0  0      0 430620 326900 691972    0    0     0    19  1027 10452  0  7 83  0  0
 Maximum: 0  0      0 437336 327008 698252    0    0     0    19  1028 11320  0 16 93  0  0
 Average: 0  0      0 433978 326954 695112    0    0     0    19  1027 10886  0 11 88  0  0

Conclusion: With dcli, we have a powerful utility to run commands on multiple Cells without much effort.

