Too much memory consumed in SGA [message #266351] Mon, 10 September 2007 11:08
rrodriguez56
Messages: 2
Registered: September 2007
Junior Member
Hello. I have an Oracle 10g RAC database with 2 nodes running on IBM AIX 5L (P570s) with the following configuration:
1. (4) 2-way 1.65 GHz CPUs (RISC)
2. 8 GB RAM
3. IBM AIX 5L 64-Bit

The SGA on each node is configured at 2 GB, since we plan to run another clustered database on these machines and the memory allocated to the SGA should not exceed 50% of server memory.
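
For reference, a quick way to confirm the SGA settings on both nodes from a single session (a sketch; the gv$ views are cluster-wide, and the values are just what I expect to see):

-- Sketch: confirm SGA sizing across both RAC instances
SELECT inst_id, name, ROUND(TO_NUMBER(value)/1024/1024) AS mb
  FROM gv$parameter
 WHERE name IN ('sga_max_size', 'sga_target');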

I have been running benchmark tests against my database using Quest Benchmark Factory, initially against each node separately. I ran tests with its standard mix of TPC-C transactions within a 5 GB database. Each node yielded more or less the same results in terms of transactions per second (TPS):
1. User Load = 400 --> 20.78 TPS
2. User Load = 600 --> 31.24 TPS
3. User Load = 800 --> 0.96 TPS

The poor results with a user load of 800 occurred because the SGA maxed out its available memory and the I/O rate shot up as paging space was used to compensate (a sketch of a check for this follows the list below). I was a bit surprised by the 800-user results, since Dell, even with the RAC overhead, found its sweet spot with Benchmark Factory at a user load of 500 on a PowerEdge server with much less power:
1. Two Intel® Xeon® processors at 3.8 GHz (CISC)
2. 4GB RAM - 1.5 GB allocated to SGA
3. Red Hat Linux AS 4 QU1 32-bit
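
A sketch of the kind of check that shows the SGA pressure mentioned above (standard v$ views; the free-memory figures will vary from run to run):

-- Sketch: free memory remaining in each SGA pool, per instance
SELECT inst_id, pool, ROUND(bytes/1024/1024) AS free_mb
  FROM gv$sgastat
 WHERE name = 'free memory';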

More details about the benchmark tests performed by Dell on its PowerEdge Servers can be seen in the following article:
http://www.quest.com/Quest_Site_Assets/News/Dell-Quest.pdf

We use ASM just as Dell did, and although our SAN is from IBM, I think its configuration and performance are equivalent to those of the Dell/EMC SAN storage solution Dell used in its benchmark tests.

I think I can get much more out of each node, but then again I might be wrong. Any advice on what can be done to improve performance is greatly appreciated.
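
One way to dig further would be to bracket each run with AWR snapshots and compare the intervals afterwards. A sketch (10g; note that AWR requires the Diagnostics Pack license):

-- Sketch: take a snapshot before and after each Benchmark Factory run
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;
-- ... run the load ...
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;
-- then generate the report for that interval:
@?/rdbms/admin/awrrpt.sql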
Re: Too much memory consumed in SGA [message #266845 is a reply to message #266351] Tue, 11 September 2007 14:56
smartin
Messages: 1803
Registered: March 2005
Location: Jacksonville, Florida
Senior Member
Not that I can really help you, because I haven't worked with RAC for real yet, and have never touched AIX.

But are you sure it is solely an SGA thing? What about the PGA? What is your PGA_AGGREGATE_TARGET (assuming the automatic workarea policy)? With as much RAM as you have, I'd think it should be quite high. Or is the dependency on the PGA that much lower in RAC?
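
Something like this should show the target against actual usage (a sketch from memory; these v$pgastat rows are reported in bytes):

-- Sketch: PGA target vs. what has actually been allocated
SELECT name, ROUND(value/1024/1024) AS mb
  FROM v$pgastat
 WHERE name IN ('aggregate PGA target parameter',
                'total PGA allocated',
                'maximum PGA allocated');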

And I thought there was a 1.8 GB SGA_MAX_SIZE limit, but that may have been an older Oracle version than 10g, or a 32-bit platform.

Also, why did your TPS actually go UP, from roughly 20 to 30, when you increased the number of users from 400 to 600?

And keep in mind Dell likely tuned the heck out of their particular Red Hat OS version for the test, and probably built a custom kernel (or whatever you call it when you set OS-specific build flags like semaphores and recompile).
Re: Too much memory consumed in SGA [message #267142 is a reply to message #266845] Wed, 12 September 2007 10:19
rrodriguez56
Messages: 2
Registered: September 2007
Junior Member
Hello, and thank you for responding. Based on my research, RAC overhead consumes about 15% of resources in an Oracle 10g clustered database. The dependency on the PGA is no less in RAC than in a single-instance Oracle 10g database.
PGA allocated: 251 MB on Node 1 and 214 MB on Node 2
PGA_AGGREGATE_TARGET = 604 MB on each node

There was no abnormal PGA consumption during the benchmark tests on either node.
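
A query along these lines would also confirm that no sorts or hash joins spilled to disk during the runs (the onepass and multipass counters should stay near zero):

-- Sketch: work areas that completed in memory vs. spilled to disk
SELECT name, value
  FROM v$sysstat
 WHERE name LIKE 'workarea executions%';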

The SGA_MAX_SIZE limit is well above 2 GB for Oracle 10g running on 64-bit AIX.

TPS averaged 20.78 when the battery of TPC-C transactions was executed in Benchmark Factory with a user load of 400, and went up to an average of 31.24 TPS with a user load of 600.