Smaller or bigger?


A few minutes back, I downloaded the WordPress Android app to try it out on my mobile. One thing we must understand when reviewing Android applications is that they are very useful yet tiny, and they do almost the same job as their big brothers. The disk and memory footprint of these programs is a hundredth or less of their counterparts on PCs and notebooks. And they are not slow either (although even the oldest computing devices were fast compared to the computing speed of most human beings, so I still do not understand what "too slow" means, as long as it is not games)…

Really eager to see these small ones conquering PCs and laptops…


Some astonished reading…

Although I thought I would write only about my experiences working with SAP, Oracle etc., I recently saw this eye-catching article and the related debates. It is definitely worth reading…

Sayonara Sony: How Industrial, MBA-Style Leadership Killed a Once Great Company

The number of call-outs in the comments actually pokes a thought: how much does MBA thinking really differ from Communism?

What do you think?

This is really interesting. Lots of clues on the subject, helping with conflict management around hosted application services…
I, and readers like me, would really appreciate it if you could show the script by which device names are replaced by the partition/logical volume names (for a more generic understanding).
Apart from this, here is another puzzle…
Until recently I was pondering what seemed a simple question, but it turns out the answer is pretty complex. The question goes something like this:
An application administrator, before implementing a system, goes to the storage administrator (let us assume it is a really big storage array) to get a chunk of a few TB (space/LUNs/devices) allocated for his application to run on. Later, the application administrator begins to doubt that the chunk allocated to him satisfies the IOPS hunger of the application. So he goes back to the storage administrator and asks: "Tell me the maximum IOPS possible on the storage chunk you allocated to me." The storage administrator remains speechless. How can he calculate the maximum IOPS possible for a storage chunk that is spread over multiple disk drives, some used partially and some fully?

Can you throw some light on it?
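One way to at least start answering the puzzle is a back-of-envelope estimate: if you know which physical drives back the chunk and what share of each drive it occupies, you can sum up a theoretical ceiling. The per-drive IOPS figures below are illustrative assumptions, and the model deliberately ignores array cache, RAID write penalties and workload mix, so it is an upper bound, not a promise:

```python
# Rough upper-bound estimate of the IOPS a storage chunk can sustain.
# Assumption: we know the drive types backing the chunk and the fraction
# of each drive dedicated to it; a shared drive only gives the chunk a
# proportional share of its IOPS in the worst (fully busy) case.

# Typical small-random-I/O IOPS per drive type (illustrative figures).
DRIVE_IOPS = {"15k_fc": 180, "10k_sas": 140, "7k_sata": 80}

def max_chunk_iops(slices):
    """slices: list of (drive_type, fraction_of_drive_used_by_chunk)."""
    return sum(DRIVE_IOPS[dtype] * frac for dtype, frac in slices)

# Example: chunk striped over 4 full 15k drives plus 2 half-shared SATA drives.
print(max_chunk_iops([("15k_fc", 1.0)] * 4 + [("7k_sata", 0.5)] * 2))  # 800.0
```

Even this crude sum shows why the storage administrator hesitates: the answer depends on how much of each spindle the chunk actually owns.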


Dirty Cache

Does this story sound familiar?

The end users of a database application start complaining about poor system response and long-running batch jobs. The DBA team starts investigating the problem. The DBAs look at their database tools, such as Enterprise Manager, Automatic Workload Repository (AWR) reports, etc. They find that storage I/O response times are too high (such as an average of 50 milliseconds or more) and involve the storage team to resolve it.

The storage guys, in turn, look at their tooling – in the case of EMC this could be Navisphere Analyzer, Symmetrix Performance Analyzer (SPA) or similar tools. They find completely normal response times – less than 10 milliseconds on average.

The users still complain, but the storage and database administrators point at each other, and there is no real progress in solving the problem.
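Part of why both teams can be "right" is that they measure different spans: the database sees the full round trip (host queue, HBA, fabric, array), while the array sees only its own service time. A deliberately crude sketch, assuming I/Os simply serialize in a host-side queue, shows how the two numbers can diverge:

```python
# Crude serial-queue model of the DBA-vs-storage latency gap.
# Assumption: each I/O waits behind the ones queued ahead of it on the
# host, so host-observed latency ~= array service time x queue depth.
# Real systems overlap I/Os, so this only illustrates the gap, not sizes it.

def host_latency_ms(array_service_ms, avg_host_queue_depth):
    """Latency as seen by the database, given the array's own service time."""
    return array_service_ms * avg_host_queue_depth

# Array honestly reports 10 ms, but 5 I/Os sit in the host queue on average:
print(host_latency_ms(10, 5))  # 50 ms -- roughly what AWR would show
```

So an AWR report showing 50 ms and an array report showing 10 ms need not contradict each other; the missing 40 ms can live in the layers between them.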

Two Way Communications


Surprise learning – the Oracle SCN number…



Just a few minutes back, this link was shared with me by one of my most admired seniors… Yes, this is a problem. But will this start the beating of drums (as we Indians do during the immersion of idols of gods and goddesses)…
And force the use of RMAN or equivalent tools?

Read this… it goes like below:

A design decision made by Oracle architects long ago may have painted some of Oracle’s largest customers into a corner. Patches have arrived, but how much will they correct?
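The corner in question is the SCN "soft limit": Oracle's maximum reasonable SCN grows at 16,384 SCNs per second, counted from 1 January 1988. A quick headroom check can be sketched as below; the `current_scn` input is something you would fetch yourself (e.g. from `V$DATABASE`), and the example value is made up for illustration:

```python
# Headroom check against Oracle's "reasonable SCN limit", which grows at
# 16,384 SCNs per second from the fixed epoch of 1 January 1988.
from datetime import datetime, timezone

SCN_RATE = 16 * 1024                                  # SCNs per second
EPOCH = datetime(1988, 1, 1, tzinfo=timezone.utc)     # Oracle's SCN epoch

def scn_headroom(current_scn, now=None):
    """Return (soft_limit, remaining_scns) at time `now`."""
    now = now or datetime.now(timezone.utc)
    limit = int((now - EPOCH).total_seconds()) * SCN_RATE
    return limit, limit - current_scn

# Illustrative: a (hypothetical) database at SCN 5 trillion on 1 Jan 2012.
limit, left = scn_headroom(5_000_000_000_000,
                           datetime(2012, 1, 1, tzinfo=timezone.utc))
print(limit, left)
```

A database whose SCN is pushed close to that moving ceiling (for example by the database-link bug the article describes) starts refusing work, which is exactly why the patches matter.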


Small is Beautiful ???



This is my favourite line whenever the performance of a business application is in question, whether it is SAP, some other ERP, or a tailor-made one. Every business application is backed by a database storing business transactions as records, and a management system/engine to run (retrieve, add, modify or delete information) and manage that database, whether it is relational or of some other type.

I still remember those days when we used to challenge each other to write smaller, better executables that produced fewer memory dumps. In those days, the 4 MB of RAM in one of my friends' computers was something we all envied. We used 360 KB floppies to copy programs and games from one machine to another to try them out. Gone are those days; everything is bigger now.

But the bigger the head, the more intense the headache: it needs bigger doses to come round, and it carries the risk of more side effects. Big data is something like an overloaded truck on the road: unsafe for people (the driver as well as others travelling on the road) and harmful to the road as well as to the truck itself.

I recently came across a situation where a server running an SAP BW system was temporarily facing a storage crunch, and a ping-pong of communication was happening between the storage admins, DBAs, Basis and BW consultants… This happens, however strong the planning.

Now, any application has broadly three parts: the binaries, the data container and the interface engine. The application database is the part that always grows. As SAP and Oracle obviously come into the picture in most of my discussions, I will be referring to them here as well.

So what causes an SAP OLTP system to grow bigger? And what are the different approaches to gaining control over it?

Typically, the following things make an SAP database grow bigger:

  • Logs
    • System Logs
    • Audit trails and logs
  • Business process configuration
    • Data accumulated but never used.
    • Too much detailed information being collected.
  • Trying to provide intelligence within the scope of transactional reporting (this is nothing but fooling your customer as well as yourself), resulting in:
    • Rampant creation of indexes to reduce the execution time of the so-called intelligence reports.
    • Creation of duplicate record containers.
  • Storage of duplicate data in the database because of some customized need.
  • The database growing older and becoming porous (fragmented).
  • No policy on the information life cycle.
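The last point, having no information life cycle policy, is often the easiest to fix on paper. A minimal sketch of what such a policy looks like as data: for each document type, an age after which a record moves off the online database, and an age after which it can go entirely. The document types and periods below are purely illustrative assumptions, not SAP defaults:

```python
# Minimal sketch of an information life cycle (retention) policy.
# Document types and retention periods are illustrative only.
RETENTION = {                    # (archive_after_days, delete_after_days)
    "billing_doc": (365, 365 * 10),
    "idoc":        (30,  180),
    "spool_log":   (7,   30),
}

def disposition(doc_type, age_days):
    """Decide what to do with a record of a given type and age."""
    archive_after, delete_after = RETENTION[doc_type]
    if age_days >= delete_after:
        return "delete"
    if age_days >= archive_after:
        return "archive"
    return "keep online"

print(disposition("idoc", 90))       # "archive"
print(disposition("spool_log", 40))  # "delete"
```

Once a table like this exists and is agreed with the business, the archiving jobs have something objective to execute against instead of ad-hoc decisions.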


And the ideal methodology should be a combination of the following:


  • Review the configuration, thus helping data avoidance.
  • Define data life: create a policy for data archival and deletion, including the frequencies.
  • Database de-fragmentation.
  • Stop building business intelligence in an OLTP system, especially when the more open type of parameter screen is used (e.g. ABAP reports). This gradually forces you to create more and more indexes on the actual OLTP tables to keep up the execution speed, which is suicidal; remember, the more indexes you create, the more you increase the transaction time.
  • Plan for separate hardware, segregate current data from historical data, and build up the intelligence there.
  • Don't forget to clean up the mess you made (created by the approach of building business intelligence in OLTP). My experience is that people forget this, but it is one of the most important steps, and that is why I kept it as a separate bullet point.
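The warning above about indexes slowing transactions can be felt with a toy model: treat each secondary index as one more sorted structure that every INSERT must also maintain, and count the writes. This is a deliberate simplification (real B-tree maintenance is more complex), but the linear growth in per-row work is the real effect:

```python
# Toy model: every extra index is one more structure each INSERT maintains.
# We count write operations per batch of rows; real databases pay this as
# extra logical I/O and logging on every transaction.
import bisect
import random

def insert_rows(n_rows, n_indexes):
    table = []
    indexes = [[] for _ in range(n_indexes)]
    writes = 0
    for _ in range(n_rows):
        key = random.random()
        table.append(key)            # the table itself: one write
        writes += 1
        for ix in indexes:           # every index must be updated too
            bisect.insort(ix, key)   # keep the "index" sorted, like a B-tree
            writes += 1
    return writes

# Work per row grows linearly with the index count:
print(insert_rows(1000, 0), insert_rows(1000, 5))  # 1000 vs 6000 writes
```

Five "intelligence" indexes make every insert six times the write work, which is exactly the transaction-time penalty the bullet point warns about.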

Remember: don't let your application logic merely satisfy the business logic; make it satisfy it better and properly. Otherwise it actually hampers the business by increasing operation time, thereby reducing organisation-wide operational efficiency.

What I have in mind is to create a small document on this topic in an SAP-and-Oracle scenario, which can serve as a ready reckoner, at least to start the analysis…