How not to do Cisco, VMware and NetApp monitoring
Under the Imagine Virtually Anything alliance, Cisco, NetApp and VMware have teamed up to deliver a shared virtualized server/network/storage infrastructure that can securely host multiple “tenants.” This seems like A Good Thing, for all the usual reasons virtualization is good (lower costs, improved energy/space efficiency, faster deployments, etc.). Yet they seem to have forgotten that once your infrastructure is virtualized, problems become harder to diagnose, can have a broader impact, and are generally more complex. That means monitoring that can span the whole datacenter (PDUs, storage, network, SAN, virtualization platform, OS, applications) becomes that much more critical if you want any chance of preventing issues, or of resolving them with a decent time-to-resolution.
That’s not to say the alliance forgot monitoring – if you read the “Enhanced Secure Multi-Tenancy Design Guide” from the web site, there’s plenty of mention of monitoring. Cisco has monitoring. VMware has monitoring. NetApp has monitoring. As a matter of fact, the Design Guide recommends no fewer than 14 disparate monitoring systems (Cisco Fabric Manager, NetApp Operations Manager, VMware vShield Manager, etc.) to provide visibility into the infrastructure. And that’s without a monitoring system that can monitor the applications – the databases, web servers, application servers and so on – that are the whole point of the virtual infrastructure anyway.
I can think of few things more likely to decrease availability than having to diagnose an issue by correlating data across 14 or more separate monitoring systems. For one thing, no single person is even going to know how to use all of them.