
IT’s Next Headache: Dealing with Software Sprawl


Written by: Lisa Hammond, Centrix Software

IT is a very cyclical business. Just as tomorrow’s next big thing becomes yesterday’s technology, so the problems that IT encounters continue to have a familiar feel to them. How we deal with the recurring IT management and deployment issues is important. Instead of continuing to follow conventional wisdom around areas such as application delivery and desktop management, we have to break the cycle and rethink our approach for the future of IT.
 
A classic case of old technology meeting a new problem is virtualization: taking an idea that has its roots in the mainframe and applying it to the problem of x86-based servers, which had over the years mushroomed to be a huge part of IT investment and planning. Instead of running lots of individual machines that were idle for a large amount of the time, virtualization allowed more workloads to be run on a smaller number of physical boxes. This not only reduced spending on hardware, but also brought other benefits such as better levels of availability for applications and easier disaster recovery.
 
However, virtualization has its own problems: because it makes deploying applications easier and faster, the number of virtual machines within a company’s infrastructure can grow extremely quickly. This can lead to the problem of software sprawl, where an organization has too many application instances deployed.
 
Sprawl is a particular problem within virtualization deployments, as IT often fails to factor it into the forecasting and costing of projects. Because virtualization makes provisioning easy, allotted resources can be consumed far more quickly than anticipated, skewing any return on investment calculations: the ROI becomes negligible, or the company ends up spending more in order to achieve the estimated benefits.
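As an illustration of how this skew works, the sketch below uses entirely hypothetical figures (VM counts, per-VM costs and projected benefits are invented for the example) to show how faster-than-planned VM growth erodes a project's ROI:

```python
# Hypothetical sketch: how faster-than-planned VM growth erodes a
# virtualization project's ROI. All figures are illustrative.

def roi(benefit, cost):
    """Return on investment expressed as a fraction of cost."""
    return (benefit - cost) / cost

planned_vms = 200
cost_per_vm = 500          # annual licensing, management and patching per VM
annual_benefit = 250_000   # hardware and availability savings the plan assumed

planned_cost = planned_vms * cost_per_vm
print(f"Planned ROI: {roi(annual_benefit, planned_cost):.0%}")  # 150%

# Easy provisioning lets the estate grow to 400 VMs mid-project,
# while the benefit stays roughly flat.
actual_cost = 400 * cost_per_vm
print(f"Actual ROI:  {roi(annual_benefit, actual_cost):.0%}")   # 25%
```

Doubling the VM count against a flat benefit takes the return from 150 per cent down to 25 per cent, which is the kind of skew a static, estimate-based plan never surfaces.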
 
Nor is sprawl confined to virtualized resources; it can cover the whole of an organization's IT. Too often, applications are bought and paid for but never used. The sheer volume of software that sits idle on the desktop can be shocking, with anything up to 40 per cent of applications not being used at all. This is a massive headache for IT, as these resources demand management and patching but deliver nothing in return.
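Measuring that idle share is straightforward once usage is metered. A minimal sketch, assuming a hypothetical inventory of installed applications and days-since-last-launch records (the application names, figures and 90-day staleness threshold are all invented for illustration):

```python
# Hypothetical sketch: measuring "shelfware" from usage-metering data.
# The installed set and last-use records are illustrative.

installed = {"Office", "CAD-Pro", "LegacyERP", "PhotoSuite", "DBAdmin"}

# Application -> days since last recorded launch (None = never launched).
last_used_days = {"Office": 1, "CAD-Pro": 12, "LegacyERP": None,
                  "PhotoSuite": None, "DBAdmin": 200}

STALE_AFTER_DAYS = 90
unused = {app for app in installed
          if last_used_days.get(app) is None
          or last_used_days[app] > STALE_AFTER_DAYS}

print(f"{len(unused) / len(installed):.0%} of installed apps unused")  # 60%
```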
 
In this environment, software sprawl and virtualization represent a good example of a useful technology being subverted by old IT thinking. Much of this is due to the management model employed, based on initial estimations and capital outlay rather than real-world data from the organization. This is a very static approach that does not reflect the variable nature of most businesses, as users' requirements change or different trends develop mid-way through a project.
 
While most large organizations may have asset management or compliance solutions in place, these also suffer from thinking too statically about the resources that they cover. The sheer variety of ways that an application can get to a user, from locally installed applications on the desktop through to centrally hosted client-server apps, web-based services and virtualized applications, also makes this approach to planning more difficult.
 
What is required instead is the ability to model how the company really works with its IT assets, from the end-user population to the level of business operations and into the IT infrastructure itself. This approach gives a better overview of how assets are actually being used, rather than how IT thinks they will be taken up.
 
Taken further, this rolling approach to metering and monitoring what IT resources are used can show up trends in user behavior that can lead to cost savings: if an application is being used by fewer members of staff than was originally planned, then the support and maintenance costs can be reduced or a different license method can be brought in. This can free up budget, and make existing IT resources work much harder over time.
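A right-sizing pass of this kind can be sketched simply. In the hypothetical example below, applications whose metered active-user count leaves more than 10 per cent of licensed seats unused are flagged for reclamation; the application names, seat counts and per-seat costs are all invented for illustration:

```python
# Hypothetical sketch: flagging over-licensed applications from metered
# active-user counts so surplus seats (and their support costs) can be
# reclaimed. All names and figures are illustrative.

licensed = {"CAD-Pro": 100, "DBAdmin": 50, "Office": 500}      # seats bought
active_users = {"CAD-Pro": 35, "DBAdmin": 48, "Office": 490}   # metered usage
seat_cost = {"CAD-Pro": 1_200, "DBAdmin": 400, "Office": 120}  # annual, per seat

for app, seats in licensed.items():
    surplus = seats - active_users.get(app, 0)
    if surplus / seats > 0.10:  # flag anything with >10% of seats idle
        saving = surplus * seat_cost[app]
        print(f"{app}: reclaim {surplus} seats, save {saving} per year")
```

Run against this data, only the heavily under-used application is flagged, while the near-fully-used ones are left alone; the same rolling check, repeated over time, is what turns metering data into the trend-spotting the article describes.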
 
By looking at user behavior and how they are currently working, IT can decide where resources can be best delivered from and what platforms should be in place to support the organization going forward – rather than sticking to the same approach that the IT market has developed in the past. This model, based on IT intelligence and monitoring real world usage, can deliver much better results to the business over the longer term. With the advent of virtualization and cloud, IT can deliver greater flexibility and provide a way of avoiding the mistakes of the past.
 
Note: for information about the author's company, Centrix Software, please visit their website here: http://www.centrixsoftware.com/
 
  

 
