
The Five "Ps" of On-Prem Costs When Considering a Move to Cloud

When justifying a move to the cloud, one of the more challenging areas is quantifying on-premises costs. These “5 Ps” of quantifying on-premises costs will help.

October 19, 2021  5 min read
On-Prem Costs to Consider When Moving to the Cloud
Moving enterprise systems to the cloud can be a daunting challenge for companies of any size: choosing a provider, optimizing the architecture, ensuring business continuity, and building the financial justification all play major roles in the decision, along with many other considerations. One of the more challenging areas is quantifying on-premises costs, especially when trying to justify a move to the cloud. Tangible costs are easily recognized in the financials, while intangible costs are more elusive (and sometimes emotional). Intangible costs often manifest as lost opportunity, or as activities that slow or stifle new business growth. If you’re trying to understand the full cost of running the software and hardware that drive innovation and transformation within your company, here are the “5 Ps” of quantifying on-premises costs you should consider.
 
 
The 5 Ps of Tangible Costs:
 
  1. Power – Data centers need electricity for servers, cooling, and lights. Reliable data centers also run backup generators to ensure system availability during power outages. And depending on a company’s business continuity strategy, additional sites for disaster recovery and high availability may be needed. All of them need power.
  2. Place – Organizations must buy or lease raised-floor or rack space for every environment: development, test, QA, production, and their ever-growing analytic discovery environments. Again, DR and high-availability sites must also be considered, including fire detection and suppression capabilities.
  3. People – IT resources are needed to monitor and maintain existing hardware, network, software, and security infrastructure. During the annual planning cycle, IT must also plan and provision new hardware, software, and facility growth. To manage these complex environments effectively, these highly skilled resources require continuous education and training to keep up with the latest technology advancements and industry best practices.
  4. Payment – On-premises purchasing typically follows a CapEx model, requiring payment up front. In contrast, the OpEx model offers pay-as-you-go options to match different consumption appetites, from all-you-can-eat to pay-by-the-drink. In some cases, on-premises resources can be contracted as OpEx (i.e., rental models). Regardless, these resources must still be overprovisioned for future growth, and all the other tangible and intangible costs still apply.
  5. Performance – Organizations must consider daily, monthly, and seasonal processing and plan for their peak needs. Overprovisioning is necessary to cover current demand, organic growth, and new development processing, and it is reflected in hardware, software, and network purchases. Even facilities are overprovisioned to allow for future growth. Unfortunately, when the economy changes or business priorities shift unexpectedly, you can’t give that capacity back. Some vendors offer elastic capabilities on-prem, a version of “only pay for what you use.” This can help address payment and performance costs, but the bottom line is that it still requires an overprovisioned system and ultimately has limited scalability. (A rough illustration of how these tangible costs add up follows this list.)
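
To make the tangible side more concrete, here is a minimal back-of-the-envelope sketch in Python. Every figure in it is a hypothetical assumption (electricity rate, rack lease, headcount, purchase price, refresh cycle, and utilization), not a benchmark; the point is only to show how the five Ps roll up into an annual figure and how much of that spend sits in overprovisioned, idle capacity.

```python
# Illustrative annual on-premises cost model across the five Ps.
# All numbers are placeholder assumptions -- replace them with figures from
# your own facilities bills, HR data, and procurement records.

# Power: servers, cooling, and lights (assumed 120 kW average draw at $0.12/kWh)
power_cost = 120 * 24 * 365 * 0.12

# Place: leased rack space across a primary and a DR site
place_cost = 30 * 2 * 18_000          # 30 racks x 2 sites x assumed $18k/rack/year

# People: fully loaded cost of the admins, DBAs, and network/security staff
people_cost = 6 * 150_000             # assumed 6 FTEs at $150k each

# Payment: hardware/software CapEx amortized over an assumed 5-year refresh cycle
payment_cost = 3_000_000 / 5          # assumed $3M purchase, straight-line

# Performance: capacity is bought for peak plus growth, not for average use
average_utilization = 0.35            # assumed average utilization of the platform

annual_cost = power_cost + place_cost + people_cost + payment_cost
idle_capacity_cost = annual_cost * (1 - average_utilization)

print(f"Annual tangible on-prem cost: ${annual_cost:,.0f}")
print(f"Spend tied up in idle, overprovisioned capacity: ${idle_capacity_cost:,.0f}")
```

With these made-up inputs, roughly two thirds of the annual spend pays for capacity that sits idle outside of peak periods, which is exactly the overprovisioning penalty described under Performance.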
 
The 5 Ps of Intangible Costs:
 
  1. Provisioning Timeline – Traditional on-premises acquisition of new technology typically involves an RFI/RFP process to mitigate the risk of buying before trying. New or expanded infrastructure must then be shipped, installed, set up, and tested, a process typically measured in months. It can be delayed further by annual CapEx planning cycles and the potential need to shuffle space within the data center facility.
  2. Polyglot Flexibility – With the explosion of access methods (BI tools, IDEs, notebooks, dashboards), analytic methods (ML, DL), and coding methods (languages and libraries) needed for data exploration, and their potential need for operationalization, it is becoming harder to plan an organization's technology needs. Data scientists need the ability to quickly provision the tools, languages, and technologies that best fit the problems they are trying to solve. As part of the discovery process, they take a fail-fast approach, quickly testing components of a hypothesis for viability. Hypotheses are redirected, or abandoned entirely, if they prove unviable. Other discovery efforts conclude as one-and-done: the results are valuable, but there is no need to run the analytics on a regular basis. In either case, the tools, languages, and technologies can be quickly decommissioned. When valuable, repeatable analytics are discovered, those tools, languages, and technologies are operationalized to become part of the production analytic ecosystem. This fast-paced paradigm of data exploration is unsustainable in an on-premises environment, whereas in a modern cloud architecture, quickly provisioning or decommissioning assets is expected and welcome.
  3. Portability – Country and regional regulations increasingly require data to be housed within specific geographical boundaries. Establishing local on-premises environments, and the skill sets to manage them, can be both challenging and cost prohibitive. Organizations moving to modern cloud architectures can address location-specific regulatory requirements more quickly, without establishing and managing new data centers. The same applies where better local performance is needed. Modern cloud architecture offers global services availability via the dominant cloud service providers (CSPs); however, no single CSP has complete global reach. Organizations should be diligent in selecting analytical applications and platforms that avoid CSP lock-in, investing in connected multi-cloud data platforms that can span CSPs and enable true global availability, solution agility, and flexibility of choice.
  4. Platform Cascading – To extend asset lifespans, new production hardware often displaces older hardware, which is cascaded down to Dev/Test/QA environments while the oldest gear is taken out of rotation. It is a good practice, but it requires additional time and resources to execute. In modern cloud architectures, these environments can be expanded or upgraded independently, so cascading assets is no longer required.
  5. Proliferation of Data via Replication – On-premises environments are geared toward the performance of production systems, ensuring SLAs are met and business continuity plans are enforced. Non-production environments (e.g., Dev, Test, QA) and data exploration are generally limited by data availability: users have limited or no direct access to production data because of production performance considerations, which often results in sporadic, infrequent replication of partial data to these non-production environments. In a modern cloud architecture, these environments can have direct access to full production data thanks to data sharing and auto-scaling capabilities, greatly reducing the need to replicate full or partial data sets.
 
The movement to utilize cloud services continues to grow at a rapid pace. Gartner predicts public cloud services will increase by 23.1% in 2021, reaching $332.3 billion, with $196.4 billion of that going to infrastructure services. There is a lot of value, and hype, in moving to the cloud.
 
Evaluating the cost benefit of moving to the cloud is challenging. There are tangible costs that are easily quantifiable through financials; the intangible costs are much harder to quantify. However, the intangible costs undeniably have a huge impact, directly and indirectly, on the business's speed to new insight and time to value, as well as on IT maintainability and solution agility.

About Brian McGregor

Brian is a Senior Strategic Account Executive at Teradata, with over 20 years' experience in the Retail, Hospitality, Travel and Transportation industries. He specializes in enterprise analytics and customer experience management.

About Dwayne Johnson

Dwayne Johnson is a principal ecosystem architect at Teradata, with over 20 years' experience in designing and implementing enterprise architecture for large analytic ecosystems. He has worked with many Fortune 500 companies in the management of data architecture, master data, metadata, data quality, security and privacy, and data integration. He takes a pragmatic, business-led and architecture-driven approach to solving the business needs of an organization.
