Open source, cloud native applications have driven the rise of powerful, mission-critical compute clusters over the past decade. The DevOps movement has accelerated the pace of service delivery and iteration, and the public cloud has made it easy for organizations of all sizes to take advantage of these benefits.
With just a few clicks, you can spin up the compute, storage and networking resources you need and deploy powerful applications such as Hadoop, Tensorflow or your own workloads right on top. And the proliferation of free, task-specific open source software packages has made it easy for IT organizations to grow effectively and take full advantage of these applications.
But public cloud isn’t the be-all and end-all of computing. The more services you use, the more complex your implementation becomes — and the more unpleasant billing surprises you see. Use higher-level services, and tech debt mounts as you get more bespoke and locked in. Hybrid cloud addresses some of these challenges but results in organizations managing two distinctly different environments in parallel, creating the risk of things falling through the cracks.
What if you could get all the benefits of open source, cloud native applications by running them in your own data center?
The elephant in the room is operations. Within the comfort of public clouds, operations is an afterthought. Outside of that environment, however, reality smacks you in the face.
You might think you can’t do it on premises. With cloudlike automation and orchestration, you can. And only Platina makes it happen.
The Complexity of Cloud
Many of today’s open source, cloud native applications assume a base level of infrastructure that is automated and orchestrated in a cloudlike manner. These applications require the provisioning of bare metal hardware, an operating system, a virtualization layer, a container orchestrator, a configuration management system, a secrets management platform, telemetry and observability systems and all of the related networking and storage.
For many organizations, this level of infrastructure is difficult to replicate in their own data centers because of the time, manual effort and expertise such massive projects require.
The public cloud is not magic.
At its core, it is still a collection of physical hardware and software that runs in a data center somewhere and needs to be racked, stacked, cabled, provisioned, managed and upgraded by humans. It needs a lifecycle management system of its own.
Many enterprise teams can successfully rack, stack, cable and provision systems; Day Zero and Day One operations are often the easiest milestones to achieve. Day Two — management and upgrades — is where cost and complexity enter, and they can outweigh all the benefits you’re trying to achieve. This leaves a huge gap between the private data center and the public cloud.
Even the major components of cloud native infrastructure, such as Kubernetes, start their lifecycle on a fully provisioned server with appropriate networking and storage systems. This adds complexity for whoever is implementing it — something you don’t have to think about in the cloud, because the cloud providers have done this work for you.
But what if you’re not running on a major public cloud, where the underlying infrastructure and network are already configured for you? Where do you turn?
Traditionally, domain-specific administrators would have handled server provisioning, network configuration and storage management, respectively. But today’s modern enterprises employ DevOps engineers and site reliability engineers (SREs) who perform higher-level tasks. This base-layer infrastructure work has been outsourced to the public cloud providers, whose core competencies are managing hardware, provisioning and cluster management. They do this better than anyone else because of their intense focus on running bare metal, virtual machines and containers.
Who is going to do that work for you on premises?
You could write a script today, but with infrastructure and application dependencies growing more complex by the day, who knows if it will work tomorrow? Cluster management frameworks aren’t something that DevOps and SRE teams have necessarily signed up for.
There are disparate tools that help with the basic provisioning of bare metal, such as installing operating systems and deploying basic applications, but few think holistically about the entire infrastructure lifecycle.
Platina Systems is here to fill the gap with full cloudlike orchestration and complete lifecycle management for the hardware and software infrastructure of modern hybrid clouds. When it comes to running cloud native, open source software on premises, Platina changes the answer from “you can try” to “you can.”
Platina Command Center (PCC) enables on-premises orchestration of computing infrastructure within a single private portal — using Infrastructure-as-Code principles to discover, provision, monitor and remediate the entire cluster in an easy-to-use, turnkey solution.
PCC integrates best practices to manage underlying hardware, network and storage configurations through code and metadata in a single framework. This eliminates the scribbled checklists of steps that are passed between subject matter expert teams and drag down developer productivity.
Further, PCC uses the leading open source infrastructure management technologies — including Ceph for storage, HAProxy for load balancing and Kubernetes for container orchestration — the same technologies that power most public cloud platforms. By following best practices to handle the installation, configuration and deployment of these common open source packages, PCC makes it quick and easy to use these tools in a predictable and repeatable manner, reducing time, complexity and expense.
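The key to deploying these packages predictably is ordering: storage and load balancing have to be in place before the container orchestrator that depends on them. A minimal sketch of that idea, using a dependency graph and a topological sort (the component names and dependencies here are illustrative, not Platina's actual deployment model):

```python
from graphlib import TopologicalSorter

# Hypothetical dependency graph: each component lists what must exist first.
# (Illustrative only -- not PCC's actual internal model.)
deps = {
    "os": [],
    "ceph": ["os"],                      # storage needs a provisioned OS
    "haproxy": ["os"],                   # load balancer needs a provisioned OS
    "kubernetes": ["ceph", "haproxy"],   # orchestrator needs storage and LB
}

# static_order() yields each component only after all of its prerequisites.
order = list(TopologicalSorter(deps).static_order())
print(order)  # "os" first, "kubernetes" last
```

An orchestrator built this way can also parallelize: Ceph and HAProxy have no dependency on each other, so they can roll out concurrently once the OS is in place.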
Because we make the infrastructure layer so simple, your remote hands staff in the data center (or even a managed services provider) can now do the physical racking and stacking, then hand off the rest of the work to PCC — work that a traditional systems or network administrator would have to do. And Platina provides the continuous, automated monitoring of infrastructure and services to keep everything in check.
With this approach, we’re helping you prevent infrastructure drift and bringing repeatability to infrastructure lifecycle management.
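Drift prevention boils down to continuously comparing the desired state recorded in code against the state a node actually reports, then remediating the difference. A toy sketch of that comparison (the field names are hypothetical, not PCC's data model):

```python
# Sketch of drift detection: diff a desired-state record against what a
# node actually reports. (Illustrative field names, not PCC's schema.)

def find_drift(desired: dict, actual: dict) -> dict:
    """Return the fields whose actual value no longer matches the desired one."""
    return {
        key: {"desired": desired[key], "actual": actual.get(key)}
        for key in desired
        if actual.get(key) != desired[key]
    }

desired = {"ntp_server": "10.0.0.5", "dns": "10.0.0.2", "os": "ubuntu-22.04"}
actual  = {"ntp_server": "10.0.0.5", "dns": "10.0.0.9", "os": "ubuntu-22.04"}

drift = find_drift(desired, actual)
print(drift)  # only the dns setting has drifted
```

In a real system the "actual" side comes from continuous telemetry, and anything the diff surfaces is either remediated automatically or flagged for review.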
The Old Way vs. the Platina Way
What does it take to bring cloud native infrastructure to your private data center? Let’s compare how to manually do it against how to accomplish it with Platina:
Discovery and Provisioning
Discovery and provisioning is typically a laborious process that involves spreadsheet manipulations, barcode guns and manual network configuration. You have to set up a DHCP server, obtain DHCP leases and find your hardware, then copy and paste MAC addresses, IP addresses and serial numbers into spreadsheets. Fight with TFTP servers and PXE boot to run burn-in images. And don’t forget to send someone to the rack with a crash cart to install an operating system on multiple servers, one at a time.
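The "copy MAC addresses into spreadsheets" step is exactly the kind of toil that gets half-automated with throwaway scripts. A sketch of one, scraping (IP, MAC) pairs out of an ISC dhcpd-style leases file (the lease data here is sample text):

```python
import re

# A fragment in the style of an ISC dhcpd.leases file (sample data).
leases_text = """
lease 192.168.1.41 {
  hardware ethernet 52:54:00:aa:bb:01;
}
lease 192.168.1.42 {
  hardware ethernet 52:54:00:aa:bb:02;
}
"""

# Pull out each (IP, MAC) pair -- the kind of data that otherwise ends up
# copied by hand into a spreadsheet.
pairs = re.findall(
    r"lease\s+(\S+)\s+\{.*?hardware ethernet\s+([0-9a-f:]+);",
    leases_text,
    re.DOTALL,
)
print(pairs)
```

Scripts like this work until the lease file format, the DHCP server or the network layout changes — which is why ad hoc scraping is a poor substitute for a discovery system that owns the whole lifecycle.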
Instead, Platina automates the discovery and provisioning of the servers, storage and networking hardware you need to run cloud native applications at scale, on premises. Use PCC to design your environment, create orders for system provisioning and execute tasks. PCC will search the network to find eligible hardware, collect system inventory and provision operating systems.
Configuration Management and Platform Deployment
It usually takes two or more senior software architects to define your configuration management platform, secrets management platform and core services platform. A third-party auditor has to review the system for compliance with role-based access control (RBAC) and other standards. Deployment requires hours of time spent writing and debugging infrastructure code that has been written across the industry numerous times before by others.
Instead, use Platina to automate the configuration management and critical platform software deployment. After an OS is deployed, PCC automates the rollout of critical system functionality, including network address assignments, recursive DNS settings, NTP assignments, RBAC, secrets and certificates. From there, PCC can deploy storage applications such as Ceph, network applications such as HAProxy and container management platforms such as Kubernetes in an orchestrated fashion.
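The pattern behind that automation is declarative: one spec per node drives every base-layer setting, and configuration files are rendered from it rather than edited by hand. A minimal sketch, assuming hypothetical field names (not PCC's actual schema):

```python
# A single declarative spec drives every base-layer setting for a node,
# instead of per-team runbooks. (Hypothetical fields, not PCC's schema.)

node_spec = {
    "address": "10.1.0.11/24",
    "dns_servers": ["10.1.0.2", "10.1.0.3"],
    "ntp_servers": ["10.1.0.4"],
    "roles": ["storage"],  # would drive RBAC assignments downstream
}

def render_resolv_conf(spec: dict) -> str:
    """Render a resolv.conf-style file from the spec, never edited by hand."""
    return "\n".join(f"nameserver {s}" for s in spec["dns_servers"])

print(render_resolv_conf(node_spec))
```

Because every file is a pure function of the spec, re-running the rollout is idempotent: applying the same spec twice produces the same node, which is what makes the process repeatable across hundreds of servers.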
Begin Running Software
When you are finally ready to deploy your applications, they have no native orchestration built in, so you must spend the time and effort to wrap them into the configuration, secrets and platform management systems you created in the prior step. Your architects will spend time learning the guts of a complex open source project so they can connect it into your homegrown orchestration system, potentially adding months to your software development timeline.
Instead, use Platina’s operations orchestration to describe and deploy your clusters and how they should run. Achieve Day Zero and Day One in minutes, not months.
Monitor and Remediate
Once everything is up and running, you need dedicated software architects to manage upstream patches and new releases and wire in new functionality to the orchestration system.
Instead, let Platina manage the complexity of your underlying environment. Complexity only increases throughout the modern infrastructure lifecycle, so we provide continuous, automated orchestration.
In turn, that frees up your developers, SREs and security teams to be more productive and perform more valuable work.
Why Private and Hybrid Cloud with Platina?
Platina helps you move key workloads from public to private or hybrid cloud, which has benefits across the organization, especially in these areas:
Data Gravity and Access
More than half of the world’s data isn’t stored in the public cloud — or can’t be. We make it possible to use today’s cloud native applications on premises, near your data and near your users, to unlock the value of all of your data. Petabyte-scale private cloud can’t be realized with hand-curated configurations; you need a framework to systematically deploy, monitor and remediate your cluster in a secure manner that is compliant with all applicable regulations.
Public cloud dependency has become a fact of life for many businesses, but is it actually saving you money? Few companies know for sure, because public cloud cost models are so complex that they are nearly impossible to project accurately.
For example, it may be free to move your data to the cloud and cheap to store it there, but if you want to actually use it outside of the public cloud, you’ll probably face significant egress charges. Policies can also drastically change costs unexpectedly because, well, you probably don’t know how the policy is tied to other moving parts. These are just a couple of the many factors that can make your costs unpredictable, and they can change your total cost of ownership for the worse.
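Even a rough back-of-the-envelope calculation shows how egress adds up. The rate below is a placeholder — real public cloud pricing varies by provider, region and tier:

```python
# Back-of-the-envelope egress math. The $0.09/GB rate is an assumed
# placeholder, not any specific provider's price.

def egress_cost(gb: float, price_per_gb: float = 0.09) -> float:
    """Cost of moving `gb` gigabytes out of the cloud at a flat rate."""
    return gb * price_per_gb

# Reading back 50 TB of stored data once a month:
monthly = egress_cost(50_000)
print(f"${monthly:,.2f} per month, ${monthly * 12:,.2f} per year")
```

At that assumed rate, a workload that regularly pulls tens of terabytes back out of the cloud pays a recurring toll that never appears in the "cheap storage" line item — the kind of surprise that only shows up on the bill.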
The same app, but with different parameters, can have wildly different costs.
A Better Way Forward
Private cloud doesn’t have to be a return to the old days.
With Platina doing the infrastructure heavy lifting, private cloud is the best way to take advantage of today’s open source technology stack. We’re helping the world’s largest enterprises reduce the time, expertise and expense needed to implement modern cloudlike infrastructure.
Are you ready to make the leap? Request a demo today.