In 2010, the global datasphere (the amount of data created and consumed in the world each year) was estimated at 2 zettabytes (ZB). IDC now projects it will grow from 45 ZB in 2019 to 175 ZB by 2025, and the majority of that data, 59%, is expected to be stored outside the public cloud.
Platina CEO Mark Yin and CTO Frank Yang have each been in the technology sector for more than 20 years. Together, they have seen firsthand the value of cloud computing, including reduced friction for developers, minimized toil for operations teams and improved visibility for security teams. They have also seen, however, a tremendous strain associated with data stuck on the sidelines: massive datasets that can’t, or won’t, ever be migrated to the cloud.
In 2015, Mark reached out to Frank to discuss what the next decade of computing infrastructure would look like. Mark had been working with original design manufacturers in the white box networking space and was watching closely what was happening in hyperscale-style infrastructure. Frank, who holds a doctorate in engineering from Stanford University, knew that to succeed at scale, developers would need to manage their infrastructure as code.
Together, Mark and Frank built out a vision to bring the benefits of the cloud to software developers, operations teams and security professionals — but to do so on-premises, to unlock the potential of those massive datasets sitting on the sidelines.
Hyperscale Benefits at Private Cloud Scale
There are a few things to know about traditional data center infrastructure. Enterprises typically manage their servers, networking and storage gear separately, often with a dedicated team for each area. Each team has its own design patterns, standards and procedures, and skills don't transfer easily between teams. Each vendor in each area has its own commands and configuration syntax to learn. All of this leads to complexity in managing configurations, security and operations: general toil and overhead just to keep infrastructure up and running.
In the cloud world, the ultimate secret of hyperscalers such as Amazon, Google, Microsoft and Facebook is that they treat everything they have as though it is a server. There’s no special configuration syntax for a network device or proprietary GUI for a storage system; everything is just a server.
The hyperscalers’ servers run Linux. Their network switches run Linux. Their storage systems run Linux. That harmonization of the environment means that every element of their infrastructure becomes structured and can be subsequently managed as code, hence Infrastructure as Code (IaC).
As consumers of the public cloud, we benefit tremendously from having infrastructure presented as structured data. It allows us to code against the infrastructure, introspect it, and discard it for recycling into new systems when we no longer need it.
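To make that concrete, here is a minimal sketch, assuming a toy inventory rather than any real cloud's API, of what "infrastructure as structured data" enables: code can query the inventory, find idle resources and mark them for recycling.

```python
# Toy example: infrastructure represented as structured data, so code can
# introspect it and recycle what is no longer needed. In a real cloud, the
# final step would be an API call rather than a field update.

resources = [
    {"id": "i-01", "type": "server", "state": "running", "idle_days": 2},
    {"id": "i-02", "type": "server", "state": "running", "idle_days": 45},
    {"id": "v-07", "type": "volume", "state": "detached", "idle_days": 90},
]

# Introspection is just a query over structured data.
stale = [r for r in resources if r["idle_days"] > 30]

for r in stale:
    print(f"recycling {r['type']} {r['id']} (idle for {r['idle_days']} days)")
    r["state"] = "recycled"
```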
But for data on the sidelines that can’t, or won’t, be migrated to the cloud, the benefits are lost! And oftentimes, this is some of a company’s most valuable data.
Implementing your own private cloud from scratch is time-consuming, error-prone and expensive. Abstracting proprietary network and storage gear so it can be managed as code is a wasteful exercise at single- or multi-rack scale. And enterprises that have become comfortable with the cloud can't transition back to old-school bare metal management practices, because those practices slow developers down. This leaves the enterprise in a predicament: how to take advantage of the public cloud's benefits for private cloud data sets.
Frank believes that everything should look like a server so it can be managed as part of the herd, and Mark knows that the network is a point of friction for developers, operations teams and security teams. They both know there is more data than ever sitting on the sidelines, waiting to generate business value yet unaddressable by today's public clouds. So they decided to make the network and storage look like servers: described as structured data and managed with code, just as the hyperscalers do, but at private cloud scale.
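As an illustration, here is a minimal, hypothetical sketch (the hostnames and the apply_baseline helper are invented for this example, not taken from any vendor's tooling) of what managing switches and storage as part of the server herd can look like once everything runs Linux and shares one schema:

```python
# Hypothetical sketch: once switches and storage nodes run Linux and are
# described by the same schema as servers, one code path manages the fleet.

fleet = [
    {"hostname": "compute-01", "role": "server",  "os": "linux"},
    {"hostname": "leaf-01",    "role": "switch",  "os": "linux"},
    {"hostname": "storage-01", "role": "storage", "os": "linux"},
]

def apply_baseline(node: dict) -> None:
    # Placeholder for real tooling (SSH, Ansible, etc.); the point is that
    # the same routine applies to a switch, a server, or a storage node.
    print(f"{node['hostname']}: ensure sshd is running, apply security policy")

for node in fleet:
    apply_baseline(node)  # no per-vendor CLI syntax, no proprietary GUI
```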
Unlock the Value of Data
Platina Command Center, the company's flagship product, enables on-premises orchestration of network, servers and storage, just as in the public cloud. Using Command Center, developers, site reliability engineers and security teams can provision bare metal compute, Kubernetes clusters and Ceph-backed storage, with networking automated along the way. This lets them focus on unlocking the value of enterprise data instead of on data center toil and public cloud costs.
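As a sketch of that workflow (the ClusterSpec and provision names below are hypothetical and do not reflect Command Center's actual API), provisioning as code might look something like this:

```python
# Hypothetical workflow sketch; names are illustrative, not Platina's API.
# It shows compute, Kubernetes and Ceph-backed storage requested as code,
# with networking handled automatically rather than configured by hand.

from dataclasses import dataclass

@dataclass
class ClusterSpec:
    name: str
    bare_metal_nodes: int
    kubernetes: bool
    ceph_storage_tb: int

def provision(spec: ClusterSpec) -> None:
    print(f"allocating {spec.bare_metal_nodes} bare metal nodes for {spec.name}")
    if spec.kubernetes:
        print("bootstrapping a Kubernetes cluster on those nodes")
    print(f"attaching {spec.ceph_storage_tb} TB of Ceph-backed storage")
    print("configuring the network automatically for the new cluster")

provision(ClusterSpec("analytics", bare_metal_nodes=6,
                      kubernetes=True, ceph_storage_tb=200))
```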
Platina Systems activates the value of data by simplifying infrastructure operations at a fraction of the cost of outside storage. By converging scalable compute with storage, small and medium-size businesses can deploy and manage systems for artificial intelligence and active archives, simplify on-premises customer deployments and let developers focus on development and innovation.
For the world’s largest enterprises, Platina Systems technology minimizes data movement, extracts long-term value from their digital assets and reduces the time and expertise needed to implement modern cloud-like infrastructure.
Today, Mark Yin and Frank Yang's vision of bringing the cloud's benefits on-premises is a reality. To see how Platina Systems can bring these benefits to you, request a demo.