Containerizing Legacy: How New Architectures Make AI Pervasive

Photo credit: iStockphoto/AvigatorPhotographer

In application architectures, many data scientists see two unequal halves.

They view containers as the modern architecture powering today’s cloud-native applications, while non-containerized, monolithic architectures power the rest. 

This split has left companies managing a hybrid environment, balancing business needs against workload needs.  

“There is no silver bullet from an application architecture perspective. Monolithic [design], containers and microservices have their own advantages and challenges,” says Charlie Dai, principal analyst, Forrester.

While Dai acknowledges that cloud-native application development drives container adoption, “a hybrid approach will be the norm in the next five years.”

The problem with a hybrid approach is that it creates inefficiency, increases operating costs and makes the company less agile. 

AI workloads, many of which sit on monolithic high-performance computing (HPC) hardware, are especially susceptible. While the reasons for choosing monolithic architectures are practical — faster access to massive data loads, for example — the choice makes these workloads less portable. 

A tale of two architectures

Michael Yung, head of digital product and technology at Asia Miles, believes that we should not see the two architectures as adversarial. They should be seen as complementary.  

Asia Miles uses AI and machine learning for customer profiling and propensity modeling and scoring, digital marketing, and business development through offering next best offers and recommendations to customers.  

“Yes, to me, it’s not one architecture versus another, and it's more about how the two architectures work together. There are really not enough business drivers to replace the monolithic architecture, and the container architecture is also too new to most IT teams in town,” he says. 

Companies like HPE are making it easier to containerize legacy monolithic apps when the business drivers emerge. Its Container Platform solution, which brings the earlier BlueData and MapR acquisitions together, allows companies to run cloud-native and non-cloud-native applications on Kubernetes, whether on cloud, bare metal or at the edge. 

According to the company, the solution provides a secure multi-tenant control plane for deploying multiple on-premises or cloud-based Kubernetes clusters. It means companies can now containerize their monolithic, legacy AI workloads.  

“There are benefits to using containers to manage monolithic application deployments,” says Nidhi Ganeriwala, software solutions manager for APIJ at HPE.  

She sees that scaling container instances will be “far faster and easier than deploying additional virtual machines.” 

“Even if you use virtual machine scale sets, VMs take time to start. When deployed as traditional application instances instead of containers, the configuration of the application is managed as part of the VM, which isn't ideal,” Ganeriwala explains. 
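To illustrate the point about scaling, growing a containerized workload on Kubernetes is typically a one-line change to a Deployment’s replica count — no new VMs to provision and boot. A minimal sketch, assuming a hypothetical legacy scoring service (the names and image below are illustrative, not taken from HPE Container Platform):

```yaml
# Hypothetical Kubernetes Deployment for a containerized legacy service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: legacy-scoring-app
spec:
  replicas: 3          # scaling out is a one-line change
                       # (or: kubectl scale deployment legacy-scoring-app --replicas=10)
  selector:
    matchLabels:
      app: legacy-scoring-app
  template:
    metadata:
      labels:
        app: legacy-scoring-app
    spec:
      containers:
      - name: scoring
        image: registry.example.com/legacy-scoring:1.4.2
        ports:
        - containerPort: 8080
```

Each additional replica starts in seconds from the same image, whereas an equivalent VM instance must boot an operating system and apply its configuration first.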

Other advantages include deploying updates as Docker (the container runtime) images, “which is far faster and network efficient,” and the immutability of container design, which means “you never need to worry about corrupted VMs,” she adds. 
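The update and immutability points can be sketched with Kubernetes’ rolling-update mechanism: shipping a new image tag gradually replaces running containers rather than patching a VM in place. A hypothetical Deployment fragment (names and tags are illustrative):

```yaml
# Hypothetical Deployment fragment: an update is just a new image tag.
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # keep most replicas serving during the rollout
      maxSurge: 1
  template:
    spec:
      containers:
      - name: scoring
        # Bumping 1.4.2 -> 1.5.0 triggers a rollout; old containers are
        # replaced wholesale, never patched, so configuration cannot drift
        # and there is no equivalent of a corrupted VM to repair.
        image: registry.example.com/legacy-scoring:1.5.0
```

Because only the changed image layers are pulled from the registry, rollouts are also lighter on the network than redistributing full VM images.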

Such a solution can help companies tackle the top challenges that stop them from containerizing monolithic applications: difficulties in maintenance and upgrades, tightly coupled multi-tier architectures, interconnectedness and interdependence, and the use of obsolete programming languages. 

Simplifying container management

Just ask GM Financial. The financial giant wanted to streamline its operationalization of AI and machine learning to meet its data science expectations. 

Using BlueData (now part of HPE Container Platform), GM Financial enabled multiple AI and ML use cases, including credit risk predictive modeling. Quick, automated deployment gave the company faster time to insight. 

HPE is not stopping there. The Container Platform on HPE Synergy allows companies to “assemble and reassemble” resources on demand, adding agility and scalability benefits to the hardware layer. 

It speeds up deployment from “power-on to developer ready” within hours — not weeks or months. One-click deployment and self-service portals simplify management. 

A matter of choice

An IDC study notes that 51% of new container deployments are for modernizing legacy apps; the rest are for cloud-native microservices. 

Previously, companies did not have much choice of platform for their AI workloads. And once on the monolithic path, switching was not an option — at least not an easy one. 

“Applications running in containers with multi-tiered application architecture have better portability [than a] traditional VM-based stack; while applications with microservices application architecture have maximum elasticity but require substantial refactoring and dependency management compared to traditional approaches,” says Forrester’s Dai.

With solutions like HPE Container Platform, companies now have that choice. The platform gives data scientists and infrastructure managers the flexibility to choose. 

For Asia Miles’ Yung, the question then becomes a matter of business justification.  

“Therefore, the best way is really to pick the best sub-systems/modules to move to the container platform and leave the other as is — until one day the business can justify a transformation of the basic architecture,” he adds.

This article is part of a CDOTrends eGuide. You can download the entire copy here.