Kubernetes and OpenStack solving AI complexities at scale
Canonical
on 4 June 2018
Tags: AI , containers , Interviews , Kubeflow , kubernetes , machine learning , MicroK8s , ML , OpenStack , Snaps
Stu Miniman and John Boyer of theCUBE interviewed Stephan Fabel, Director of Ubuntu Product and Development at the OpenStack Summit in Vancouver. Read on for the full interview, and to hear more on Kubernetes, Kubeflow and MicroK8s.
OpenStack has entered a new phase, maturing to a point where those in attendance are serious about open infrastructure, and they are serious about implementing Kubernetes.
Interest in Kubernetes is high, yet there remains an element of pathfinding to be done on the Kubernetes side. What needs to be made clear is that the technology works perfectly well with OpenStack.
Stephan Fabel told theCUBE: “What we see is a use case for Kubernetes and OpenStack to work perfectly well together, be an extension of each other, possibly also sit next to each other.”
“Having something like Kubernetes container based developer APIs that are providing that orchestration layer are the next thing, and they run just perfectly fine on Canonical OpenStack.”
Kubernetes and containers are also an important method for addressing the problem of scale when it comes to dealing with other industry-wide hot topics – AI and machine learning.
Artificial Intelligence isn’t new. As a subject matter it has been present for decades, but it’s never truly been a realistic possibility until recent technological advancements moved it from being solely a topic of science fiction.
True AI is still in its infancy, but some sizeable problems are already being identified. One of them is the issue of scale, and the complexity it will inevitably create.
Smart cities, connected cars, the productization of machine learning applications, and more all spell large-scale complexity. Use cases that require vast scale, such as these, create a requirement for machine learning to be managed – a problem that containers and Kubernetes can solve.
Canonical Founder & CEO Mark Shuttleworth on automating the data centre.
This isn’t just a pipe dream either, Fabel said: “Every single car manufacturer that we’re talking to is interested in solving the operational problem of running machine learning frameworks as a service.” The operational challenges presented by this are difficult to overcome, but customers are seeking solutions from Canonical.
Kubeflow, a machine learning framework that runs on top of Kubernetes, is becoming one of the key enablers of machine learning frameworks as a service, lowering the barriers to getting started with this new generation of workloads.
Fabel said: “I think they’re a great example because they can really show how that as a service notion can be implemented on top of a virtualisation platform,” whether that is KVM, bare-metal, or on OpenStack.
MicroK8s
Finding a starting point is often one of the biggest challenges. So lowering the barrier to entry to something like Kubernetes can be vital.
MicroK8s started as a solution to the problem of standing up a Kubernetes cluster without a public cloud or an internet connection – for example, on a local laptop. Kubeflow can also run on MicroK8s, with offerings such as TensorBoard, and works in a fashion that is compatible with both on-premises and public cloud machine learning frameworks.
That was just the start: MicroK8s also emerged as an answer to offerings in the market that tend to be more cumbersome than developers would like.
Installed as a snap on your machine, it gives you a Kubernetes API, and in no time at all you can enable DNS and be up and running.
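As a rough sketch, getting a local cluster running looks like this (exact channels and flags may vary between Ubuntu and MicroK8s releases):

```shell
# Install MicroK8s as a snap (requires snapd; --classic grants the
# confinement the Kubernetes components need)
sudo snap install microk8s --classic

# Wait until the local Kubernetes API is up
microk8s status --wait-ready

# Enable in-cluster DNS
microk8s enable dns

# Talk to the local API with the bundled kubectl
microk8s kubectl get nodes
```

Because everything ships in a single snap, the same steps work offline from a pre-downloaded snap file, which is what makes MicroK8s usable on an air-gapped laptop.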
Run Kubeflow anywhere, easily
With Charmed Kubeflow, deployment and operations of Kubeflow are easy for any scenario.
Charmed Kubeflow is a collection of Python operators that define integration of the apps inside Kubeflow, like katib or pipelines-ui.
Use Kubeflow on-prem, desktop, edge, public cloud and multi-cloud.
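Charmed operators are deployed with Juju. A minimal sketch, assuming Juju is installed and already connected to a Kubernetes cluster (the bundle name and channel below are illustrative and may differ between releases):

```shell
# Create a model (namespace) to hold the Kubeflow applications
juju add-model kubeflow

# Deploy the Charmed Kubeflow bundle; --trust lets the operators
# manage the cluster resources they need
juju deploy kubeflow --trust

# Watch the operators bring up katib, pipelines-ui and the rest
juju status --watch 5s
```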
What is Kubeflow?
Kubeflow makes deployments of Machine Learning workflows on Kubernetes simple, portable and scalable.
Kubeflow is the machine learning toolkit for Kubernetes. It extends Kubernetes’ ability to run independent and configurable steps with machine-learning-specific frameworks and libraries.
Install Kubeflow
The Kubeflow project is dedicated to making deployments of machine learning workflows on Kubernetes simple, portable and scalable.

You can install Kubeflow on your workstation, local server or public cloud VM. It is easy to install with MicroK8s on any of these environments and can be scaled to high availability.
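A hedged sketch of that installation path on a fresh machine; the add-on names below (notably `kubeflow`) are assumptions that have changed across MicroK8s versions, so check `microk8s status` for what your release offers:

```shell
# Install MicroK8s and wait for the cluster to come up
sudo snap install microk8s --classic
microk8s status --wait-ready

# Enable the prerequisites Kubeflow components rely on
microk8s enable dns storage

# Enable the Kubeflow add-on (availability varies by version)
microk8s enable kubeflow
```

The same commands apply whether the host is a laptop, an on-prem server or a public cloud VM, which is what makes the workflow portable across those environments.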