Hasmik Zmoyan
on 29 September 2023



Ubuntu AI podcast

Welcome to the Ubuntu AI podcast! From fun experiments to enterprise projects, AI has become the centre of attention when it comes to innovation, digital transformation and optimisation.

Open source technologies have democratised access to state-of-the-art machine learning tools and opened doors for everyone ready to embark on their AI journey.

Ubuntu AI will be your guide to the industry, a place where a community of like-minded people meet to make an impact. Today we discuss observability and MLOps.

https://open.spotify.com/episode/1UP2VscxiFrjwJsXoAbbOV?si=dRApbFecRSCt5T3XcdSPfQ
Listen to the Ubuntu AI podcast on Spotify

In this episode we’re talking about observability in MLOps

When we think of observability, we talk about alerts, dashboards and logs: things that help people better understand what's happening with their system. Is that correct?

People used to talk about the three pillars of observability. I don't entirely agree with that way of looking at it, but in its simplest form, observability is logs, metrics and traces.

Why is it important to monitor your MLOps stack? And your MLOps work is not just the stack itself, right?


So there are two main types of monitoring in such systems. The first is the infrastructure itself. That is the standard work that SREs do, because an MLOps stack is just another compute workload: it still runs on top of a Kubernetes cluster or a public cloud, with jobs, functions and containers running, and you need basic observability for all of those.

However, there is also an additional layer of observability that is more business-centric, centred on what the MLOps pipeline is doing and how it is actually used. That means monitoring the data flowing through the entire system: checking for data drift, missing data, and any other kind of anomaly. Patterns that change when we don't expect them to change should be flagged as alerts, because you don't want a model running in production that has been fed no data for a long time, or similar issues.
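To make the data-drift idea concrete, here is a rough sketch (not from the episode) of one common drift score, the Population Stability Index (PSI): it bins a reference sample by its quantiles, compares the bin fractions of current data against it, and values above roughly 0.2 are conventionally treated as drift worth alerting on. The thresholds and bin count below are illustrative assumptions, not prescriptions.

```python
import math
import random

def psi(reference, current, bins=10):
    """Population Stability Index between two numeric samples.

    Bins come from the reference distribution's quantiles; a PSI
    above ~0.2 is commonly treated as significant drift.
    """
    ref = sorted(reference)
    # Quantile cut points taken from the reference sample
    edges = [ref[int(len(ref) * i / bins)] for i in range(1, bins)]

    def fractions(sample):
        counts = [0] * bins
        for x in sample:
            # Bin index: how many edges the value exceeds
            counts[sum(1 for e in edges if x > e)] += 1
        # Tiny smoothing term avoids log(0) and division by zero
        return [(c + 1e-6) / (len(sample) + bins * 1e-6) for c in counts]

    expected = fractions(reference)
    actual = fractions(current)
    return sum((a - e) * math.log(a / e) for e, a in zip(expected, actual))

random.seed(0)
baseline = [random.gauss(0, 1) for _ in range(5000)]
similar  = [random.gauss(0, 1) for _ in range(5000)]   # same distribution
shifted  = [random.gauss(1.5, 1) for _ in range(5000)]  # mean has drifted

print(psi(baseline, similar) < 0.1)   # stable: no alert
print(psi(baseline, shifted) > 0.2)   # drifted: raise an alert
```

In a real pipeline you would compute a score like this per feature on a schedule and wire it to your alerting system, rather than printing it.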

And when it comes to the models themselves, they can also experience model drift, or the model's API may simply become inaccessible.
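One simple way to watch for model drift, sketched here as an illustration rather than anything discussed in the episode, is to track rolling accuracy against a known baseline and alert when it falls below a tolerance. The class name, window size and tolerance below are all hypothetical choices.

```python
from collections import deque

class ModelDriftMonitor:
    """Flags model drift when rolling accuracy falls well below a baseline.

    Window size and tolerance are illustrative, not prescriptive.
    """
    def __init__(self, baseline_accuracy, window=100, tolerance=0.10):
        self.baseline = baseline_accuracy
        self.window = deque(maxlen=window)  # keeps only the latest outcomes
        self.tolerance = tolerance

    def record(self, prediction, actual):
        self.window.append(prediction == actual)

    def drifted(self):
        if len(self.window) < self.window.maxlen:
            return False  # not enough evidence yet
        rolling = sum(self.window) / len(self.window)
        return rolling < self.baseline - self.tolerance

monitor = ModelDriftMonitor(baseline_accuracy=0.92, window=100)
for i in range(100):
    # Simulated feedback: 1 in 4 predictions is wrong (~75% accuracy)
    monitor.record(prediction=1, actual=1 if i % 4 else 0)
print(monitor.drifted())  # True: rolling accuracy 0.75 < 0.92 - 0.10
```

Checking that the model's API is reachable at all is a separate, simpler probe, typically a periodic request to a health endpoint handled by the same alerting stack.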

Listen to the full episode on all major podcast streaming platforms. Subscribe to our channels to be among the first to hear new podcasts from Canonical.



Related posts


Andreea Munteanu
17 February 2025

7 considerations when building your ML architecture

Article AI

As the number of organizations moving their ML projects to production is growing, the need to build reliable, scalable architecture has become a more pressing concern. According to BCG (Boston Consulting Group), only 6% of organizations are investing in upskilling their workforce in AI skills. For any organization...



Andreea Munteanu
12 February 2025

AI in 2025: is it an agentic year?

Article AI

2024 was the GenAI year. With new and more performant LLMs and a higher number of projects rolled out to production, adoption of GenAI doubled compared to the previous year (source: Gartner). In the same report, organizations answered that they are using AI in more than one part of their business, with 65% of respondents



Felipe Vanni
11 March 2025

Join Canonical at NVIDIA GTC 2025

Partners AI

Canonical, the company behind Ubuntu and the trusted source for open source software, is thrilled to announce its presence at NVIDIA GTC again this year. Join us in San Jose from March 18th to the 21st and explore what’s next in AI and accelerated computing. Register for NVIDIA GTC 2025 Unlock AI potential: innovate on
