Main Menu

Recent posts

#21
Ubuntu Blog / Modern Linux identity managem...
Last post by tim - Apr 01, 2026, 02:01 PM
Modern Linux identity management: from local auth to the cloud with Ubuntu



The modern enterprise operates in a hybrid world where on-premises infrastructure coexists with cloud services, and security threats evolve daily. IT administrators are tasked with a difficult balancing act: maintaining traditional local workflows while managing the inevitable shift toward cloud-native architectures. Identity has emerged as the new security perimeter, replacing traditional network-based defenses. 

At Canonical, we have developed a comprehensive framework to make identity management across Ubuntu server and desktop deployments more secure.

In this blog, we will explore how this framework strengthens authentication and access management controls by bridging the gap between legacy Active Directory environments and modern cloud identity providers.

The foundation: local authentication and its limits

Traditionally, Linux authentication mechanisms relied on credentials maintained in local /etc/passwd and /etc/shadow files. While functional for small, isolated deployments, this approach becomes unmanageable at an enterprise scale. Manual user provisioning is error-prone and time-consuming. It also creates significant security vulnerabilities, particularly when employee access rights change, or users leave the organization.

To ensure modern enterprise systems are protected, it is essential that organizations move beyond these isolated islands of identity. At a minimum, organizations should centralize authentication behind an authoritative source, ensuring consistency whether users are accessing desktops, SSHing into servers, or executing privileged commands.

Active Directory has long been the primary solution, leading many enterprises to integrate all their endpoints into expanding forests. However, this approach is becoming less viable due to limitations with Kerberos security, the explosion in the number of connected devices, difficulties implementing Multi-Factor Authentication (MFA), and the inability to operate effectively over the public internet. While we have invested heavily in this ecosystem with ADSys, we also started looking at how to bring Linux authentication into the modern era.

The cloud shift: modernizing Linux authentication with authd

For organizations embracing cloud-native identity providers (IdPs) like Microsoft Entra ID (formerly Azure Active Directory) and Google Cloud IAM, Canonical has developed authd. This solution addresses the historical barriers that prevented Linux systems from integrating seamlessly with cloud identities.

Authd uses a modular broker architecture. This design separates the core authentication functionality in the daemon from the provider-specific integration logic, allowing Ubuntu to support multiple identity providers simultaneously. A key innovation here is our implementation of the OAuth 2.0 Device Authorization Grant (RFC 8628). This flow allows users to authenticate on a separate device, such as a smartphone, which is particularly helpful for headless servers or SSH connections where a web browser is not available.
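The device grant can be sketched in a few lines. This is a simplified illustration of the RFC 8628 flow, not authd's actual implementation: the IdP endpoints are simulated in-process, and all names are placeholders.

```python
# Sketch of the OAuth 2.0 Device Authorization Grant (RFC 8628).
# In a real broker, request_code and poll_token would be HTTPS calls to the
# IdP's device_authorization and token endpoints; here they are stand-ins.
import time

def device_flow(request_code, poll_token, interval=0.01):
    """Obtain a user code, then poll the token endpoint until a token is
    issued or a terminal error is returned."""
    grant = request_code()  # -> verification_uri, user_code, device_code
    print(f"Visit {grant['verification_uri']} and enter {grant['user_code']}")
    while True:
        result = poll_token(grant["device_code"])
        if result.get("error") == "authorization_pending":
            time.sleep(interval)  # RFC 8628: wait between polls
            continue
        return result  # access token, or a terminal error

# Simulated IdP that authorizes the user on the third poll.
polls = {"n": 0}

def fake_request_code():
    return {"verification_uri": "https://example.com/device",
            "user_code": "ABCD-EFGH", "device_code": "dev-123"}

def fake_poll(device_code):
    polls["n"] += 1
    if polls["n"] < 3:
        return {"error": "authorization_pending"}
    return {"access_token": "token-xyz", "token_type": "Bearer"}

token = device_flow(fake_request_code, fake_poll)
```

The user completes the login (including any MFA prompts) in a browser on another device, while the headless machine simply polls until the grant resolves.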

Authd enables:

  • Multi-Factor Authentication (MFA): on both desktop and servers, leveraging the IdP's native capabilities and security policies.
  • Offline access: credential caching allows users to authenticate even when disconnected from the internet or the identity provider, a requirement for mobile workstations.
  • Identity broker flexibility: admins can install specific brokers (like authd-msentraid or authd-google) as snap packages.
  • Privilege management: centrally grant or revoke sudo privileges based on Identity provider group membership, without manually editing local /etc/sudoers files on individual machines.
  • Centralized auditing and governance: Ubuntu authentication events are logged alongside those of your SaaS applications.
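As a rough outline, enabling cloud logins involves installing the daemon and a broker. The package and snap names below come from the list above; the broker-specific configuration steps (tenant and client IDs for Entra ID, for example) vary by provider and Ubuntu release, so treat this as a sketch rather than a recipe.

```shell
# Install the authd daemon and an identity broker snap (names per the list above).
sudo apt install authd
sudo snap install authd-msentraid

# Each broker ships provider-specific configuration; consult its documentation
# for the tenant/client registration details before first login.

# Illustrative check that the broker service is running:
snap services authd-msentraid
```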

The enterprise bridge: Ubuntu Pro and Active Directory System Services (ADSys)

For enterprises deeply invested in on-premises infrastructure, we provide Active Directory System Services (ADSys). ADSys fills the void left by traditional System Security Services Daemon (SSSD) implementations by serving as a fully functional Group Policy client for Ubuntu.

Available with an Ubuntu Pro subscription, ADSys allows administrators to manage Ubuntu fleets using the same tools and workflows established for Windows. By installing administrative templates on Domain Controllers, you can enforce policies natively through the Group Policy Management Console.

Key technical benefits of ADSys include:

  • Native Group Policy Object (GPO) support: we map Windows GPOs directly to Ubuntu settings, applying computer policies at boot and user policies at login.
  • Privilege management: administrators can grant or revoke sudo privileges to Active Directory users and groups centrally, without modifying local /etc/sudoers files on individual machines.
  • Automated script execution: we support scheduling scripts to execute at system startup, shutdown, login, or logout, enabling automated remediation of configuration drift.
  • Dconf management: administrators can lock down desktop settings, such as forcing screen lock timeouts or setting specific wallpaper configurations.
  • AppArmor profiles management: we allow the enforcement of custom AppArmor profiles on clients to restrict application capabilities system-wide.
  • Certificate auto-enrollment: the certificate policy manager allows clients to enroll for certificates from Active Directory Certificate Services (AD CS). Certificates are then continuously monitored and refreshed by the certmonger daemon.
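On a domain-joined machine, day-to-day ADSys operations run through the adsysctl CLI. The commands below are a sketch; exact subcommands and flags may differ across ADSys versions, so verify them against your installed client.

```shell
# Refresh and apply the machine's Group Policy (run with root privileges).
sudo adsysctl update -m

# Show which GPOs are currently applied.
adsysctl policy applied

# Browse the documentation bundled with the client.
adsysctl doc
```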

Conclusion

Identity management is the foundational security control for modern enterprise Ubuntu deployments. Whether your infrastructure relies on the robust, established hierarchies of Active Directory, or the agile, decentralized nature of the cloud, we provide the tools to improve its security.

In our newly released whitepaper, we provide actionable blueprints and technical specifications to architect, define, and enforce robust identity management controls across your entire server and desktop fleet, regardless of operating system.

We provide a technical examination of modern identity paradigms, including detailed configurations for managing access to cloud and on-premises Linux infrastructure, and practical strategies for seamless and secure integration with legacy AD Domain Services. Furthermore, the paper offers a detailed analysis of the advantages and implementation steps for using SSH certificates for frictionless, auditable SSH authentication, moving beyond simple key management.



Want to learn more about enterprise identity management for Ubuntu Server and Desktop?
Download the whitepaper


Categories: Active Directory, Authd, Authentication, Identity Management, Ubuntu, Ubuntu Desktop, Ubuntu Pro, Ubuntu Server
Source: https://ubuntu.com//blog/modern-linux-identity-management-ubuntu Mar 27, 2026, 09:29 AM
#22
9to5Linux / KaOS Linux 2026.03 Is Out wit...
Last post by tim - Apr 01, 2026, 02:01 PM
KaOS Linux 2026.03 Is Out with Linux 6.19, More systemd Components Removed



The KaOS Linux 2026.03 distribution is now available for download with Linux kernel 6.19, the Niri 25.11 compositor, the Noctalia 4.7 desktop shell, and more.

The post KaOS Linux 2026.03 Is Out with Linux 6.19, More systemd Components Removed appeared first on 9to5Linux - do not reproduce this article without permission. This RSS feed is intended for readers, not scrapers.


Categories: Distros, News, KaOS, KaOS Linux, Linux distribution
Source: https://9to5linux.com/kaos-linux-2026-03-is-out-with-linux-6-19-more-systemd-components-removed Mar 28, 2026, 11:37 PM
#23
Ubuntu Blog / Canonical welcomes NVIDIA’s d...
Last post by tim - Apr 01, 2026, 02:01 PM
Canonical welcomes NVIDIA's donation of the GPU DRA driver to CNCF

At KubeCon Europe in Amsterdam, NVIDIA announced that it will donate the GPU Dynamic Resource Allocation (DRA) Driver to the Cloud Native Computing Foundation (CNCF). This marks an important milestone for the Kubernetes ecosystem and for the future of AI infrastructure.

For years, GPUs have been central to modern machine learning and high-performance computing workloads, yet integrating them into Kubernetes has required specialized tooling and vendor-specific components. The donation of the DRA driver represents a shift toward deeper standardization of GPU orchestration in cloud-native environments. By bringing this technology into the CNCF ecosystem, NVIDIA is helping ensure that advanced GPU scheduling capabilities evolve in the open, alongside the broader Kubernetes community.

This contribution strengthens Kubernetes as the platform for large-scale AI workloads and provides a foundation for more flexible, programmable GPU resource management. To understand why this matters, it helps to look at the broader NVIDIA GPU ecosystem that powers AI workloads on Kubernetes.

The NVIDIA GPU ecosystem for Kubernetes

As of 2026, the NVIDIA GPU stack in Kubernetes is organized into three major layers: the GPU Operator, the Modern Resource Stack built around DRA, and advanced orchestration capabilities such as the Kubernetes AI (KAI) Scheduler. Together, these components transform GPUs from simple hardware accelerators into fully orchestrated infrastructure resources.

The GPU operator: automating GPU infrastructure

The NVIDIA GPU Operator automates the lifecycle management of the software required for GPUs to function inside a Kubernetes cluster. Instead of requiring administrators to manually configure drivers, runtimes, and monitoring tools, the operator deploys and manages these components automatically. This provides a consistent, production-ready environment for GPU workloads.

Typical components deployed by the operator include:

  • NVIDIA Driver: The kernel modules and userspace libraries required for GPU operation are installed through a containerized driver manager.
  • NVIDIA Container Toolkit: This component integrates GPUs with container runtimes such as containerd or CRI-O, allowing containers to access GPU hardware and CUDA libraries on the node.
  • GPU Access Layer: Clusters traditionally used the NVIDIA device plugin to request GPUs using simple integer values. With the introduction of the DRA driver, clusters can adopt the new Kubernetes-native resource model instead. The GPU Operator will install and manage the DRA driver for GPUs in an upcoming release. The use of the device plugin and the DRA driver in the same cluster is, and will remain, mutually exclusive.
  • DCGM Exporter: Exports telemetry such as power usage, temperature, and utilization metrics to Prometheus for monitoring.
  • GPU Feature Discovery (GFD): automatically labels Kubernetes nodes with GPU capabilities, such as memory size or CUDA support.
  • NVIDIA MIG Manager: allows modern GPUs such as NVIDIA H100, NVIDIA H200, and NVIDIA Blackwell to be partitioned into multiple logical GPU instances using Multi-Instance GPU (MIG) technology.

The GPU Operator therefore acts as the operational backbone of GPU infrastructure in Kubernetes clusters.
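Deploying the operator is typically a single Helm release. The repository and chart names below match NVIDIA's published Helm repository; the defaults shown are a sketch and should be tuned per cluster (driver versions, MIG profiles, and so on).

```shell
# Add NVIDIA's Helm repository and install the GPU Operator into its own
# namespace. The operator then rolls out the driver, container toolkit,
# DCGM exporter, GFD, and related components as DaemonSets.
helm repo add nvidia https://helm.ngc.nvidia.com/nvidia
helm repo update
helm install gpu-operator nvidia/gpu-operator \
  --namespace gpu-operator --create-namespace --wait
```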

The DRA driver: a modern resource model for GPUs

The DRA driver represents the next generation of GPU resource management for Kubernetes. Historically, Kubernetes treated GPUs as simple integer resources. A workload would request something like nvidia.com/gpu:1. While effective, this model lacked the expressiveness needed for modern AI workloads.

DRA introduces a richer model based on ResourceClaims, enabling applications to request very specific hardware capabilities rather than just a count of GPUs.  

Examples include:

  • Requesting GPUs connected through NVIDIA NVLink
  • Requesting a specific GPU slice
  • Allocating GPUs across nodes that share memory domains

This level of control becomes essential for modern training workloads, which often rely on tightly coupled GPU communication.
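As an illustration, a DRA request might look like the following ResourceClaim and consuming pod. The API version and the gpu.nvidia.com device class follow upstream DRA conventions, but treat the exact names as assumptions that depend on your Kubernetes and driver versions.

```yaml
# A claim for one GPU from the NVIDIA device class (names illustrative).
apiVersion: resource.k8s.io/v1beta1
kind: ResourceClaim
metadata:
  name: single-gpu
spec:
  devices:
    requests:
      - name: gpu
        deviceClassName: gpu.nvidia.com
---
# A pod that consumes the claim instead of requesting nvidia.com/gpu: 1.
apiVersion: v1
kind: Pod
metadata:
  name: cuda-job
spec:
  containers:
    - name: cuda
      image: nvcr.io/nvidia/cuda:12.4.0-base-ubuntu22.04
      resources:
        claims:
          - name: gpu
  resourceClaims:
    - name: gpu
      resourceClaimName: single-gpu
```

Because the claim is a structured object rather than an integer, it can carry selectors, such as requiring NVLink-connected devices or a specific MIG slice, that the old device-plugin model could not express.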

DRA also introduces several important capabilities:

  • ComputeDomains: This abstraction enables multi-node NVIDIA NVLink communication. On systems such as the GB200, it allows workloads across multiple nodes to behave as if they were running on a single massive GPU.
  • Container Device Interface (CDI): Instead of relying on environment variables such as NVIDIA_VISIBLE_DEVICES, CDI injects devices into containers through a standardized interface, improving reliability and portability. 

With the DRA driver moving to the CNCF, these capabilities become part of a broader open ecosystem for accelerator orchestration.

The KAI scheduler: AI-aware scheduling

Running AI workloads efficiently requires more than just allocating GPUs. It requires scheduling decisions that understand how AI jobs behave. The KAI Scheduler adds a layer of intelligence on top of Kubernetes scheduling. It builds on top of the GPU Operator and the DRA driver to enable more advanced resource coordination.  

Key capabilities include:

  • Fractional GPU allocation: Multiple workloads can share a GPU using memory partitioning or time slicing.
  • Hierarchical queuing: Teams can be assigned GPU quotas, and the scheduler manages fairness and prioritization within those quotas.
  • Gang scheduling for distributed training: Large training jobs often require dozens or hundreds of GPUs simultaneously. KAI ensures these jobs start only when the required resources are available, preventing partially allocated clusters that sit idle.

These capabilities are critical for organizations running large-scale training pipelines or shared AI platforms.
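A fractional-GPU request under KAI might look like the sketch below. The scheduler name, queue label, and gpu-fraction annotation reflect the open source KAI Scheduler's conventions as commonly documented, but they are assumptions here; verify the exact keys against the KAI release you deploy.

```yaml
# Sketch: place this pod in a team queue and share one GPU between workloads.
# Label and annotation keys are assumptions; check your KAI release.
apiVersion: v1
kind: Pod
metadata:
  name: notebook
  labels:
    kai.scheduler/queue: research-team   # hierarchical queue (quota, fairness)
  annotations:
    gpu-fraction: "0.5"                  # fractional GPU allocation
spec:
  schedulerName: kai-scheduler           # opt in to KAI instead of the default
  containers:
    - name: notebook
      image: nvcr.io/nvidia/pytorch:24.03-py3
```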

Why the CNCF donation matters

The donation of the DRA driver to the CNCF represents a significant step toward making advanced GPU orchestration a first-class citizen of the Kubernetes ecosystem. It accelerates the adoption of Kubernetes-native resource models for GPUs, encourages community-driven innovation, and strengthens the foundation for large-scale AI workloads. As AI infrastructure becomes increasingly central to modern platforms, open collaboration around core technologies like GPU scheduling and resource allocation will play a key role in shaping the next generation of cloud-native systems.

Canonical Kubernetes: a platform for cloud-native AI infrastructure

Running modern AI workloads requires more than GPUs and schedulers. It requires a Kubernetes platform that is secure, easy to operate, and capable of supporting large-scale, hardware-accelerated workloads.

Canonical provides a Kubernetes distribution designed to deliver exactly that. Canonical Kubernetes is a lightweight, secure, and opinionated Kubernetes distribution that includes all the components required to deploy and operate a production-ready cluster. It bundles the essential services needed for Kubernetes clusters, including the container runtime, networking (CNI), DNS, ingress, and other operational components, so that teams can deploy and manage clusters with minimal operational overhead.  

By building directly on upstream Kubernetes, Canonical Kubernetes maintains compatibility with the broader cloud-native ecosystem while simplifying lifecycle management. Security updates and upstream Kubernetes releases are delivered in a streamlined way, allowing teams to stay current without the operational complexity typically associated with cluster maintenance. Canonical Kubernetes is designed to support deployments across a wide range of environments: from small clusters used for experimentation to large enterprise deployments operating across multiple regions. The platform integrates naturally with Canonical's broader open infrastructure stack and benefits from the reliability and security of Ubuntu.

For organizations running AI workloads, this provides a stable foundation on which the NVIDIA GPU ecosystem can operate. Components such as the GPU Operator, the DRA driver, and advanced schedulers can be deployed on top of Canonical Kubernetes to enable GPU-accelerated machine learning pipelines, distributed training clusters, and scalable inference platforms.

Together, Canonical Kubernetes and the evolving NVIDIA AI infrastructure ecosystem provide the building blocks needed to run modern AI infrastructure using open, cloud-native technologies.


Categories: AI/ML, KubeCon, nvidia, Ubuntu
Source: https://ubuntu.com//blog/canonical-nvidia-kubecon-2026 Mar 24, 2026, 05:21 PM
#24
Ubuntu Blog / Hot code burns: the supply ch...
Last post by tim - Apr 01, 2026, 02:01 PM
Hot code burns: the supply chain case for letting your containers cool before you ship

The breach we got, and the one that's coming

In September 2025, dozens of popular JavaScript packages, like chalk and debug, were compromised on the npm registry. These packages are so ubiquitous they end up in everything: front-end apps, back-end microservices, and CI tooling. Developers didn't do anything wrong; they just ran the same command they always do: npm install chalk. But then the malware arrived silently.

This wasn't a bug in an operating system. It wasn't a virus on someone's laptop. It was a supply chain attack: someone had poisoned the ingredients developers use to build their software. Nothing exotic: just one developer getting phished, one malicious publish, and millions of downstream consumers letting it in because it looked like a legitimate update.

Indeed, it was a legitimate update: the publisher neither intended to include malware nor knew it was there.

That was just npm. Now imagine the same technique targeting the system libraries your containers depend on before your application even runs, things like libcurl, zlib, or openssl. It would compromise the foundation underneath everything else you run or build.

Welcome to the temperature problem of supply chain security. The industry is shipping code that's still too hot to handle.

Two philosophies for building containers

A growing share of modern software runs inside containers. But whether the code inside has had time to cool, or whether it's served straight off the upstream burner, varies dramatically across the industry.

The nightly-rebuild approach

One increasingly popular philosophy works like this: take the latest version of every package from upstream, rebuild the container image from scratch every night, use tooling to sign it, verify it, and minimize its footprint. On paper, it looks bulletproof. If the source is clean, you can ship good code quickly. If a bug is fixed upstream, you get the patch in your next nightly rebuild.

But if the source is poisoned?

You've just built and signed a perfectly minimal, fully traceable, enterprise-grade malware delivery mechanism. With reproducibility, no less. The backdoor doesn't care about your beautiful infrastructure.

You're serving code straight from the upstream oven. No cooling rack. No resting time. No one checked the temperature.

The intentional update approach

Ubuntu takes a different path. Stable releases ship every two years. Package versions are frozen and security fixes are applied through surgical backports, which means patching vulnerabilities without pulling in new features or unreviewed upstream changes. Updates ship intentionally, with context.

It's not flashy, but it's calm, deliberate, and predictable.

You don't get nightly rebuilds: you get stability and confidence, because the code has already earned its place. It has cooled. It has been tested by time, by scrutiny, and by production workloads that depend on it behaving exactly as expected.

No approach to supply chain security is foolproof. But if someone tries to slip a backdoor into libcurl? An Ubuntu-based container likely never pulled that update, because nothing in the release plan required it. While teams chasing upstream HEAD are plating up code that's still burning hot, an intentional-update model is quietly unaffected.

That container might be running a version of curl from 2022, not because Ubuntu is behind, but because the maintainers know exactly what that version does. And more importantly, what it doesn't. It cooled a long time ago. And cool code is predictable code.

Who ships the backdoor first?

Consider a scenario: a malicious patch gets merged upstream. It's subtle, it's signed, and it passes CI. As a result, it looks clean and is published to the world, all while being piping hot.

A nightly-rebuild pipeline pulls the latest upstream automatically. The image gets built, scanned, still zero CVEs, because it's brand new code. It's signed, minimal, and perfectly malicious. Served at full temperature, no questions asked.

An intentional-update distribution like Ubuntu? The pinned version is older, but it is predictable and stable. Ubuntu maintainers let the code cool, and the poison revealed itself in the upstream before it ever reached the plate.

The problem of zero CVEs

Security scanners love to flag CVEs. Found one? You're in danger. Found zero? All clear.

But the world is more subtle: old code has more CVEs because people have studied it longer, while new code has zero CVEs because no one has examined it yet. For new code, zero CVEs doesn't mean secure; it means unexamined. It means the code is too fresh for anyone to know what's inside.

If you're rebuilding nightly from upstream, you're pulling in code before it has even had a chance to be scrutinized. You're signing first and asking questions later. You're serving the dish before it has had time to cool.

Real security is earned slowly

Security isn't just a scan result. It's a discipline, and discipline requires deliberate restraint: caution before adopting upstream code, discipline in changing what already works, and skepticism in extending trust.

Ubuntu's approach assumes upstream can be wrong, might be hasty, and may even be compromised. So the Ubuntu maintainers curate. They taste first, serve later. They let code cool before it leaves the kitchen, and they never serve anything they haven't inspected themselves.

The nightly-rebuild model bets on minimalism, transparency, and freshness, until freshness becomes a liability. Until "hot off the press" means "too hot to trust."

When freshness becomes a liability

The practice that makes containers look "clean" (nightly rebuilds from upstream) is the same mechanism that pulls a backdoor in. The practice that makes containers seem frozen (backported packages) is what keeps that door closed.

One kitchen grabs every ingredient the moment it arrives and cooks immediately. The other inspects, waits, and only uses what it already knows is safe.

When the next supply chain compromise hits a core system library, it's worth asking: which of these two approaches will ship you the malware, and which one will help you avoid it altogether?

Rebuilding is not verification

You can rebuild every package, scan every layer, and sign every artifact. But if you don't control the intent of the code, if you don't know where it came from, why it changed, or who slipped something into the diff, you're just rebuilding someone else's malware, but faster and with better infrastructure.

Rebuilding is replication. It's only useful when you already know what you're replicating. If you rebuild compromised code faithfully, you're not verifying anything. You're doing the attacker's CI for them. You're reheating someone else's poison and calling it a fresh meal.

A different kind of CVE

When everyone chases "zero known CVEs," we ignore wider risks. We need to stop asking only, "Is this image vulnerable?" and start asking, "Is this image too trusting?"

CVE counts are a lagging indicator. The breach arrives before the scanner lights up. And the real vulnerability isn't the package, it's the philosophy. It's the assumption that upstream is always safe to consume the moment it's published.

Conclusion

This isn't about any single vendor or project. It's about how the industry treats trust and temperature.

The intentional-update model assumes upstream can't always be trusted, so it moves deliberately. It lets code cool. The nightly-rebuild model assumes upstream is trustworthy and must be kept current, so it moves constantly. It serves everything hot.

Sometimes the most secure component in your pipeline is the one you haven't touched in eighteen months. Not because it's forgotten, but because it's had time to cool, and it earned the right to stay.

In software supply chain security, the best code isn't always the freshest. It's the code that cooled long enough for the truth to surface. So let your code cool. It tastes better anyway.


Source: https://ubuntu.com//blog/hot-code-burns Mar 23, 2026, 03:54 PM
#25
Ubuntu News / GNOME 50 dropped support for ...
Last post by tim - Apr 01, 2026, 02:01 PM
GNOME 50 dropped support for accessing Google Drive files

If you're used to accessing your Google Drive in the Nautilus file manager, a heads-up that the feature is no longer available in GNOME 50, which is the desktop version the upcoming Ubuntu 26.04 LTS uses. While GNOME Online Accounts (GOA) integration continues to allow you to sign in to your Google account to enable supported apps to access your contacts, mail and calendar data securely, the toggle to give access to files is now gone. It's that toggle that allows you to remotely mount your Google Drive in Nautilus' sidebar. If you installed the Ubuntu 26.04 LTS beta you [...]

You're reading GNOME 50 dropped support for accessing Google Drive files , a blog post from OMG! Ubuntu . Do not reproduce elsewhere without permission.


Categories: News, GNOME 50, google drive, Ubuntu 26.04 LTS
Source: https://www.omgubuntu.co.uk/2026/03/google-drive-not-working-nautilus-ubuntu-26-04 Mar 31, 2026, 02:13 AM
#26
Ubuntu Blog / Canonical joins the Rust Foun...
Last post by tim - Apr 01, 2026, 02:01 PM
Canonical joins the Rust Foundation as a Gold Member

Canonical's Gold-level investment in the Rust Foundation supports the long-term health of the Rust programming language and highlights its growing role in building resilient systems on Ubuntu and beyond.


AMSTERDAM, THE NETHERLANDS — March 23, 2026 (Open Source SecurityCon, KubeCon Europe 2026) — Today Canonical announced that it has joined The Rust Foundation as a Gold Member. The Rust Foundation is a nonprofit dedicated to advancing the performance, safety, and sustainability of the Rust programming language.

Rust Foundation members play a critical role in supporting the governance, infrastructure, and long-term health of Rust. As a Gold Member, Canonical is making a significant investment in that work while supporting the growing use of Rust in production systems.

"Rust has become a foundational technology for building safe and reliable systems, and its continued success depends on strong collaboration between the open source community and the organizations bringing it into production," said Dr. Rebecca Rumbul, Executive Director and CEO of the Rust Foundation. "Canonical joining the Rust Foundation as a Gold Member is an important signal of Rust's growing role in large-scale systems."

Through its broad work maintaining open source software, Canonical provides long-term security maintenance and support for hundreds of thousands of packages used by developers and organizations worldwide. Canonical's work with Rust starts with providing an up-to-date Rust toolchain for the Ubuntu software repositories, but extends to crafting a first-class Rust developer experience on Ubuntu. Ubuntu recently replaced core system components such as coreutils and sudo with Rust implementations to bolster the resilience of the operating system and the cloud platforms it underpins. Through its participation in the Rust Foundation, Canonical is helping support the continued development and stewardship of Rust.

"As the publisher of Ubuntu, we understand the critical role systems software plays in modern infrastructure, and we see Rust as one of the most important tools for building it securely and reliably. Joining the Rust Foundation at the Gold level allows us to engage more directly in language and ecosystem governance, while continuing to improve the developer experience for Rust on Ubuntu," said Jon Seager, VP Engineering at Canonical. "Of particular interest to Canonical is the security story behind the Rust package registry, crates.io, and minimizing the number of potentially unknown dependencies required to implement core concerns such as async support, HTTP handling, and cryptography – especially in regulated environments."

Canonical joins a growing group of organizations supporting the Rust Foundation's mission to steward the Rust programming language and ensure its long-term sustainability. Through collaboration between industry leaders and the open source community, the Foundation works to strengthen the infrastructure and resources that allow Rust to thrive. Other organizations interested in becoming a Rust Foundation member can learn more at rustfoundation.org/get-involved.

About Canonical

Canonical, the publisher of Ubuntu, provides open source security, support, and services. Its portfolio covers critical systems, from the smallest devices to the largest clouds, from the kernel to containers, from databases to AI. With customers that include top tech brands, emerging startups, governments, and home users, Canonical delivers trusted open source for everyone. Learn more at https://canonical.com/  

About the Rust Foundation

The Rust Foundation is an independent nonprofit organization dedicated to the safety, security, and sustainability of the Rust programming language and the people who use it. Through partnerships with corporate members and the open source community, the Foundation stewards the long-term health of Rust by investing in its maintainers, infrastructure, security, interoperability, and governance. Learn more at https://rustfoundation.org


Categories: Developer Tools, Ubuntu, Ubuntu Desktop, Ubuntu Server
Source: https://ubuntu.com//blog/canonical-joins-the-rust-foundation-as-a-gold-member Mar 23, 2026, 11:15 AM
#27
Ubuntu News / Ubuntu MATE’s founder is step...
Last post by tim - Apr 01, 2026, 02:01 PM
Ubuntu MATE's founder is stepping back after 12 years

Ubuntu MATE is looking for a new maintainer, with current project lead Martin Wimpress revealing he no longer has the 'passion' for the project he once did – nor the time, it seems. Wimpress created Ubuntu MATE back in 2014, pairing Ubuntu with the traditional MATE desktop, initially a fork of the old GNOME 2 codebase and layout but now very much its own thing. Ubuntu MATE was made an official Ubuntu flavour in 2015, and its first official long-term support (LTS) release arrived the following year. There will be no Ubuntu MATE 26.04 LTS release, however, as it did [...]

You're reading Ubuntu MATE's founder is stepping back after 12 years, a blog post from OMG! Ubuntu. Do not reproduce elsewhere without permission.


Categories: News, martin wimpress, Ubuntu 26.04 LTS, Ubuntu MATE
Source: https://www.omgubuntu.co.uk/2026/03/ubuntu-mate-needs-new-maintainer Mar 29, 2026, 09:46 PM
#28
Ubuntu News / Ubuntu 26.10 could drop btrfs...
Last post by tim - Apr 01, 2026, 02:01 PM
Ubuntu 26.10 could drop btrfs, ZFS and LUKS support from GRUB

Ubuntu engineers are debating ways to reduce the number of features present in the signed version of GRUB, the boot loader used on systems with Secure Boot enabled. Canonical engineer Julian Klode proposes dropping support for /boot on btrfs, HFS+, XFS and ZFS filesystems, alongside GRUB's JPEG and PNG image parsers, ahead of Ubuntu 26.10. Apple partition table support, LVM volume handling, all software RAID except RAID 1 and, more controversially, LUKS-encrypted /boot partitions are also on the chopping block. Many of these features are said to be 'inherited by Debian, but never tested in Ubuntu'. "The timing here is crucial", Klode [...]

You're reading Ubuntu 26.10 could drop btrfs, ZFS and LUKS support from GRUB, a blog post from OMG! Ubuntu. Do not reproduce elsewhere without permission.


Categories: News, encryption, GRUB, Secure Boot, security, Ubuntu 26.10
Source: https://www.omgubuntu.co.uk/2026/03/ubuntu-grub-secure-boot-luks-changes Mar 28, 2026, 05:09 AM
#29
Ubuntu News / Ubuntu 26.04 Beta is now avai...
Last post by tim - Apr 01, 2026, 02:01 PM
Ubuntu 26.04 Beta is now available to download

The beta release of Ubuntu 26.04 LTS 'Resolute Raccoon' is now available to download, a month ahead of its planned stable release on 23 April 2026. Ubuntu 26.04 LTS runs on the latest release candidate of Linux kernel 7.0 (yet to be released), includes the new GNOME 50 desktop release and adds a couple of new default apps, including a new system monitoring utility (Resources). Visual changes introduced include a set of colourful new folder icons, a fully opaque Ubuntu Dock, a new default wallpaper and, albeit a little harder to spot, a new boot spinner animation that plays during system [...]

You're reading Ubuntu 26.04 Beta is now available to download, a blog post from OMG! Ubuntu. Do not reproduce elsewhere without permission.


Categories: News, Ubuntu 26.04 LTS
Source: https://www.omgubuntu.co.uk/2026/03/ubuntu-26-04-beta-is-now-available-to-download Mar 27, 2026, 01:17 AM
#30
Ubuntu News / Ubuntu’s App Center now lets ...
Last post by tim - Apr 01, 2026, 02:01 PM
Ubuntu's App Center now lets you manage Deb packages

Ubuntu's App Center software tool makes it easier to manage and update Deb software in its latest update – and nets a few extra options for snaps, too. The changes are part of Canonical's goal of making App Center, first introduced in Ubuntu 23.10, the epicentre (I'm sorry) for software management on Ubuntu, both Snap and Debian-based packages. A recent update to App Center (in Ubuntu 26.04; may come to earlier versions too) adds support for showing and managing Debian packages installed on your system from the Ubuntu repos, using PackageKit and Appstream on the backend. Previously only snaps were [...]

You're reading Ubuntu's App Center now lets you manage Deb packages, a blog post from OMG! Ubuntu. Do not reproduce elsewhere without permission.


Categories: News, App Center, package managers, snap store, Ubuntu 26.04 LTS
Source: https://www.omgubuntu.co.uk/2026/03/ubuntu-app-center-deb-management Mar 26, 2026, 05:17 PM