Episode 458 Show Notes

Welcome to mintCast

the Podcast by the Linux Mint Community for All Users of Linux

This is Episode 458!

Recorded on Sunday, March 30, 2025.

Getting hailed on, I’m Joe; can’t get rid of me, I’m Moss; another year older, I’m Dale

— Play Standard Intro —

  • First up in the news: New GIMP, Debian comes to a RISC-V tablet, Google explains why they are putting Terminal on Android, Asahi Linux loses another top dev, Plex goes for the gold – yours, meet EU OS, Kernel 6.14 is released, Gnome 48 released, new GRUB updates, AerynOS is released with GNOME 48;
  • In security and privacy: “MyTerms” wants to let the user dictate privacy;
  • Then in our Wanderings: Moss plays Musical Tablets, Joe Moxes the Prox, Dale has a burpday, Majid is on holiday and Bill is off truckin’ somewhere…
  • In our Innards section: Dale takes us through Mobile Networks
  • In Bodhi Corner, Moss covers new translations and work on the next version.
  • And finally, the feedback and a couple of suggestions
  • Please remember if you want to follow along with our discussions, the full show notes for this episode are linked in the show’s description at mintcast.org/show-notes

— Play News Transition Bumper —

The News

20 minutes

  • GIMP 3.0 Image Editor Is Now Available for Download Joe
    • from 9to5 Linux
    • The development team behind the popular GIMP open-source image editing software published today the final build of the highly anticipated GIMP 3.0 release. Let’s take a look at the new features and improvements!
    • Highlights of GIMP 3.0 include a much-refined interface written in GTK3 that allows the use of the mouse scroll wheel to flip through different dockable dialog tabs, along with a new splash screen and logo, as well as improvements to the legacy icon theme to look great on HiDPI screens.
    • GIMP 3.0 also brings massive improvements to color management, a stable public API to allow porting of plug-ins and scripts from the GIMP 2.10 series, support for loading layers from TIFF files saved in the Autodesk Sketchbook format, and support for 64 bits per pixel images for the BMP format.
    • GIMP 3.0 also brings improvements to non-destructive editing by introducing an optional “Merge Filters” checkbox at the bottom of NDE filters that merges down the filter immediately after it’s committed, along with non-destructive filters on layer groups and the ability to store versions of filters in GIMP’s XCF project files.
    • Among other noteworthy changes, the GEGL and babl components have been updated with new features and many improvements, such as Inner Glow, Bevel, and GEGL Styles filters, some plugins saw small enhancements, and it’s now possible to export images with different settings while leaving the original image unchanged.
    • There’s also a new PDB call that allows Script-Fu writers to use labels to specify filter properties, a brand new named-argument syntax, support for loading 16-bits-per-channel LAB PSD files, support for loading DDS images with BC7 support, early-binding CMYK support, and support for PSB and JPEG-XL image formats.
    • On top of that, GIMP 3.0 introduces new auto-expanding layer boundary and snapping options, an updated search pop-up to show the menu path for all entries while making individual filters searchable, a revamped alignment tool, and support for “layer sets”, replacing the older concept of linked layers.
    • GIMP 3.0 is available for installation as a Flatpak app directly from Flathub. If you’re using GNOME or KDE Plasma, double-click the flatpakref file, and it will open in the GNOME Software or Plasma Discover graphical package manager, or you can install it via the command line.
    • You can also download GIMP 3.0 as a universal AppImage bundle from the official website, which you can run on virtually any GNU/Linux distribution without installing anything, or as a source tarball if you fancy compiling it from sources. Check out the release notes for more details.
  • A Debian-Based Distro and Hardware Upgrades Come to the PINE64 RISC-V Tablet Moss
    • from It’s FOSS
    • PINE64 has quietly upgraded the PineTab-V tablet, with it now shipping with a new Debian-based Linux distribution developed by StarFive and some new hardware. This new addition to the PINE64 tablet family kicks things up a notch in terms of both utility and software compatibility.
    • Earlier, the PineTab-V didn’t come installed with an operating system and was only available for developers and early adopters back in 2023.
    • In terms of hardware, this updated model adds a new accelerometer, a fix for a slow charging issue, an LED status indicator light, and a unique identifier for the PineTab-V in the EEPROM (Electrically Erasable Programmable Read-Only Memory).
    • The last bit helps the bootloader, OS, or firmware identify that the device is a PineTab-V, allowing for proper hardware detection and configuration.
    • The article goes on to list the specs and purchase information for the PineTab-V; see the link in the Show Notes.
  • Google Explains Why It Added a Linux VM to Pixel Phones Moss
    • from HowToGeek
    • The Linux Terminal app was an unexpected but welcome addition to Android. Now, a Google developer confirms that the Linux Terminal isn’t intended to provide a desktop experience. It simply brings Linux apps into the Android ecosystem.
    • Google quietly included the Linux Terminal app with the March 2025 Pixel Drop. It wasn’t accompanied by a big announcement or blog post, and its pre-release development mostly flew under the radar, so Android fans were quick to make assumptions about its purpose.
    • One popular assumption—that this Linux VM would power the next-gen Android desktop environment—turned out to be false. Someone actually requested Linux desktop functionality on the Google Issue Tracker in January, arguing that Android should automatically enter a virtualized Linux desktop environment when plugged into an external monitor. A Google developer who initially overlooked the ticket provided a belated response on March 10th, explaining that “the main purpose of this Linux Terminal feature is to bring more apps (Linux apps/tools/games) into Android.”
    • The developer goes on to say that Linux Terminal will not be bundled with a desktop management system—it won’t be the powerhouse behind Android’s desktop functionality. Instead, Google will continue developing the native Android desktop environment, which should provide a better experience for the average user.
      • “The main purpose of this Linux terminal feature is to bring more apps (Linux apps/tools/games) into Android, but NOT to bring yet another desktop environment … We think it would in general be bad to present multiple options for the window management on a single device.”
    • This isn’t to say that Linux desktops are banned from Android. If you want a Linux desktop GUI, you’re free to install XFCE, GNOME, or another desktop management system in the VM. Hardware manufacturers can ship their smartphones or tablets with a Linux desktop, too, meaning that we could see some interesting new devices in the next year or so.
    • But, by default, the Linux Terminal is just a simple command line. It’s Debian-based, so experienced Linux users shouldn’t have much trouble operating it, though it’s still missing some functionality, such as hardware acceleration and audio support. However, as Android Authority demonstrates, it’s already possible to run games like DOOM with a bit of tweaking.
      • “This however doesn’t mean that we prohibit the installation of any Linux desktop management system (xfce, gnome, etc.) in the VM. I just mean that those won’t be provided as the default experience as you would expect.”
    • The average Android user probably can’t even tell you what Linux is. So, if Google wants to bring Linux apps to the Android platform, it needs to find an intuitive way to run the applications without exposing users to a CLI. It should also develop a Linux app distribution method (ideally a Play Store integration) and provide an option to add Linux app shortcuts to the home screen.
    • Unfortunately, I don’t know if Google will actually implement these user-friendly features. The company doesn’t do much hand-holding when it comes to Linux on Chrome OS, so the same may be true for Android.
  • Asahi Linux loses another prominent dev as GPU guru calls it quits Dale
    • from The Register
    • Another developer has dropped out of Asahi Linux, the project to get Linux up and running on Apple silicon.
    • On Tuesday, a developer going by “Asahi Lina” announced she would be pausing work on Apple GPU drivers indefinitely. Asahi Lina posted on Bluesky: “I no longer feel safe working on Linux GPU drivers or the Linux graphics ecosystem.”
    • The Asahi Linux project added the developer to the “Past major contributors” list and described Lina as a “GPU kernel sourceress.”
    • “Lina joined the team to reverse engineer the M1 GPU kernel interface, and found herself writing the world’s first Rust Linux GPU kernel driver. Outside of GPUs, she sometimes hacks on open source VTuber tooling and infrastructure.”
    • A VTuber is an online entertainer who uses a virtual avatar instead of a webcam to present on video streaming platforms. It’s an effective way to conceal one’s identity.
    • Lina’s departure follows that of Hector Martin, Asahi Linux project lead, who resigned in February, citing developer burnout, demanding users, and Linus Torvalds’s handling of the integration of Rust code into the open source kernel.
    • The departure will cause headaches for Linux graphics support on Apple Silicon. Alongside Alyssa Rosenzweig, Asahi Lina had been one of the most prominent developers working on the Apple Linux GPU graphics stack.
    • When Martin resigned from the project, a list of seven contributors was compiled to manage the project going forward. Asahi Lina was not on that list.
    • Asahi Linux aims to port Linux to Apple Silicon Macs. It has made solid progress over the years and in December 2024 pushed out Fedora Asahi Remix 41, which, in the experience of this Mac Mini M1 user, worked admirably well. Considering it was only 2022 when the first Asahi Linux Alpha release turned up, work has been rapid, and the installation process is now far less daunting.
    • Despite the changes within the Asahi Linux project, Fedora Asahi Remix 42 Beta was announced this week, with the non-beta version due in approximately a month, alongside the overall Fedora Linux 42 release.
    • One notable change for Apple Silicon users will be the integration of FEX in Fedora Linux. This should make the running of x86 and x86-64 binaries via emulation easier.
  • Plex is cracking down on its version of password sharing, and it’s coming with a price hike Dale
    • from AndroidPolice
    • When it comes to streaming services, I can’t think of one more beloved than Plex. Sure, it’s not exactly a Netflix alternative, but whether you’re running your own home server or browsing the collection of someone else, it’s a great way to keep a large amount of media browsable in one space. That said, one of Plex’s main use cases — serving as a way for lots of people to browse a single shared server — is going through a pretty big change, and it’s bound to leave some customers frustrated.
    • Today’s news is actually sort of two announcements tied together, and it’s worth tackling both of them head on before considering how they might affect Plex fans (via TechCrunch). First and foremost, you’ll soon need to be a Plex Pass subscriber to use remote playback, which is likely how most people interact with the app. Starting on April 29th, unless you’re streaming on a local network, you’ll need some sort of paid Plex subscription to access your library.
    • Now, unsurprisingly, Plex has thought through what might happen if server owners are unwilling to pay. So, if the owner of a server you rely on does not pay for Plex Pass, you’ll be able to pay for a Remote Watch Pass for either $2 per month or $20 per year, which will reactivate remote streams for your account. Either way, though, someone’s going to have to pay beginning next month.
    • It’s very reminiscent of how Netflix handled password sharing with accounts over the past two years, albeit with a Plex server-based twist. Obviously, it’s not identical — you don’t typically share an actual password with Plex, but access to the server — but the end result is the same. Either the server owner or the server viewer will need to pay more for account access.
    • And speaking of paying more, that’s the other big Plex change. Timed not-so-perfectly with today’s remote viewing changes, Plex Pass is getting much more expensive in just over a month. On April 29th, monthly pricing will run users $7 per month (up from $5), annual pricing will jump to $70 (up from $40), with the lifetime pass now running fans $250 (up from a measly $120). Those are some big jumps for a platform that has stayed relatively unchanged with its pricing plan for a while, but thankfully, you have until April 29th to lock in on an annual or lifetime plan at those rates.
    • This is affecting all users, though, so if you want the cheapest possible plan, grabbing the lifetime subscription for $120 is bound to be the way to go.
    • If you’re feeling frustrated, Plex does have some good news today to go with the bad: Mobile unlock fees are dead. Previously, to stream content to Android or iOS devices, you needed to either pay a one-time activation fee or subscribe to Plex Pass to get a full experience; otherwise, your playback was capped after 60 seconds. That’ll go away when the new (and surprisingly well-received!) mobile Plex app goes live in the near future, which should help make those new fees feel a little more understandable.
  • EU OS Is a New Community-Led Linux Alternative for Europe’s Public Sector Moss
    • from Linuxiac
    • Here’s something interesting that caught my attention recently—a new community-led project called “EU OS” that plans to offer a free, Fedora-based Linux operating system specifically tailored for Europe’s public sector.
    • First and foremost, this initiative is still in its very early phases—the official project documentation includes a conspicuous warning that, at the moment, it is “a work in progress.” Simply put, they haven’t released anything yet — no install ISO, no alpha version, not even technical details. Just an idea at this point.
    • It is designed as a Proof-of-Concept built upon Fedora Linux, complemented by the KDE Plasma desktop environment. According to the developers, countries could add a specialized “national layer,” regions could add their own enhancements, and individual organizations could fine-tune additional functionalities.
    • This layered approach is planned to allow administrators to combine shared core elements—including user management, device provisioning, software deployment, and data handling—with more targeted adjustments. As a result, developers and IT experts can focus on local priorities without losing sight of collective European interests.
    • Moreover, EU OS aims to embody the “public money, public code” principle, whereby software funded by the government is open for everyone to use, improve, and share. That philosophy is expected to spur innovations well beyond European borders, as no per-seat licensing fees and flexible standards attract fresh ideas from both the public and private spheres.
    • Looking ahead, the project plans to leverage CI workflows to build an “atomic operating system,” run installation trials on various hardware, and demonstrate real-world Proof-of-Concept among early-adopting organizations.
    • Meanwhile, supporters hope that the European Commission itself will step in by hosting EU OS on a dedicated platform and providing substantial backing.
    • So, let’s make it clear – even though the name EU OS might suggest a connection to the European Union (as well as a strong reference in the logo), it doesn’t actually seem to have any official ties to the EU organization itself. In my opinion, the name is a bit misleading, and it’s easy to see how people might mistake it for an official EU initiative – but it’s not.
    • One more thing about the logo—maybe it’s worth rethinking. The mouse cursors replacing the familiar stars kind of look like missiles being launched or fighters, at least to me. That imagery doesn’t really match the message the abbreviation “EU” is meant to convey.
    • Last but definitely not least, I find the choice of Fedora as the base a little surprising. Since Fedora is largely developed in the U.S. (by Red Hat, which IBM owns), it doesn’t exactly scream “European.” With a strong European option like (open)SUSE available, Fedora doesn’t strike me as the most obvious foundation for a project calling itself EU OS.
    • For more information, visit the project’s website.
  • Linux Kernel 6.14 Officially Released, This Is What’s New Joe
    • from 9to5 Linux
    • Today, Linus Torvalds announced the release and general availability of Linux 6.14, the latest stable kernel version that introduces several new features and improvements, better hardware support, and more.
    • Highlights of Linux 6.14 include Btrfs RAID1 read balancing support, a new ntsync subsystem for Windows NT synchronization primitives to boost game emulation with Wine, uncached buffered I/O support, and a new accelerator driver for the AMD XDNA Ryzen AI NPUs (Neural Processing Units).
    • Also new is DRM panic support for the AMDGPU driver, reflink and reverse-mapping support for the XFS real-time device, Intel Clearwater Forest server support, support for SELinux extended permissions, FUSE support for io_uring, a new fsnotify file pre-access event type, and a new cgroup controller for device memory.
    • It also brings core energy counter support for AMD CPUs, power supply extensions to allow adding properties to a power supply device from a separate driver, support for T-Head vector extensions for RISC-V architectures, and power management suspend/resume support for Raspberry Pi devices.
    • Other new features in Linux kernel 6.14 include KVM hypercall service support for usermode VMM for LoongArch, a new PCI error recovery status mechanism for IBM System/390, SRSO_USER_KERNEL_NO support for AMD hardware, and manual fan control support on Dell XPS 9370 laptops.
    • On top of that, Linux 6.14 adds support for a greater range of MBQ access sizes and deferred read/write support for SoundWire devices, ACPI support for Rockchip SFC controllers, support for Atmel SAM7G5 QuadSPI and KEBA SPI controllers, and support for Blaize BLZP1600 and SpacemiT K1 SoCs.
    • Also new is support for restartable sequences on the OpenRISC architecture, support for amd-pstate preferred core rankings, SHA512 support for signing kernel modules, support for allocating and freeing “frozen” pages, a new zpdesc memory descriptor, and new BPF kfuncs for disabling and restoring CPU interrupts.
    • Further enhancements to ALSA rawmidi and sequencer APIs for MIDI 2.0 have also been added, along with a new feature that promises to significantly reduce the duration of system suspend and resume transitions on some machines, as well as lazy preemption support for the PowerPC architecture.
    • There’s also support for large folios for tmpfs, compress-offload API extensions for ASRC (Asynchronous Sample Rate Converter) support, support for restartable sequences for OpenRISC, NFSv4.2+ attribute delegation, dynamic NFSv4.1 session slot table resizing, and improved support for Snapdragon X CPUs.
    • Some interesting networking improvements in Linux kernel 6.14 include IPsec support for IP-TFS/AggFrag encapsulation allowing aggregation and fragmentation of the inner IP, support for jumbo data packet transmission in RxRPC sockets, and phylib support for in-band capabilities negotiation.
    • Furthermore, there’s support for configuring a header-data-split threshold (HDS) value via ethtool, a unified and structured interface for reporting PHY statistics, support for ipv4-mapped IPv6 address clients in smc-r v2, as well as netlink notifications for multicast IPv4 and IPv6 address changes.
    • Of course, there are many new and updated drivers for better hardware support including a new driver for the SM8750 platform, MT8188 Mali-G57 MC3 support in the Panfrost driver, support for the Nacon Evol-X and Nacon Pro Compact Xbox One controllers, a new EDAC driver for Loongson SoCs, support for the Intel Touch host controller, and PCI Wacom device support.
    • Linux 6.14 also adds support for the SteelSeries Arctis 9 wireless gaming headset, a new Intel CRPS185 power supply PMBus client driver, support for optional CPU fan on AMD 600 motherboards, support for the ASUS TUF GAMING X670E PLUS motherboard, support for the 8BitDo controller, and support for Intel Xeon “Clearwater Forest” processors.
    • The list continues with a new cpufreq driver for Airoha SoCs, port filtering support for NVIDIA’s NVLINK-C2C Coresight PMU, support for Nacon Evol-X Xbox One controllers, support for Marvell Odyssey DDR and LLC-TAD PMUs, support for Nacon Pro Compact controllers, and support for the Allwinner suniv F1C100s.
    • Last but not least, Linux kernel 6.14 brings support for Awinic AW88083 and Realtek ALC5682I-VE sound chips, Focusrite Scarlett 4th Gen 16i16, 18i16, and 18i20 interfaces, and an unofficial Xbox 360 wireless receiver clone. It also brings more Rust updates for building the kernel using only stable Rust features.
    • You can download Linux kernel 6.14 right now from Linus Torvalds’ git tree or the kernel.org website if you fancy compiling it on your GNU/Linux distribution. However, I recommend waiting for the new Linux release to arrive in your distro’s stable software repositories before updating your kernel.
    • Now that Linux kernel 6.14 is out the door, the merge window opens for the next major kernel branch, Linux 6.15, which is expected at the end of May or early June 2025. Until then, a first Release Candidate (RC) development version will be available for public testing in two weeks, on April 6th.
  • GNOME 48 lands with performance boosts, new fonts, better accessibility Dale
    • from The Register
    • GNOME 48 is here, with some under-the-hood tweaks to improve performance even on low-end kit.
    • This release brings improved support both for high and low-end display tech, a new media player, revamped notifications handling, and more. It should run faster and more smoothly even on low-end computers and on low-powered GPUs, and use less memory while doing it. The Orca screenreader works better in the default Wayland session, and GNOME’s global default theme, Adwaita, now has new default fonts.
    • We looked at the beta back in February, so we’ll try not to go over the same features again. The release notes are clear, readable, and aimed at a general audience, so for a full rundown you should give them a look.
    • GNOME 48 showing the overview with two virtual desktops and the favourite apps in the Dash at the bottom
    • The release is codenamed Bengaluru after the city formerly known as Bangalore. The capital of Karnataka and such a major tech hub that it’s sometimes known as India’s Silicon Valley, the city was the venue for last year’s GNOME Asia conference.
    • Some of the bigger changes in version 48 are either under the hood, or less immediately obvious unless you’ve got an exceptionally keen eye for typography. Two that we’re particularly pleased to hear about are changes that help performance on low-end computers and improve access for people with visual disabilities.
    • At long last, support for dynamic triple buffering has made it to this release. We say “at last” because it has been repeatedly postponed and missed several prior releases. The feature helps the computer to keep up with the GPU and the screen. On slower machines, it should mean smoother animations and movement such as scrolling. On higher-end machines, it can improve the performance of HiDPI displays. With matching driver support, it can also tell GPUs when they need to run faster.
    • Less visible but still important are improvements to GNOME’s built-in JavaScript engine, GJS, which now uses less memory and less CPU, as well as the background file-indexer, the GTK toolkit, and the Files app. All should now be more responsive. Also, GNOME’s built-in screenreader Orca – which is also used with some other desktops, such as Elementary OS – has been upgraded and it now works better in Wayland.
    • Both these matter because as well as being the default desktop of all the big enterprise distros, GNOME is also part of Endless OS, a nonprofit-backed distro aimed at lower-end machines. Endless is trying to improve access to computers in developing economies, especially in Latin America. Notably, this includes people without always-on internet access, where its main rivals, Google’s ChromeOS and ChromeOS Flex, are less useful.
    • On the high end, if you’re lucky enough to have a High Dynamic Range (HDR) display, then GNOME now natively supports this. It needs to be manually enabled and isn’t compatible with many display subsystems yet, but it’s an important first step. A slight snag is that on some displays this means hardware brightness control no longer works, so it must be simulated in software.
    • As we have covered before, the new global theme in GNOME 4x has caused controversy, even leading developers to implore “Please don’t theme our apps.” Now the Adwaita theme includes new default fonts both for the general UI as a whole, and a new monospace font that you’ll see in terminal windows and so on. Adwaita Sans and Adwaita Mono are based on the Inter and Iosevka typefaces respectively.
    • The Settings app has a new Wellbeing section with tools to help you remember to get off your computer now and then. It can track how much time you’ve spent using the machine, switch the display to grayscale, and other helpful features. This vulture can attest that having such features on even a very low-end smartwatch helps remind the meatsack that it needs to get up and move around occasionally.
    • There are a range of tweaks and updates to most of the various accessory programs that come with GNOME, but we talked about the changes there when we looked at the beta last month.
    • We gave the new version a spin both on hardware and in VirtualBox. It is quite demanding of your GPU, and we had to make some tweaks in both environments. We tested on the metal on a ThinkPad X301, dating back to 2008 and one of the lowest-end machines in The Reg FOSS desk’s test fleet. It has a Core 2 Duo SU9400 low-power CPU, with the chipset providing GMA 4500MHD graphics. Even so, GNOME 48 ran surprisingly well and felt smooth and responsive. Mostly we couldn’t spot the difference with the new fonts, although at smaller sizes we did feel that Adwaita Sans looked slightly faint and fuzzy.
    • GNOME 48 will be the default desktop environment in both Fedora 42 and Ubuntu 25.04, both of which are expected next month, and should also be included in Debian 13.
  • GRUB Bootloader Received 73 Patches To Fix A Variety Of Recent Security Issues Moss
    • from Phoronix
    • The GRUB bootloader saw a set of 73 patches last month for addressing a variety of security flaws that were discovered.
    • Flying under the radar until now was a set of 73 patches posted in February to address a number of security issues, several of which were issued CVEs for potentially exploitable security woes.
    • While the patches have been public for a month and committed to the GRUB Git codebase, no new tagged GRUB version has been published. In fact, there have been no new GRUB releases since GRUB 2.12, already 15 months ago.
    • These GRUB security patches only appeared on my radar today with the GNU Boot 0.1 RC6 release. The new GNU Boot release candidate calls attention to the multiple security issues facing GRUB, and thus the developers updated their included copy of GRUB with the necessary security patches. Among the GRUB security issues potentially impacting GNU Boot users:
      • “Users having replaced the GNU Boot picture / logo with untrusted pictures could have been affected if the pictures they used were specially crafted to exploit a vulnerability in GRUB and take full control of the computer. In general it’s a good idea to avoid using untrusted pictures in GRUB or other boot software to limit such risks because software can have bugs (a similar issue also happened in a free software UEFI implementation).
      • Users having implemented various user-respecting flavor(s) of secure-boot, either by using GPG signatures and/or by using a GRUB password combined with full disk encryption are also affected as these security vulnerabilities could enable people to bypass secure-boot schemes.
      • In addition there are also security vulnerabilities in file systems, which also enable execution of code. When booting, GRUB has to load files (like the Linux or linux-libre kernel) that are executed anyway. But in some cases, it could still affect users.
      • This could happen when trying to boot from an USB key, and also having another USB key that has a file system that was crafted to take control of the computer.”
    • The 73 patches can be found on the GRUB mailing list along with more details on the issues for those interested. The issues range from out-of-bounds writes to integer overflows, the dump command now being blocked in lockdown mode when using Secure Boot, and other issues.
    • The only bit of good news is that the “major Linux distros carry or will carry soon one form or another of these patches,” so the likelihood of these issues being exploited at scale is hopefully minimal. Today’s GNU Boot announcement does note that some free software Linux distributions endorsed by the FSF are not comfortable using GRUB Git snapshots and are thus still vulnerable:
      • “For most 100% free distributions, using GRUB from git would be a significant effort in testing and/or in packaging.
      • We notified Trisquel, Parabola and Guix and the ones who responded are not comfortable with updating GRUB to a not-yet released git revision. Though in the case of Parabola nothing prevent adding a new grub-git package that has no known vulnerabilities in addition to the existing grub package, so patches for that are welcome.”
    • Hopefully GRUB will be able to improve its release process as a side effect of these issues.
  • AerynOS 2025.03 Released with GNOME 48, Mesa 25, and Linux Kernel 6.13.8 Dale
    • from 9to5 Linux
    • AerynOS 2025.03 is out today as the first release of this independent Linux distribution to ship with the latest and greatest GNOME 48 desktop environment in a live ISO image that anyone can test on their PCs.
    • That’s right, AerynOS is one of the first distros to package and deliver the recently released GNOME 48 desktop environment to its users. GNOME 48 is a huge update featuring HDR support, dynamic triple buffering, Wayland color management protocol, a Wellbeing feature, battery charge limiting, and more.
    • While still considered alpha quality, AerynOS 2025.03 is powered by Linux kernel 6.13.8 (with Linux kernel 6.14 in the pipeline) and features the latest and greatest Mesa 25.0.2 open-source graphics stack. The updated toolchain consists of LLVM 19.1.7, Vulkan SDK 1.4.309.0, and many of the latest tools.
    • Another interesting change in this release is support for os-info in the moss package manager to generate the os-release file. Ikey Doherty says that while os-release and lsb_release provide very primitive identification and metadata, os-info is designed to provide compatibility with those formats while being far more expressive.
      • “Previously we had moss generate the /usr/lib/os-release file on demand using compiled in defaults, which was somewhat inflexible. Now, we ship a JSON file (/usr/lib/os-info.json) containing a description of the OS, the composition of technologies, and the capabilities,” explains developer Ikey Doherty.
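A rough sketch of the idea in Python, using an invented, much-simplified JSON layout (the real /usr/lib/os-info.json schema that moss ships is richer than this, and the field names here are hypothetical):

```python
import json

# Hypothetical, simplified os-info layout -- NOT the real moss schema,
# just an illustration of "richer JSON in, flat os-release out".
os_info = json.loads("""
{
  "identity": {"id": "aerynos", "name": "AerynOS", "version": "2025.03"},
  "website": "https://example.org"
}
""")

def to_os_release(info: dict) -> str:
    """Flatten the JSON description into classic KEY="VALUE" os-release lines."""
    ident = info["identity"]
    fields = {
        "ID": ident["id"],
        "NAME": ident["name"],
        "VERSION_ID": ident["version"],
        "PRETTY_NAME": f'{ident["name"]} {ident["version"]}',
        "HOME_URL": info["website"],
    }
    return "\n".join(f'{key}="{value}"' for key, value in fields.items())

print(to_os_release(os_info))
```

The point of the extra indirection is exactly what Doherty describes: the JSON can carry far more structure than os-release allows, while the flat file is still generated for compatibility.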
    • As previously mentioned, AerynOS promises exciting new features like automating package updates, easier management of rollbacks, disk handling with Rust, fractional scaling enabled by default, and a revamped system installer with support for full disk wipe and dynamic disk partitioning schemes.
    • All these and more will be implemented in a future release. Until then, you can download AerynOS 2025.03 right now from the official website if you want to give this independent distro a try on your personal computer. However, while AerynOS feels like a very solid daily runner, please keep in mind that it’s currently considered alpha quality.

— Play Security Transition Bumper —

Security and Privacy

10 minutes

  • “MyTerms” wants to become the new way we dictate our privacy on the web joe
    • from ArsTechnica
    • Author, journalist, and long-time Internet freedom advocate Doc Searls wants us to stop asking for privacy from websites, services, and AI and start telling these things what we will and will not accept.
    • Draft standard IEEE P7012, which Searls has nicknamed “MyTerms” (akin to “Wi-Fi”), is a Draft Standard for Machine Readable Personal Privacy Terms. Searls writes on his blog that MyTerms has been in the works since 2017, and a fully readable version should be ready later this year, following conference presentations at VRM Day and the Internet Identity Workshop (IIW).
    • The big concept is that you are the first party to each contract you have with online things. The websites, apps, or services you visit are the second party. You arrive with either a pre-set contract you prefer on your device or pick one when you arrive, and it tells the site what information you will and will not offer up for access to content or services. Presumably, a site can work with that contract, modify itself to meet the terms, or perhaps tell you it can’t do that.
    • The easiest way to set your standards, at first, would be to pick something from Customer Commons, which is modeled on the copyleft concept of Creative Commons. Right now, there’s just one example up: #NoStalking, which allows for ads but not with data usable for “targeted advertising or tracking beyond the primary service for which you provided it.” Ad blocking is not addressed in Searls’ post or IEEE summary, but it would presumably exist outside MyTerms—even if MyTerms seems to want to reduce the need for ad blocking.
    • Searls and his group are putting up the standards and letting the browsers, extension-makers, website managers, mobile platforms, and other pieces of the tech stack craft the tools. So long as the human is the first party to a contract, the digital thing is the second, a “disinterested non-profit” provides the roster of agreements, and both sides keep records of what they agreed to, the function can take whatever shape the Internet decides.
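Since IEEE P7012 is still a draft, no wire format exists yet; but conceptually, a machine-readable term record and a site’s decision to honor it might look something like this sketch (every field name here is invented for illustration):

```python
# Purely hypothetical record shape -- the P7012 draft does not define
# this format; names are invented to illustrate the first-party idea.
no_stalking = {
    "term": "#NoStalking",
    "first_party": "user",
    "allows": ["contextual-ads"],
    "forbids": ["targeted-advertising", "tracking-beyond-primary-service"],
}

def site_accepts(term: dict, site_practices: set) -> bool:
    """A site can honor the user's terms only if it does none of the forbidden things."""
    return not site_practices.intersection(term["forbids"])

print(site_accepts(no_stalking, {"contextual-ads"}))        # True: compliant site
print(site_accepts(no_stalking, {"targeted-advertising"}))  # False: non-compliant site
```

Both sides keeping a record of which term was agreed to is the part the standard insists on; everything else is left to browsers and sites to implement.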
    • Searls’ and his group’s standard is a plea for a sensible alternative to the modern reality of accessing web information. It asks us to stop pretending that we’re all reading agreements stuffed full of opaque language, agreeing to thousands upon thousands of words’ worth of terms every day, and willfully offering up information about ourselves. And, of course, it makes people ask whether it is destined to become another version of Do Not Track.
    • Do Not Track was a request, while MyTerms is inherently a demand. Websites and services could, of course, simply refuse to show or provide content and data if a MyTerms agent is present, or they could ask or demand that people set the least restrictive terms.
    • There is nothing inherently wrong with setting up a user-first privacy scheme and pushing for sites and software to do the right thing and abide by it. People may choose to stick to search engines and sites that agree to MyTerms. Time will tell if MyTerms can gain the kind of leverage Searls is aiming for.

— Play Wanderings Transition Bumper —

Bi-Weekly Wanderings

30 minutes (~5-8 mins each)

  • Joe
    • I got the ThinkCentre M700 Tiny that Moss sent me set up as a Proxmox node and attached it as a second node in my network.
    • I installed Proxmox and then I set up another Mint VM and was able to pass both the HDMI with all functionality and the built in audio to the VM as well as using PCI passthrough for the USB ports on the device.
    • I also set the VM to autostart at boot. This makes it so that even if the whole device is brought down and powered back up it will seem like just a normal computer running Mint instead of a Proxmox node. That way my family will not have any issues using it until something messes up.
    • Now I can use the rest of the machine’s resources for other things, while anybody using the machine will just see it as a regular set top box. Which should make it family approved.
    • I also set up an LXC for Wireguard, since this will be in my living room next to the modem. After setting it up in the living room I tested out the speeds versus the router, and while the router itself was getting around 150 Mbps, the Proxmox LXC was getting between 180 and 200 Mbps.
    • I assume that is due to the overhead involved and the fact that the router has less headroom than the ThinkCentre. So now the ThinkCentre is my primary Wireguard and my router will be my backup.
    • I also used Proxmox helper scripts to remake my Pihole instance on it just to get it moved and out of the way onto a device that will probably get restarted less than my rack in the garage.
    • Still need to get out to the shipping place and mail a couple of things to Moss. He mentioned on one of the shows that he needs a sound bar and I have a couple of those sitting around that I got second hand and didn’t like because I had to turn them back on every time I walked back to my computer. As in they shut off after x amount of time of non use. That along with the stand that I printed for him need to go out but I just haven’t put it all together yet.
    • I also 3D printed a new hinge for the pair of Skullcandy Crushers that I gave my son. It was a design that I was never very happy with though so I did go looking for another design that would work better since I also have my set of modified Crushers that need to be fixed. Plus it is always good to have extras on hand as they will break again.
    • While printing this I noticed that I was getting some issues with Cura. Not all of the parts were connected as they should be, and I don’t know why, but the preview was showing some of the supports printing in mid air.
    • Plus my prints have not been coming out right in a while, even at slower speeds. So I have made the switch to PrusaSlicer and that seems to be working fine.
  • Moss
    • I had a good time sharing music with friends in South Carolina last weekend. The trip wasn’t too bad, either on US 25 going there or I-40 coming back. I won’t comment much on I-26 other than that it has been under construction all the years I’ve driven it (which is since 1995). I performed the first live debut of my latest song, “Old”. Audio is available on request.
    • We finally sold our 2011 Chevy Cruze, and got more than the lowball offers. We paid off what was left on our Hyundai, and we are out of debt, with a couple thousand more in savings.
    • My sub job for the high school’s computer systems class is coming up on April 10. I haven’t had any jobs this biweekly.
    • I have yet to get any offers on my PineTab 2. I’d like to get a Surface Pro 4th or 5th Generation for it. The listener who sold me the PT2 has a 3rd gen Surface, but I doubt he really wanted the PT2 back as he has sold a couple since. It works fine, running Plasma Mobile on Debian 12, but weighs double that of my Fire HD8. I can’t put Linux on the Fire tablet without bricking it, so I would like the Surface for a Linux installation. I also have a Fire Oasis tablet for sale, featuring eInk, and could also sell the Fire HD8; I expect to get a Kobo Color eInk by Yule.
    • We have deregistered our Amazon Echo devices and unplugged them. We have also made certain we do not have the Alexa app active on any of our devices. This is because Amazon no longer lets us have any control over what is recorded and saved by Amazon, assuming we ever did. Can we sell them? Who knows. Make me an offer. We have a 3rd Gen Echo with a remote, and a 1st Gen Echo Dot.
  • Dale
    • I took my System76 Pangolin out of retirement because it was my newest laptop. I decided to put it back into retirement after using it for a few months. The left speaker doesn’t work. I was given a quote by System76 of $19 for the part, which I thought was a bit high. Well, that was until they quoted me $30 to $40 for shipping. I deleted the email so I can’t go back and look. Needless to say I didn’t order it; I thought that was a bit ridiculous. They apparently needed to order the part from Clevo.
    • I had one of the Cosmic alphas on it to take a look. After an update a month or so later, my WiFi stopped working. Granted it is an alpha, however, it is their hardware and it is their operating system. So I put Fedora KDE on it. It was fine until the screen started blinking when I would watch a video or have an application in full screen. The final straw was my audio occasionally wouldn’t work. I am not sure if that is a Fedora issue or a hardware issue. I haven’t had time to try another distro to verify it yet.
    • While looking through eBay, I found a 2021 Lenovo T15 i5 11th gen with 8 GB of RAM, a 250 GB SSD, and a 15.6” IPS screen with Intel Iris Xe Graphics. It was $300 so I bought it as a birthday gift. It is almost like new with minor marks on the lid and scratches around the USB ports.
    • My main tower server went offline 2 weeks before I arrived home. I am suspecting a bad PSU. It has an MSI MAG PSU, and I had doubts about it when I bought it 15 months ago. I usually buy Corsair; I have some that are 8 or so years old.
    • I haven’t had time to look at either one due to my birthday the past week and writing the Mobile Networks article for the Innards.
    • The last time I was home I helped a friend install Linux Mint on one of his computers. He finally got annoyed enough with Windows 11 to try Linux. He had tried it years ago, back when there was less hardware support, and he was amazed at the current state of Linux. He knows how to use sudo and how to find and install packages. After a few hours on the phone over the past month, he is now to the point where he doesn’t ask for help unless it is something he couldn’t figure out on his own. He is an amateur radio operator and has been getting into the digital modes on the HF frequencies. He now has both of his HF radios working with Linux Mint; he can control the radios from the computer and use the digital modes. He was happy that the software he used on Windows is available on Linux.

— Play Innards Transition Bumper —

Linux Innards

30 minutes (~5-8 minutes each)

  • Mobile Networks
    • The idea of wireless communication dates back further than you might think; concurrent developments on both sides of the Atlantic were occurring in the early 1900s. I am going to focus mostly on the US so that this Innards section doesn’t get too long.
    • In the late 1930s and early 1940s, AT&T’s Bell Laboratories developed an internal mobile phone network using VHF and UHF police radios, modifying them to use telephone handsets instead of microphones. This way the speaker and microphone were in the handset, just like existing landline phones. They also established the Emergency Radiotelephone Service in New York. After a successful test, they announced the General Mobile Radiotelephone service on the 29th of June, 1945. AT&T applied for FCC authority in Baltimore, Chicago, Milwaukee, Philadelphia, Pittsburgh, St. Louis, Washington D.C., Columbus (Ohio), Denver, Houston, New York City, and Salt Lake City. It was later referred to as MTS (Mobile Telephone Service).
    • The Bell System, which was part of AT&T, and the FCC defined two forms of service: ‘Highway’ and ‘Urban’. The Highway service was intended for use outside of a city by trucks and by vessels on inland waterways. It used the VHF Low frequencies of 35 MHz to receive and 43 MHz to transmit, across 12 channels. The Urban service was intended for use within the radius of a city and could be used by anyone. It used the VHF High frequencies of 152 MHz to transmit and 158 MHz to receive, across 6 channels. Each radio was given a specific ID made up of a maximum of 5 digits. The decoding equipment would listen for alternating pulses of 600 and 1500 Hz tones. When a corresponding tone was received, a stepper motor would advance for each pulse. If the pulses matched the numbers assigned to the radio, a bell would ring, signaling an incoming call. I will include links in ‘Check this out’ for more details on this.
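The selective-ringing scheme can be sketched in a few lines, assuming a simplified model of one pulse burst per digit (the real decoder was an electromechanical stepper advancing on tone pulses, not software):

```python
# Toy model of MTS selective ringing: the base station pages a radio by
# sending one burst of pulses per digit of its (up to 5-digit) ID,
# alternating between 600 Hz and 1500 Hz tones from digit to digit.
def ring_for(assigned_id: str, pulse_bursts: list) -> bool:
    """Ring the bell only if every burst's pulse count matches the next digit."""
    digits = [int(d) for d in assigned_id]
    return digits == pulse_bursts

# Paging radio "375": bursts of 3, 7, and 5 pulses go out over the air.
print(ring_for("375", [3, 7, 5]))   # True: this radio rings
print(ring_for("412", [3, 7, 5]))   # False: every other radio stays silent
```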
    • On the 17th of June, 1946, the first Urban system went on the air in St. Louis, Missouri, and on the 28th of August the first Highway system went on the air in Green Bay, Wisconsin.
    • By 1948, the MTS Urban service was available in 60 cities in the US and Canada. It had 4,000 customers who placed about 117,000 calls per week. The MTS Highway service was available between 85 cities, with 1,900 customers making 36,000 calls per month.
    • This was a half duplex, push-to-talk system: when you were speaking you couldn’t listen, and when you were listening you couldn’t speak. In order to make a call, you would listen to one of the available channels. If no one was using it, you could press the button to transmit, then say your ID, your city of registry, and the number you wanted to call. A radio operator at the base station’s switchboard would manually patch the radio into the landline phone system, very similar to asking the operator on a landline phone. A landline customer could call a mobile phone customer by calling the operator and asking for the mobile phone ID of the person they wished to call.
    • The transmitter and receiver were mounted in the trunk and weighed 80 lbs/36 kg.
    • As the popularity of MTS grew, so did the number of available channels.
    • MTS was mostly used by companies, as the cost was prohibitively expensive for the average person. In 1940s/1950s dollars it was $15 per month plus between 30 and 40 cents per minute. In 2025 dollars that would be $198 per month and between $4 and $5 per minute.
    • IMTS “Improved Mobile Telephone Service” was released in 1965. It was a much needed upgrade over MTS. IMTS added UHF frequencies of 454 – 460 MHz with 12 channels. Customers could also dial their own numbers instead of asking an operator to place the call for them, using the same rotary dial a landline customer would use. Another improvement was duplex communication, meaning both sides of the call could talk at once, with no need for a push-to-talk switch. The size and weight of the equipment were also reduced. Due to regulatory, financial, and other reasons, AT&T limited service to 40,000 customers. In some areas thousands of customers would share 12 radio channels, which could lead to 30 minute waits to make a call.
    • IMTS was also used on passenger rail in separate phone booths.
    • The IMTS system was expanded and upgraded throughout the years. More companies were making mobile phones and their features improved with each generation. I will include a link in the “Check this out” section with pictures of the many iterations of phones from MTS through the IMTS.
    • IMTS was used well into the 1990s in many areas of the country. Though it was less favored once cellular service was created.
    • A less popular service was RCC “Radio Common Carrier”, offered by companies that were not part of the Bell System. It used similar, though not compatible, UHF and VHF frequencies to those IMTS used. These companies carved out their own niche by offering a pager service, and eventually stopped focusing on phone service because competing with the Bell System wasn’t cost effective. Their pager service is still in use today in many parts of the country.
    • In the USSR, Leonid Kupriyanovich of Moscow developed a few experimental pocket-sized radios in 1957-1961. They were 70 g in weight and could fit in a person’s hand. In 1963 the Altai mobile telephone system was introduced in the USSR. It was a radiotelephone that used a fully automated system using UHF/VHF transceivers. It was able to connect to the existing landline phone system. The initial use was for government and emergency services and was eventually made available to the public. Though many couldn’t afford the service let alone a car to put it in.
    • A Bulgarian company, “Radioelektronika”, displayed an automatic mobile phone, including a base station, at Inforga-65, an international exhibition in Moscow. It was based on the system Leonid Kupriyanovich created. One base station connected to the landline phone system could serve 15 customers.
    • One of the main problems with these mobile phones was reliable connectivity. When you drove too far away from the tower, your call would be dropped; when you were in range of the next tower, you could place your call again. Another problem was channel capacity: since each phone used its own frequency pair, you often needed to wait to make or receive a call.
    • The decade of the 70s was the dawn of the cellular phone, though the idea and design dated back to 1947. Douglas H. Ring and W. Rae Young of Bell Labs came up with the idea of hexagonal cells to distribute the RF (radio frequency) signals, with the towers meshed together to relay them. The technology didn’t exist yet to accomplish that. Once computers were used in the switching of phone calls on landline phone systems, they could also be used to manage the RF signals on the cell towers.
    • Motorola was the first company to demonstrate an actual handheld mobile phone, on the 3rd of April 1973. Martin Cooper, a researcher and executive, used the phone to call his rival, Dr. Joel S. Engel at Bell Labs. This prototype weighed 4.4 lbs/2 kilograms and was 9.1″ x 5.1″ x 1.8″ or 23 x 13 x 4.5 centimeters. The battery only allowed for a 30 minute call and took 10 hours to recharge. Many referred to this as “the Brick”. It was released in October of 1983. However, it was not the first commercially available phone. That honor goes to E.F. Johnson and Millicom, Inc.; a subsidiary by the name of Comvik in Sweden released their phone in September of 1981.
    • Finally we are able to discuss all the G’s.
    • During the 1970s AT&T did a lot of the ground work towards the new cellular technology. Unfortunately for them, they couldn’t utilize it. This was due to a 1949 lawsuit from the US government, which ended with a consent decree in 1956. AT&T didn’t follow the agreement and was taken back to court in the mid 1970s. In 1982 they settled the suit, and the US Government forced AT&T to divest their assets. By 1994 they were able to re-enter the cellular phone market.
    • One of the advancements was Signalling System 5, then 6, and currently number 7. The details are too complex to discuss here; they are mentioned due to their tie-in with SMS (Short Message Service), which will come up later.
    • The technology that replaced IMTS was called AMPS “Advanced Mobile Phone System”. You can consider it 1G. You will notice that these new technologies used ideas from the 1940s. They used the 800 MHz – 900 MHz bands, and each phone still used its own frequency; there were just more of them. They achieved this with FDMA “frequency-division multiple access”: multiple users share the band by dividing the bandwidth into separate non-overlapping frequency sub-channels and allocating each sub-channel to a separate user. Each user’s phone sends its signal through its own sub-channel. This ability was due to advancements in integrated circuits.
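A toy illustration of FDMA’s channel carving, with illustrative frequencies (AMPS really did use 30 kHz channels in the 800 MHz band, but this is not its actual channel plan and the caller names are made up):

```python
# Minimal FDMA sketch: slice a band into non-overlapping sub-channels,
# then hand each caller their own slice for the duration of the call.
def fdma_channels(start_mhz: float, end_mhz: float, width_khz: float):
    """Return the list of (low, high) sub-channel edges in MHz."""
    step = width_khz / 1000.0
    chans, f = [], start_mhz
    while f + step <= end_mhz + 1e-9:
        chans.append((round(f, 3), round(f + step, 3)))
        f += step
    return chans

channels = fdma_channels(869.0, 869.12, 30)         # four 30 kHz slices
assignments = dict(zip(["alice", "bob", "carol"], channels))
print(assignments["alice"])   # (869.0, 869.03)
```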
    • AMPS was released in the US on the 13th of October 1983, in Israel in 1986, and in Australia in 1987.
    • AMPS was still analog and unencrypted, just like IMTS. A person could use channels 70 – 83 on a TV to listen to AMPS calls. It also didn’t have data or messaging capability. Roaming wasn’t an issue because there wasn’t any: if you went outside of the service area, you couldn’t use the phone.
    • As previously mentioned, the Brick was the first phone to use this new system. Its commercial name was the DynaTAC 8000X, released on the 6th of March 1983. Later, in 1988, Motorola released a portable phone based on the DynaTAC series of phones. The battery, transceiver, and handset were carried in a nylon or leather bag, so it was referred to as the ‘Bag Phone’.
    • The 1990s saw the creation of 2G and some incompatible standards, yay! The US created D-AMPS, aka Digital AMPS, aka TDMA “Time-Division Multiple Access”. It was a method of allowing phones to share the same RF spectrum, accomplished by giving each phone a time slot: the phone would wait until it was its turn to transmit and receive. There were two bands in use, one for transmit and one for receive.
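Time-slot sharing can be sketched as a simple round-robin schedule (the phone names are invented; D-AMPS carried three calls on each 30 kHz channel):

```python
# Generic TDMA sketch: several phones share one carrier frequency by
# taking turns in fixed time slots that repeat every frame.
def slot_owner(phones: list, frame_slots: int, t: int) -> str:
    """Who may transmit during slot number t?"""
    return phones[t % frame_slots]

phones = ["alice", "bob", "carol"]
schedule = [slot_owner(phones, 3, t) for t in range(6)]
print(schedule)   # ['alice', 'bob', 'carol', 'alice', 'bob', 'carol']
```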
    • The ability to use the phone on a different network was possible but took time to implement. Once the carriers saw a revenue stream, though, they quickly allowed other phones to use their networks.
    • Texting started later in the US, around 1992. In 1993 Nokia created the SMSC (Short Message Service Center), which enabled a store and forward system for text messages. If the receiving phone wasn’t powered on or was in an area without service, the SMSC would wait until the phone was available to receive the message.
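The store-and-forward behavior can be sketched as a toy queue (class and method names invented for illustration; a real SMSC sits on the SS7 network and does far more):

```python
from collections import defaultdict, deque

# Toy store-and-forward SMSC: messages for an unreachable phone are
# queued, then delivered the next time the phone shows up on the network.
class Smsc:
    def __init__(self):
        self.online = set()
        self.queues = defaultdict(deque)
        self.delivered = []

    def submit(self, dest: str, text: str):
        if dest in self.online:
            self.delivered.append((dest, text))   # deliver immediately
        else:
            self.queues[dest].append(text)        # store for later

    def phone_appears(self, dest: str):
        self.online.add(dest)
        while self.queues[dest]:                  # forward the backlog
            self.delivered.append((dest, self.queues[dest].popleft()))

smsc = Smsc()
smsc.submit("555-0100", "hello?")      # phone is off: message is stored
smsc.phone_appears("555-0100")         # phone powers on: message forwarded
print(smsc.delivered)   # [('555-0100', 'hello?')]
```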
    • Texting didn’t really catch on until the beginning of the 2000s. Many factors contributed to this: one was the cost per message, another was that you couldn’t send messages to a different carrier, and typing messages was tedious. The alphabet was divided among the 10 number keys, with most keys having 3 characters each. T9, which used predictive text input, became available in 1995. In 1999 you could finally text someone who was using a different carrier. By the 2000s, some phones had a slide-out QWERTY keyboard or were flip phones with the keyboard on one half and the screen on the other, while others had physical keys on the bottom of the phone body.
      Eventually the per-message fee was rolled into the price of the plan for a specified amount of messages, and finally unlimited use was available.
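The multi-tap entry described above is easy to model; note that keys 7 and 9 carry four letters, which is why only “most” keys have three:

```python
# Multi-tap text entry as on pre-T9 keypads: press a number key
# repeatedly to cycle through its letters ("2" -> a, "22" -> b, "222" -> c).
KEYPAD = {"2": "abc", "3": "def", "4": "ghi", "5": "jkl",
          "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz"}

def taps_for(word: str) -> str:
    """Return the key-press sequence a lowercase word costs, burst per letter."""
    out = []
    for ch in word:
        for key, letters in KEYPAD.items():
            if ch in letters:
                out.append(key * (letters.index(ch) + 1))
    return " ".join(out)

print(taps_for("hi"))   # "44 444" -- two letters, five presses
```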
    • The benefit of TDMA was that it could use the previous AMPS frequency bands and fall back on the previous FDMA technology. The downside was that switching between AMPS and TDMA would result in the call being dropped, due to AMPS using analog and TDMA using digital signals. It took the cellular companies time to upgrade each tower to TDMA.
    • In 1995, IS-95 (Interim Standard 95) introduced CDMA “Code-Division Multiple Access”. It was developed by Qualcomm and has multiple transmitters using the entire frequency range of the band, with each transmitter encoded with a unique code.
    • The TIA (Telecommunications Industry Association) later adopted IS-95, which was then branded cdmaONE.
    • The benefits of CDMA over TDMA’s time-slot sharing are:
      • All the phones can transmit at the same time; they are identified at the tower by their unique codes and routed accordingly.
      • This also aided in calls being transferred more efficiently to the next tower, which meant far fewer dropped calls.
      • TDMA had a problem when a tower didn’t have any time slots available to transfer the call into, which led to a dropped call.
      • As with previous generations, the identity still remained with the phone.
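The code-division trick can be shown in miniature with orthogonal spreading codes; this toy uses 4-chip Walsh codes, whereas IS-95 used 64-chip codes:

```python
# CDMA in miniature: each transmitter spreads its bit with its own
# orthogonal code; the tower recovers each bit by correlating the summed
# on-air signal with that transmitter's code.
CODE_A = [1, 1, 1, 1]      # 4-chip Walsh codes (orthogonal to each other)
CODE_B = [1, -1, 1, -1]

def spread(bit: int, code: list) -> list:
    sign = 1 if bit else -1                      # map bit -> +/-1
    return [sign * c for c in code]

def despread(signal: list, code: list) -> int:
    corr = sum(s * c for s, c in zip(signal, code))
    return 1 if corr > 0 else 0

# Both phones transmit at once; the air just adds their chips together.
on_air = [a + b for a, b in zip(spread(1, CODE_A), spread(0, CODE_B))]
print(despread(on_air, CODE_A), despread(on_air, CODE_B))   # 1 0
```

Because the codes are orthogonal, each correlation cancels out the other phone’s contribution, which is why everyone can use the whole band simultaneously.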
    • In 1995 Sprint brought GSM to the US. It used the 1900 MHz band and operated around the Baltimore, Maryland / Washington, D.C. area. The service only lasted until 1999 and was sold twice before it was acquired by T-Mobile. This will be important later on, in the early 2000s and for the creation of LTE.
    • Another quick mention is iDEN (Integrated Digital Enhanced Network), created by Motorola in 1991. When Sprint acquired Nextel, they inherited the technology. In short, it offered push-to-talk, like you would use on a handheld two-way radio, aka a walkie talkie. It failed for numerous reasons.
    • In 1983 CEPT (European Conference of Postal and Telecommunications Administrations) began discussing standards for a new digital voice system.
    • By 1987 13 European countries agreed upon the first new communications standard. It was called GSM “Global System for Mobile communications”. GSM used a modified TDMA.
    • The differences are as follows.
      • They added three more bands capable of more bandwidth, further increasing the utilization of each frequency.
      • GSM employed the use of codecs to compress and decompress the signals.
      • The addition of a SIM (Subscriber Identity Module), which contains a phone book and the identity information for the subscriber.
        This allows a person to use different phones. The first cards were the size of a credit/debit card; current SIMs are about the size of an adult fingernail.
      • Some phones had the ability to switch between two SIMs.
        SIM cards were in use in other technologies prior to phones, like set top boxes for satellite or cable TV.
      • The ability to use the phone on a different network, referred to as roaming. As I mentioned before, this became a financial benefit to the phone carriers. On both sides of the pond, roaming became part of the service offering; international roaming was still something a customer would need to pay extra for.
    • GSM was available in 1991 and was additionally adopted as the standard used by many telephone companies in Asia, South America, and Africa.
    • European telephone operators didn’t adopt CDMA; instead they continued improving GSM. SMS was developed in 1984 but wasn’t widely commercially available until 1994, though Nokia had phones in 1993 that could send SMS. SMS uses features of the SS7 protocols and allows 160 character messages to be sent.
    • Around 2001, GPRS (General Packet Radio Service) arrived as a data mode for GSM using packet switching, which is how computer networks function. The APN (Access Point Name) was created to enable access to the Internet via the carrier’s network. GPRS used the unused TDMA channels GSM implemented and is considered 2.5G. It has a maximum of 115 kbps, though the typical speed depended on the network; I have seen estimates of 35 to 50 kbps. Due to the better implementation of TDMA and wider bandwidth, voice and data could be used at the same time, though this also depended on the carrier.
    • EDGE (Enhanced Data rates for GSM Evolution) aka 2.75G began in 2003. It was a further enhancement of mobile data on GSM. The bandwidth was up to 473 kbit/s and 236 kbit/s was typical. By 2008 EDGE was available in 147 countries.
    • Evolved EDGE, aka EDGE Evolution, is considered 2.875G. I couldn’t pin down a date of release. The latency was reduced and the speed increased to a max of 1 Mbit/s, with a typical speed up to 600 kbit/s. By May of 2013, there were 604 GSM networks using EDGE in 213 countries.
    • In 1998, 3GPP (3rd Generation Partnership Project) and 3GPP2 (3rd Generation Partnership Project 2) were formed. The former was for GSM and its replacement, UMTS; it was made up of 7 organizations from Japan, the USA, China, India, South Korea, and many European countries. The latter was for CDMA and its future advancement; it was made up of organizations in Japan, North America, and South Korea.
    • At the beginning of the 2000s we still had 2G service with slow data modes in many of the countries using GSM. The mid 2000s saw the introduction of 3G and data modes appearing in the US, most commonly referred to as ‘Mobile Broadband’. Another feature of 3G was providing mobile Internet to computers, either by tethering with a cable attached between the computer and phone, or with thumb-drive-sized USB modems. These were available for 2G and 3G with comparable speed to what you would have on a phone.
    • The ability to have Internet access spawned a new category of devices and briefly resurrected the PDA (Personal Digital Assistant). The new devices were tablets, which were basically phones with a bigger screen.
    • Another thing I want to touch on briefly is web browsing on a phone. The early phones had a text/graphical interface similar to Gopher, which was popular on computers during the 80s and 90s. The interface was called WAP (Wireless Application Protocol) and was released in 1999. It wasn’t widely adopted and had many issues: it wasn’t compatible with existing web standards and needed its own markup language, WML, which was like HTML. A gateway was also required to access the content; it interfaced with the HTTP and HTTPS protocols.

      WAP 2.0 was released in 2002 and was completely re-engineered. It used a custom version of XHTML with HTTP compatibility, XHTML MP, removing the need for the gateway to broker the connection; the gateway was instead tasked with providing additional information such as the phone’s details.
      Some carriers started providing services for WAP 2.0 in 2003 – 2004, which allowed it to gain more popularity. Though with the ability to tether or use a USB modem on a computer, it still wasn’t widely adopted. It stuck around until 2013, which is a lot longer than it should have.
    • CDMA2000, aka C2K or IMT-MC (IMT Multi-Carrier), was the name of the 3G standard using CDMA. It was designed to be backwards compatible with the 2G IS-95 standard. For data it used CDMA2000 1x, aka 1xRTT or 1x.
      It has a maximum speed of 153 kbit/s with an average speed between 80 and 100 kbit/s.
    • In 1999 Qualcomm developed 1xEV-DO (1x Evolution-Data Only), later changed to mean Evolution-Data Optimized, as some didn’t like the ‘Only’ definition. Many carriers dropped the 1x from the name and referred to it as EV-DO. It was a much faster data mode than the previous 1x.
    • EV-DO used CDMA and TDM (Time Division Multiplexing); for the sake of brevity I will not describe how it works, but it was used to maximize the data throughput. The initial max speed was 2.4 Mbit/s, and 3.1 Mbit/s for revision A. The average speed was about 1.8 Mbit/s.
    • EV-DO revision B had much faster rates up to 4.9 Mbit/s. Some carriers opted to bond multiple channels together for a peak rate of 14.7 Mbit/s.
    • EV-DO revision C was branded as UMB (Ultra Mobile Broadband) in 2006. It was proposed as a standard in 2007 into 2008, called EV-DV (Evolution Data and Voice), with a proposed data rate of 280 Mbit/s.
      Despite CDMA being well suited for data and voice, using both at the same time was not possible, since CDMA used all the channels at once. So you were either all mobile data or all voice.
      Despite the proposed revision C, Qualcomm ended the development of EV-DV in November of 2008. It was never approved as a standard and none of the carriers used it.
    • The next iteration was SVDO (Simultaneous Voice and EV-DO), used with phones supporting the CDMA2000 standard. It was released in 2011, though it was short lived. It did allow for simultaneous voice and data on CDMA phones; prior to that, only GSM phones could do that. Once LTE was available, SVDO was only of use when the phone roamed into a 3G area, since LTE allowed for calls and data at the same time.
    • Qualcomm decided to end EV-DO development and instead focus on LTE (Long Term Evolution).
    • In 2002, UMTS (Universal Mobile Telecommunications System) arrived as the 3G replacement for GSM. It used W-CDMA (Wideband Code-Division Multiple Access) and added MMS (Multimedia Messaging Service), which enabled files to be sent through the messaging system. It was also available on CDMA phones in the US. MMS also enabled messages longer than 160 characters; if a message exceeded 160, it would be converted from SMS to MMS.
    • Since GSM was adapted from TDMA, its time-slot way of communicating was too slow for data. CDMA in the US was already on its 2nd generation of mobile data, EV-DO, which was 15 times faster than GSM’s EDGE.
      As previously mentioned, W-CDMA builds on the same underlying CDMA technique used in the US. One big advantage is its use of two 5 MHz wide channels, compared to CDMA2000’s multiple 1.25 MHz channels.

    • 3GPP released HSPA (High Speed Packet Access) in 2005. It utilizes two protocols: HSDPA (High Speed Downlink Packet Access) and HSUPA (High Speed Uplink Packet Access). It provided a peak of 2 Mbit/s, but in real-world use it was more like 384 kbit/s, roughly 3 times faster than CDMA2000’s initial 1x. This was considered 3.5G.
    • The 3.75G HSPA+ (Evolved High Speed Packet Access) was released in 2008, with further adoption in 2010. HSPA+ left CDMA2000’s EV-DO Revision B in the dust with a max throughput of 337 Mbit/s, though again, these are lab-test numbers; actual numbers were closer to EV-DO Revision B’s. One thing to keep in mind is that you could have simultaneous voice and data, thanks to the wider bandwidth of W-CDMA.
    • LTE (Long Term Evolution) is considered 3.95G; it is yet another transitional technology toward 4G. Category numbers define the speeds and capabilities, and devices use MIMO (Multiple Input Multiple Output): multiple antennas to increase the amount of data that can be transmitted and received at once. The categories are sequential, but a higher number doesn’t automatically mean a faster download; it depends on what the device’s hardware supports. For example, you wouldn’t use a category that calls for 4×4 MIMO when the device only has 2 antennas.
    • LTE does not have a requirement for download / upload speed.
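The MIMO point above can be sketched in a few lines. This is a deliberately simplified model, not real radio math: the function names and the per-stream rate are made up for illustration, but the bound (streams limited by the smaller antenna array) is the idea the notes describe:

```python
# Simplified sketch of why antenna count matters for MIMO (not a real
# radio model): the number of parallel spatial streams is limited by the
# smaller of the two antenna counts, and peak throughput scales roughly
# with the stream count.

def spatial_streams(tower_antennas: int, device_antennas: int) -> int:
    """Usable parallel streams are bounded by the smaller antenna array."""
    return min(tower_antennas, device_antennas)

def peak_rate(per_stream_mbit: float, tower: int, device: int) -> float:
    """Rough peak rate: per-stream rate times the usable stream count."""
    return per_stream_mbit * spatial_streams(tower, device)

# A 4x4-MIMO category buys nothing if the device only has 2 antennas:
print(peak_rate(37.5, 4, 2))  # 75.0  -> behaves like a 2x2 link
print(peak_rate(37.5, 4, 4))  # 150.0 -> double the rate with 4 antennas
```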
    • You may or may not have seen this coming, but LTE is based on the UMTS/HSPA standard. The improvements were finalized in December 2008, and the first public LTE service was available in Oslo, Norway and Stockholm, Sweden a year later. It made it to North America in September 2010.
    • LTE Category 4 provides a peak rate of 150 Mbit/s down and 51 Mbit/s up with a latency of less than 5 ms. It supports 1.4 MHz to 20 MHz channels, as well as the frequency-division duplexing and time-division duplexing that CDMA and TDMA used; faster ICs allow these to be used more efficiently. LTE aims to replace the mixed circuit-switched and packet-switched approach with an all-IP method.
    • Other benefits are:
      • Support for movement at up to 310 MPH / 500 km/h, depending on the frequency band.
      • Support for a higher density of cell sites, such as femtocells and picocells.
      • Backwards compatibility with GSM / UMTS / CDMA2000. For example, if a call is started on LTE and roams into a CDMA2000 area, the connection continues without disruption.
      • VoLTE allows calls to be placed over the data connection instead of the traditional circuit-switched voice system.
    • LTE Advanced Category 6 was rolled out between 2011 and 2013. It increased the peak download to 300 Mbit/s, with a peak upload of 51 Mbit/s.
    • For completeness’ sake: Mobile WiMAX (IEEE 802.16e) was a competing standard ratified in 2005. Despite being a few years ahead of LTE, carriers didn’t think fixed wireless was going to be as popular and instead waited for LTE to serve the mobile market. WiMAX is still in use today, though for fixed devices rather than mobile. Sprint was one of the few carriers to use it for mobile service, starting in 2008; Sprint branded it as 4G even though it doesn’t meet the 4G requirements, and stopped using it in late 2015.
    • 4G’s release date was around 2012. It has the following requirements:
      • An all IP packet switched network
      • A peak data rate of up to 100 Mbit/s for mobile use and 1 Gbit/s for stationary / nomadic access.
      • Dynamically share and use network resources to support more simultaneous users per cell.
      • Channel bandwidths of 5 to 20 MHz up to 40 MHz.
      • Smooth handovers to previous-generation networks.
    • So you can see that LTE and 4G have more in common than they don’t.
    • 5G was released in 2019. The main difference from 4G is the capability of a peak speed of 10 Gbit/s, with even lower latency than 4G.
      5G can use additional radios depending on the type of service needed, including millimeter waves, which are used for microwave transmission to fixed locations and for mobile service in a smaller, more confined area than a typical cell tower. There are three tiers:
      • Low-band covers an area similar to a 4G tower’s, using 600 to 900 MHz.
      • Mid-band uses microwaves at 1.7 to 4.7 GHz, allowing speeds up to 900 Mbit/s. These microwaves can’t easily pass through buildings or trees, so they are meant for fixed locations.
      • High-band uses 24 to 47 GHz, allowing speeds in the gigabits-per-second range, for fixed locations with high-speed demands.
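To put the whole generational march in perspective, here is a rough back-of-the-envelope comparison using only the peak rates quoted in these notes. Real-world averages were far lower, and the labels are just shorthand; this only illustrates the scale of the jumps:

```python
# Rough comparison: how long would a 100 MB download take at each
# generation's quoted *peak* rate? Rates (in Mbit/s) are the ones cited
# in the show notes above; real-world averages were much lower.

PEAK_MBIT = {
    "2G CDMA2000 1x":  0.153,     # 153 kbit/s
    "3G EV-DO Rev. A": 3.1,
    "3.95G LTE Cat 4": 150.0,
    "4G LTE-A Cat 6":  300.0,
    "5G":              10_000.0,  # 10 Gbit/s
}

FILE_MBIT = 100 * 8  # 100 MB expressed in megabits

for gen, rate in PEAK_MBIT.items():
    print(f"{gen:16s} {FILE_MBIT / rate:10.2f} s")
```

At 1x speeds that 100 MB file takes well over an hour; at 5G's theoretical peak it is under a tenth of a second.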

— Play Bodhi Corner Transition Bumper —

Bodhi Corner

  • We’re starting work on a lot of things, including a documentation overhaul. We have had a pretty good influx of new users/community members.
  • Work also continues on Moksha and module translations; we have new translations for Italian, Polish, Hindi, Marathi and Malayalam in progress.
  • Robert has created a script for users who want to add Moksha to Debian —
  • Robert started work on packaging for BL8 on Ubuntu 24.04 in addition to the ongoing work of BL8 on Debian (Trixie). Right now, the repo version is set to Trixie, but work on the Noble repo will begin shortly.
  • Stefan has started work on a new slideshow which runs during installation.
  • Robert and Stefan have done so much work improving ePhoto that Robert felt he had to add entice to the repos for BL 7 and 8. Entice is an even simpler photo viewer which uses EFL.
  • Bodhi Linux is looking for sponsorship. We would also appreciate it if any of you would regularly visit the Bodhi Linux page on Distrowatch; our ranking has fallen to #60, and we usually are around #50. Of course, if we can get higher than that, we would love it… Not every distro has rabid followers like AntiX…
  • And finally, in other news, AV_Linux started work on a Moksha version.

— Play Vibrations Transition Bumper —

Vibrations from the Ether

20 minutes (~5 minutes each)

— Play Check This Transition Bumper —

Check This Out

10 minutes

Housekeeping & Announcements

  • Thank you for listening to this episode of mintCast!
  • If you see something that you think we should be talking about, tell us!

Send us email at [email protected]

Join us live on Youtube

Post at the mintCast subreddit

Chat with us on Discord and Telegram

Or post directly at https://mintcast.org

Wrap-up

Before we leave, we want to make sure to acknowledge some of the people who make mintCast possible:

  • Bill for our audio editing and for hosting the server which runs our website, website maintenance, and the NextCloud server on which we host our show notes and raw audio
  • Archive.org for hosting our audio files
  • Hobstar for our logo, initrd for the animated Discord logo
  • Londoner for our time syncs and various other contributions
  • The Linux Mint development team for the fine distro we love to talk about <Thanks, Clem … and co!>

— Play Closing Music and Standard Outro —

Linux Mint

The distribution that spawned a podcast. Support us by supporting them. Donate here.

Archive.org

We currently host our podcast at archive.org. Support us by supporting them. Donate here.

Audacity

They’ve made post-production of our podcast possible. Support us by supporting them. Contribute here.


This work is licensed under CC BY-SA 4.0
