Episode 431 Show Notes

Welcome to mintCast

the Podcast by the Linux Mint Community for All Users of Linux

This is Episode 431!


Recorded on Sunday, February 18, 2024

Nuked and paved, I’m Joe; hanging in there, I’m Moss; being dizzy, I’m Majid; …Eric

— Play Standard Intro —

  • First up in the news: Linux Mint Monthly News, Google Is Dying News, Samsung defends GenAI, India bans ProtonMail, NGINX core dev quits and forks;
  • In security and privacy: Critical bootkit vulnerability affects most Linux distros, BitLocker smashed in minutes with pico SBC, and Mozilla helps you wipe your data;
  • Then in our Wanderings: Joe; Bill; Moss; Majid, who finally “gets” Arch; and Eric
  • In our Innards section:
  • And finally, the feedback and a couple of suggestions

— Play News Transition Bumper —

The News

20 minutes

  • Linux Mint Monthly News
    • from Linux Mint blog
    • Linux Mint 20.3
      • The latest version of the upgrade tool, which features improved detection and handling of orphan/foreign packages, was backported to Linux Mint 20.3.
      • This makes the upgrade path from 20.3 to 21 much easier than before.
      • The stable release for Linux Mint 21.3 was announced early this month.
      • The upgrade path from 21, 21.1 and 21.2 was opened shortly after.
      • We fixed a few regressions which had gone through the BETA phase unnoticed:
        • mintstick couldn’t handle space characters in the file path
        • Cinnamon screensaver showed a black screen in HiDPI when all windows were minimized or no windows were open
        • When multiple users were logged in, Cinnamon session only showed a dialog window on quit/logout for the first user
    • Wayland
      • It came to our attention that Wayland sessions in Linux Mint 21.3 could potentially affect Xorg sessions and trigger specific issues.
      • Until these issues are fixed, we’d like to raise awareness on this and remind you that Wayland support in 21.3 is experimental. Although it is possible to switch from Wayland to Xorg with a simple logout, we recommend a reboot.
    • Edge
      • The EDGE ISO for Linux Mint 21.3 was released with a 6.5 kernel.
      • The following regressions were also fixed in this ISO:
        • Support for loopback.cfg
        • Invalid /EFI/boot/bootx64.efi file (which broke Rufus support)
      • Note that Rufus 4.4 also added a workaround, so it now works with LMDE 6 and Linux Mint 21.3.
      • For more info: https://github.com/linuxmint/linuxmint/issues/622
    • LMDE 6
      • All the new features from Linux Mint 21.3 were backported to LMDE 6.
    • Mint 22
      • The codename for Linux Mint 22 was chosen. It will be “Wilma”.
      • Here’s a sneak peek at one of its features…
      • Its Cinnamon edition will include a new Nemo Actions Organizer:
      • You’ll be able to organize your Nemo actions in menus and submenus.
      • This tool will support nested submenus, menu icons, separators and drag-and-drop.
      • You’ll also be able to rename actions and override their icon.
  • Google will no longer back up the Internet: Cached webpages are dead
    • from ArsTechnica
    • Google will no longer be keeping a backup of the entire Internet. Google Search’s “cached” links have long been an alternative way to load a website that was down or had changed, but now the company is killing them off. Google “Search Liaison” Danny Sullivan confirmed the feature removal in an X post, saying the feature “was meant for helping people access pages when way back, you often couldn’t depend on a page loading. These days, things have greatly improved. So, it was decided to retire it.”
    • The feature has been appearing and disappearing for some people since December, and currently, we don’t see any cache links in Google Search. For now, you can still build your own cache links even without the button, just by going to “https://webcache.googleusercontent.com/search?q=cache:” plus a website URL, or by typing “cache:” plus a URL into Google Search. For now, the cached version of Ars Technica seems to still work. All of Google’s support pages about cached sites have been taken down.
    • Cached links used to live under the drop-down menu next to every search result on Google’s page. As the Google web crawler scoured the Internet for new and updated webpages, it would also save a copy of whatever it was seeing. That quickly led to Google having a backup of basically the entire Internet, using what was probably an uncountable number of petabytes of data. Google is in the era of cost savings now, so assuming Google can just start deleting cache data, it can probably free up a lot of resources.
    • Cached links were great if the website was down or quickly changed, but they also gave some insight over the years about how the “Google Bot” web crawler views the web. The pages aren’t necessarily rendered like how you would expect. In the past, pages were text-only, but slowly the Google Bot learned about media and other rich data like javascript (there are a ton of specialized Google Bots now). A lot of Google Bot details are shrouded in secrecy to hide from SEO spammers, but you could learn a lot by investigating what cached pages look like. In 2020, Google switched to mobile-by-default, so for instance, if you visit that cached Ars link from earlier, you get the mobile site. If you run a website and want to learn more about what a site looks like to a Google Bot, you can still do that, though only for your own site, from the Search Console.
    • The death of cached sites will mean the Internet Archive has a larger burden of archiving and tracking changes on the world’s webpages.
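The “cache:” URL trick described above can be sketched in a couple of lines of shell (assuming the webcache.googleusercontent.com endpoint still responds, which may change as the feature is retired):

```shell
# Build a Google cache link by hand, per the article's "cache:" prefix trick.
url="https://arstechnica.com/"
cache_link="https://webcache.googleusercontent.com/search?q=cache:${url}"
echo "$cache_link"
```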
  • ‘There is no such thing as a real picture’: Samsung defends AI photo editing on Galaxy S24
    • from TechRadar
    • Like most technology conferences in recent months, Samsung’s latest Galaxy Unpacked event was dominated by conversations surrounding AI. From two-way call translation to gesture-based search, the Samsung Galaxy S24 launched with several AI-powered tricks up its sleeve – but one particular feature is already raising eyebrows.
    • Set to debut on the Galaxy S24 and its siblings, Generative Edit will allow users to artificially erase, recompose and remaster parts of an image in a bid to achieve photographic perfection. This isn’t a new concept, and any edits made using this generative AI tech will result in a watermark and metadata changes. But the seamlessness with which the Galaxy S24 enables such edits has understandably left some Unpacked-goers concerned.
    • Samsung, however, is confident that its new Generative Edit feature is ethical, desirable and even necessary in today’s misinformation-filled world. In a revealing interview with TechRadar, Samsung’s Head of Customer Experience, Patrick Chomet, defended the company’s position on AI and its implications.
    • “There was a very nice video by Marques Brownlee last year on the moon picture,” Chomet told us. “Everyone was like, ‘Is it fake? Is it not fake?’ There was a debate around what constitutes a real picture. And actually, there is no such thing as a real picture. As soon as you have sensors to capture something, you reproduce [what you’re seeing], and it doesn’t mean anything. There is no real picture. […] You can try to define a real picture by saying, ‘I took that picture’, but if you used AI to optimize the zoom, the autofocus, the scene – is it real? Or is it all filters? There is no real picture, full stop.”
    • “But still, questions around authenticity are very important,” Chomet continued, “and we [Samsung] go about this by recognizing two consumer needs; two different customer intentions. Neither of them are new, but generative AI will accelerate one of them.
    • “One intention is wanting to capture the moment – wanting to take a picture that’s as accurate and complete as possible. To do that, we use a lot of AI filtering, modification and optimization to erase shadows, reflections and so on. But we are true to the user’s intention, which was to capture that moment.
    • “Then there is another intention, which is wanting to make something. When people go on Instagram, they add a bunch of funky black and white stuff – they create a new reality. Their intention isn’t to recreate reality, it’s to make something new. So [Generative Edit] isn’t a totally new idea. Generative AI tools will accelerate that intention exponentially in the next few years […] so there is a big customer need to distinguish between the real and the new. That’s why our Generative Edit feature adds a watermark and edits the metadata, and we’re working with regulatory bodies to ensure people understand the difference.”
    • On the subject of AI regulation, Chomet said that Samsung “is very aligned with European regulations on AI,” noting that governments are right to express early concerns around the potential implications of widespread AI use.
    • “The industry needs to be responsible and it needs to be regulated,” added Chomet, noting that Samsung is actively working on that. “Our new technology is amazing and powerful – but like anything, it can be used in good and bad ways. So, it’s appropriate to think deeply about the bad ways.”
    • As for how Generative Edit will end up being used on Samsung’s new Galaxy phones, only time will tell. Perhaps the feature will simply help average smartphone users (i.e. those unfamiliar with Photoshop) get the photos they really want, rather than facilitate mass photo fakery. Indeed, it remains to be seen whether generative AI tech as a whole will be a benefit or a hindrance to society as we know it.
  • Nginx core developer quits project in security dispute, starts “freenginx” fork
    • from ArsTechnica
    • A core developer of Nginx, currently the world’s most popular web server, has quit the project, stating that he no longer sees it as “a free and open source project… for the public good.” His fork, freenginx, is “going to be run by developers, and not corporate entities,” writes Maxim Dounin, and will be “free from arbitrary corporate actions.”
    • Dounin is one of the earliest and still most active coders on the open source Nginx project and one of the first employees of Nginx, Inc., a company created in 2011 to commercially support the steadily growing web server. Nginx is now used on roughly one-third of the world’s web servers, ahead of Apache.
    • Nginx Inc. was acquired by Seattle-based networking firm F5 in 2019. Later that year, two of Nginx’s leaders, Maxim Konovalov and Igor Sysoev, were detained and interrogated in their homes by armed Russian state agents. Sysoev’s former employer, Internet firm Rambler, claimed that it owned the rights to Nginx’s source code, as it was developed during Sysoev’s tenure at Rambler (where Dounin also worked). While the criminal charges and rights do not appear to have materialized, the implications of a Russian company’s intrusion into a popular open source piece of the web’s infrastructure caused some alarm.
    • Sysoev left F5 and the Nginx project in early 2022. Later that year, due to the Russian invasion of Ukraine, F5 discontinued all operations in Russia. Some Nginx developers still in Russia formed Angie, developed in large part to support Nginx users in Russia. Dounin technically stopped working for F5 at that point, too, but maintained his role in Nginx “as a volunteer,” according to Dounin’s mailing list post.
    • Dounin writes in his announcement that “new non-technical management” at F5 “recently decided that they know better how to run open source projects. In particular, they decided to interfere with security policy nginx uses for years, ignoring both the policy and developers’ position.” While it was “quite understandable,” given their ownership, Dounin wrote that it means he was “no longer able to control which changes are made in nginx,” hence his departure and fork.
    • Comments on Hacker News, including one by a purported employee of F5, suggest Dounin opposed the assigning of published CVEs (Common Vulnerabilities and Exposures) to bugs in aspects of QUIC. While QUIC is not enabled in the most default Nginx setup, it is included in the application’s “mainline” version, which, according to the Nginx documentation, contains “the latest features and bug fixes and is always up to date.”
    • The commenter from F5, MZMegaZone, seemingly the principal security engineer at F5, notes that “a number of customers/users have the code in production, experimental or not” and adds that F5 is a CVE Numbering Authority (CNA).
    • Dounin expanded on F5’s actions in a later mail response.
      • The most recent “security advisory” was released despite the fact that the particular bug in the experimental HTTP/3 code is expected to be fixed as a normal bug as per the existing security policy, and all the developers, including me, agree on this.
      • And, while the particular action isn’t exactly very bad, the approach in general is quite problematic.
    • Asked about the potential for name confusion and trademark issues, Dounin wrote in another response about trademark concerns: “I believe [they] do not apply here, but IANAL [I am not a lawyer],” and “the name aligns well with project goals.”
    • MZMegaZone confirmed the relationship between security disclosures and Dounin’s departure. “All I know is he objected to our decision to assign CVEs, was not happy that we did, and the timing does not appear coincidental,” MZMegaZone wrote on Hacker News. He later added, “I don’t think having the CVEs should reflect poorly on NGINX or Maxim. I’m sorry he feels the way he does, but I hold no ill will toward him and wish him success, seriously.”
    • Dounin, reached by email, pointed to his mailing list responses for clarification. He added, “Essentially, F5 ignored both the project policy and joint developers’ position, without any discussion.”
    • MegaZone wrote to Ars (noting that he only spoke for himself and not F5), stating, “It’s an unfortunate situation, but I think we did the right thing for the users in assigning CVEs and following public disclosure practices. Rational people can disagree and I respect Maxim has his own view on the matter, and hold no ill will toward him or the fork. I wish it hadn’t come to this, but I respect the choice was his to make.”
    • A representative for F5 wrote to Ars: “F5 is committed to delivering successful open source projects that require a large and diverse community of contributors, as well as applying rigorous industry standards for assigning and scoring identified vulnerabilities. We believe this is the right approach for developing highly secure software for our customers and community, and we encourage the open source community to join us in this effort.”
  • Indian government moves to ban ProtonMail after bomb threat
    • https://www.androidcentral.com/apps-software/indian-government-moves-to-ban-protonmail-after-bomb-threat
    • The Indian government’s Information Technology ministry issued an order to block ProtonMail in the region.
    • The move comes after a bomb threat was sent to schools in Chennai via a ProtonMail account.
    • ProtonMail is still active in the country as of writing, but it remains to be seen if that will continue to be the case.
    • ProtonMail is the best choice if you want an end-to-end encrypted email platform, and the nature of the service means it is inevitably used by bad actors. On February 8, a bomb threat was sent to 13 schools in Chennai, a city in the state of Tamil Nadu in southern India. The threat turned out to be a hoax, and the Tamil Nadu police found that the email was sent via a ProtonMail account.
    • Unable to trace the IP address of the sender and failing to get assistance from Interpol, the Tamil Nadu police put in a request to India’s Ministry of Electronics and Information Technology to block access to ProtonMail within the country, according to Hindustan Times. That request was granted today, with the government authority issuing an order to block the service in the region. The enforcement will be carried out by the Department of Telecommunications, which will likely entail delisting ProtonMail from the App Store and Play Store. That said, the website is still working as of writing, and the app is listed on both storefronts. This isn’t the first instance where the Indian government went after the Swiss-based Proton AG. Proton VPN pulled its servers out of the country following a controversial 2022 ruling by the Indian government mandating providers to maintain usage logs.
    • As for the ProtonMail ban, Proton AG sent a statement to Hindustan Times that it was working with the government over the issue. “We are currently working to resolve this situation and are investigating how we can best work together with the Indian authorities to do so. We understand the urgency of the situation and are completely clear that our services are not to be used for illegal purposes. We routinely remove users who are found to be doing so and are willing to cooperate wherever possible within international cooperation agreements.”
    • The issue is that Proton AG didn’t hand over the IP address to Indian authorities. As the email provider told Hindustan Times, it cannot do that under Swiss law: “Proton cannot answer directly to foreign law enforcement authorities, but Swiss authorities may assist foreign authorities with requests, provided they are valid under international assistance procedures and determined to be in compliance with Swiss law.”
    • The government’s move is in line with a recent policy that has targeted services with end-to-end encryption. A host of encrypted apps were blocked at the start of last year — including the likes of Threema, Element, Wickr Me, and Safeswiss — and the government is going after WhatsApp to disable end-to-end encryption, although it isn’t clear how that would even work.
    • Proton AG clearly agrees with that sentiment, as it said this to HT: “We condemn a potential block as a misguided measure that only serves to harm ordinary people. Blocking access to Proton is an ineffective and inappropriate response to the reported threats. It will not prevent cybercriminals from sending threats with another email service and will not be effective if the perpetrators are located outside of India.”

— Play Security Transition Bumper —

Security and Privacy

10 minutes

  • Critical vulnerability affecting most Linux distros allows for bootkits
    • from ArsTechnica
    • Linux developers are in the process of patching a high-severity vulnerability that, in certain cases, allows the installation of malware that runs at the firmware level, giving infections access to the deepest parts of a device where they’re hard to detect or remove.
    • The vulnerability resides in shim, which in the context of Linux is a small component that runs in the firmware early in the boot process before the operating system has started. More specifically, the shim accompanying virtually all Linux distributions plays a crucial role in secure boot, a protection built into most modern computing devices to ensure every link in the boot process comes from a verified, trusted supplier. Successful exploitation of the vulnerability allows attackers to neutralize this mechanism by executing malicious firmware at the earliest stages of the boot process before the Unified Extensible Firmware Interface firmware has loaded and handed off control to the operating system.
    • The vulnerability, tracked as CVE-2023-40547, is what’s known as a buffer overflow, a coding bug that allows attackers to execute code of their choice. It resides in a part of the shim that processes booting up from a central server on a network using the same HTTP that the web is based on. Attackers can exploit the code-execution vulnerability in various scenarios, virtually all following some form of successful compromise of either the targeted device or the server or network the device boots from.
    • “An attacker would need to be able to coerce a system into booting from HTTP if it’s not already doing so, and either be in a position to run the HTTP server in question or MITM traffic to it,” Matthew Garrett, a security developer and one of the original shim authors, wrote in an online interview. “An attacker (physically present or who has already compromised root on the system) could use this to subvert secure boot (add a new boot entry to a server they control, compromise shim, execute arbitrary code).”
    • Stated differently, these scenarios include:
      • Acquiring the ability to compromise a server or perform an adversary-in-the-middle impersonation of it to target a device that’s already configured to boot using HTTP
      • Already having physical access to a device or gaining administrative control by exploiting a separate vulnerability.
    • While these hurdles are steep, they’re by no means impossible, particularly the ability to compromise or impersonate a server that communicates with devices over HTTP, which is unencrypted and requires no authentication. These particular scenarios could prove useful if an attacker has already gained some level of access inside a network and is looking to take control of connected end-user devices. These scenarios, however, are largely remedied if servers use HTTPS, the variant of HTTP that requires a server to authenticate itself. In that case, the attacker would first have to forge the digital certificate the server uses to prove it’s authorized to provide boot firmware to devices.
    • The ability to gain physical access to a device is also difficult and is widely regarded as grounds for considering it to be already compromised. And, of course, already obtaining administrative control through exploiting a separate vulnerability in the operating system is hard and allows attackers to achieve all kinds of malicious objectives.
    • That said, obtaining the ability to execute code during the boot process, before the main operating system starts, constitutes a major escalation of whatever access an attacker already has. It means the attacker can neutralize many forms of endpoint protection designed to detect compromises. As such, the attack allows for the installation of a bootkit, the term for malware that runs prior to the OS. Unlike many bootkits, however, the one created by exploiting CVE-2023-40547 won’t survive the wiping or reformatting of a hard drive.
    • Garrett explained:
      • In theory this shouldn’t give an attacker the ability to compromise the firmware itself, but in reality it gives them code execution before ExitBootServices (the handoff between the firmware still running the hardware and the OS taking over) and that means a much larger attack surface against the firmware—the usual assumption is that only trusted code is running before ExitBootServices. I think this would still be called a boot kit—it’s able to modify the OS bootloader and kernel before execution. But it wouldn’t be fully persistent (if you wipe the disk it’d be gone).
    • Fixing the vulnerability involves more than just excising the buffer overflow from the shim code. It also requires updating the secure boot mechanism to revoke vulnerable bootloader versions. That, in turn, raises some level of risk. Paul Asadoorian, principal security evangelist at Eclypsium and author of the blog post that raised awareness of the vulnerability, explained:
      • Users could run into a situation where a DBX (revocation list) update is being applied to their system that defines the currently installed bootloader as invalid in Secure Boot. In this case, upon reboot, Secure Boot would halt the boot process. As long as the user can get into their BIOS/UEFI settings, this can be remedied by temporarily disabling Secure Boot (if the user has set a BIOS password this would make recovery extremely difficult). The Linux utility fwupd has facilities to update the Secure Boot DBX and will provide warnings to the user if the currently installed bootloader is in the pending DBX update.
    • Another challenge in updating, Asadoorian said, involves the finite amount of space reserved for storing revocations in a portion of the UEFI known as the DBX. Some lists could contain more than 200 entries that must be appended to the DBX. With many shims capping the space at 32 kilobits, the DBX could be close to running out of space.
    • Yet another step in the patch process is signing newly patched shims using a Microsoft third-party certificate authority.
    • Developers overseeing Linux shims have released the patch to individual shim developers, who have incorporated it into each version they’re responsible for. They have now released those versions to Linux distributors, who are in the process of making them available to end users.
    • The risk of successful exploitation is mostly limited to extreme scenarios, as noted earlier. The one scenario where exploitation is most viable—when devices receive boot images over an unencrypted HTTP server—is one that should never happen in 2024 or the past decade, for that matter.
    • That said, the harm from successful exploitation is serious and is the reason for the severity rating of 9.8 out of a possible 10. People should install patches promptly once they become available.
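The fwupd route Asadoorian mentions can be checked from the command line. These are stock fwupdmgr subcommands, though whether a “UEFI dbx” device actually shows up depends on the machine and firmware:

```shell
# Sketch: checking for and applying a pending UEFI dbx (revocation list)
# update with fwupd. Run as root; device availability varies per system.
fwupdmgr refresh       # pull the latest update metadata from LVFS
fwupdmgr get-devices   # look for a "UEFI dbx" entry in the device list
fwupdmgr get-updates   # shows whether a dbx update is pending
fwupdmgr update        # applies it; fwupd warns if the installed bootloader
                       # appears in the pending dbx, as the quote notes
```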
  • BitLocker encryption broken in 43 seconds with sub-$10 Raspberry Pi Pico — key can be sniffed when using an external TPM
    • from Tom’s Hardware
    • Bitlocker is one of the most easily accessible encryption solutions available today, being a built-in feature of Windows 10 Pro and Windows 11 Pro that’s designed to secure your data from prying eyes. However, YouTuber stacksmashing demonstrated a colossal security flaw with Bitlocker that allowed him to bypass Windows Bitlocker in less than a minute with a cheap sub-$10 Raspberry Pi Pico, thus gaining access to the encryption keys that can unlock protected data. After creating the device, the exploit only took 43 seconds to steal the master key.
    • To do this, the YouTuber took advantage of a known design flaw found in many systems that feature a dedicated Trusted Platform Module, or TPM. For some configurations, Bitlocker relies on an external TPM to store critical information, such as the Platform Configuration Registers and Volume Master Key (some CPUs have this built in). With an external TPM, the TPM communicates with the CPU across an LPC bus to send it the encryption keys required for decrypting the data on the drive.
    • Stacksmashing found that the communication lanes (LPC bus) between the CPU and external TPM are completely unencrypted on boot-up, enabling an attacker to sniff critical data as it moves between the two units, thus stealing the encryption keys. You can see his method in the video below.
    • With this in mind, the YouTuber decided to test an attack on a ten-year-old laptop with Bitlocker encryption. His specific laptop’s LPC bus is readable through an unpopulated connector on the motherboard, located right next to one of the laptop’s M.2 ports. This same type of attack can be used on newer motherboards that leverage an external TPM, but these typically require more legwork to intercept the bus traffic.
    • To read data off the connector, the YouTuber created a cheap Raspberry Pi Pico device that could connect to the unsecured connector just by making contact with its metal pads. The Pico was programmed to read the raw 1s and 0s off the TPM, granting access to the Volume Master Key stored on the module.
    • Stacksmashing’s work demonstrates that Windows Bitlocker, as well as external TPMs, aren’t as safe as many think because the data lanes between the TPM and CPU are unencrypted. The good news is that this attack method, which has been known for some time, is relegated to discrete TPMs. If you have a CPU with a built-in TPM, like the ones in modern Intel and AMD CPUs, you should be safe from this security flaw since all TPM communication occurs within the CPU itself.
  • Mozilla’s new service tries to wipe your data off the web – for a price
    • from Mozilla Blog
    • Mozilla is introducing a new paid subscription privacy monitoring service called Mozilla Monitor Plus. For $8.99 a month under its annual subscription, Mozilla says it will automatically keep a lookout for your information at over 190 sites where brokers sell information they’ve gathered from online sources like social media sites, apps, and browser trackers, and when your info is found, it will automatically try to get it removed.
    • Mozilla Monitor product manager Tony Cinotto told The Verge in an email that Mozilla partners with a company called Onerep to perform these scans and subsequent takedown requests. While requests usually take between seven and 14 days to process, he says sometimes information can’t be removed. Mozilla will keep trying, he added, but will also give Plus members instructions for attempting removal themselves.
    • A GIF showing the process of setting up a scan with Mozilla Monitor.
    • Mozilla Monitor in action. Image: Mozilla
    • Basic Monitor members will get a free scan and one-time removal sweep, plus continual monthly data broker scans afterward, Mozilla says. The paid subscription builds on the free dark web monitoring of Mozilla Monitor (previously Firefox Monitor), a service Mozilla debuted in 2018. Mozilla has offered other privacy-focused services in the last few years, such as Mozilla VPN and Firefox Relay.
    • Mozilla says its data broker scans can find details online like your name and current and previous home addresses but adds that it could go as deep as criminal history, hobbies, or your kids’ school district.
    • Services like this are fairly common, but they’re not all that well known to most people, and searching for them is as likely to turn up sketchy scam sites as it is legitimate service providers like, for instance, DeleteMe. That makes it difficult to suss out trustworthy companies, which is really where Mozilla’s reputation as a privacy-first subsidiary of the open-source nonprofit Mozilla Foundation could help.
    • Mozilla Monitor Plus is available now for $8.99 per month, while standard Mozilla Monitor remains free.
    • Counterpoint: The Linux Experiment reported (2/17) that Mozilla has laid off 60 employees (5% of their workforce)… and that most of the layoffs were staff working on the Monitor Plus project.

— Play Wanderings Transition Bumper —

Bi-Weekly Wanderings

30 minutes (~5-8 mins each)

  • Joe
    • I pulled out my 8BitDo controller, the one with the 3D-printed phone mount attached, so that I could test out some gaming on the new phone, the S24 Ultra.
    • I set up my usual with NES and Sega Genesis. I finally finished my ‘speed run’ of the original Super Mario Bros. I definitely did not set any records, and I did need to use save states, but I was happy with the outcome.
    • I also set up my all-time favorite game, the Genesis version of Shadowrun. For that one I decided to try something a little bit different and use DeX and one of the portable monitors that I got for Christmas. After changing the settings in RetroArch I got it to full screen, and it looks great. It plays very smoothly as well.
    • A couple of times I have really thought about buying a Switch just to get the remastered version of GTA VC, but I also found out that you can get it on Android for free if you have Netflix, and I do. I also decided to play that with DeX, and it looks brilliant. It is a bit hard to control, but that game always was.
    • The whole game looks brilliant, and between the emulation and the ROMs that I already have, along with the excellent-feeling 8BitDo controller and the RetroPie I have set up, I think I will be good to go in the classic gaming department for a while.
    • Also after a couple of days of playing it started dying randomly. I might play 5 or ten minutes and then the game would just shut off and I would get dumped back to the main page of my phone. Whether or not it was hooked to dex
    • The other day I turned on my full backup machine to copy all of my Nextcloud setup. That’s the OneGX, BTW, with a microSD card. It was going a little slow, so I turned on my travel router out in the garage to speed things up. That worked for a while, but then the whole network dropped in my garage. Just in my garage, though; the rest of the house was fine.
    • This led me to assume that it was either the network switch in the garage or the cables leading to it. I would get a few seconds of connection when I restarted the switch, but then it would drop again.
    • I restarted the routers and the modem and the switch, but nothing worked. Then I decided to swap the cables around, and when I disconnected my travel router, everything started working fine. More testing will be required to see what the issue actually is.
    • It is obviously not the cable to the router, since I was able to log directly into it.
    • And right when I fixed that issue and got the network back up and running, I started getting very bad artifacting on the screens on the server in my garage. This kind of leads into the Innards, because we need more ideas for topics.
    • What I ended up doing, though, was buying a slightly better portable router: the GL.iNet GL-SFT1200, which does both 2.4 and 5 GHz. It is pretty fast and really stable. For now I have it set up as an access point, so it has no DHCP table of its own and is providing excellent wifi in my garage. Later I plan to test it as a repeater or a receiver: it will take the wifi connection from the rest of the house and turn it into a wired connection to my switch in the garage. I want to see if that gives me a faster connection out there.
    • It is a bit more than I would normally spend on a portable router, being 40 dollars when I generally prefer to spend 15 to 20. But I think the extra ethernet port, the gigabit connections, and the faster wifi band are worth it, since I plan on using it more regularly while still being able to grab it and go on the road.
    • I also received a couple of things from Moss in the mail: a Jelly Comb trackball mouse with a potentially dodgy right click, which I am currently testing, and a 17-inch System76 Kudu3 with a mostly seized hinge. That one I need to strip all the way down to see what is keeping the hinge from moving freely, or to find a potential replacement. The laptop looks awesome, and I am sure it runs like a beast.
  • Moss
    • I got a couple of days of work. Got to play with the PineTab 2 a fair bit. Decent battery life, but some things just would not open, a combination of the experimental status of the device and the school wifi.
    • I had a guy come over to look at my cheaper guitar, an Alvarez semi-acoustic, and he wound up buying my better one, an Ibanez full acoustic, both with pickups. There was an $85 difference in price. I still have two guitars for sale, a 6-string semi-acoustic and a 12-string. After thinking about it, I pulled the semi-acoustic from the market, but will sell it if the right buyer shows up.
    • I haven’t had much time to do anything beyond personal maintenance. I’m really letting my podcast teams down, leaving most of the infrastructure work to them. Hazards of getting old, I guess, but it doesn’t feel good.
    • I did make a $4 sale on my Bandcamp page, I guess that’s something. $3.71 after Bandcamp and PayPal got their share. If you feel a need to spend money, please visit my Bandcamp page, link in the Show Notes. Most of my music can be downloaded for free if you don’t want to pay for it, but I hope you will.
  • Majid
    • From a Linux perspective, I think I’ve finally got the appeal and usability of Arch distros. As mentioned before, I’ve got Manjaro on my Asus Zenbook, and I was getting really annoyed with the jank and paper cuts while using it. However, I can now see why people use it: software. Yes, software may not be officially supported, or you may not be able to get a .deb or .rpm like you can on Debian/Ubuntu/Fedora etc., but you get it all in the AUR. Like everything! I was looking for some crypto wallets and was able to find them in the AUR. I might still distro-hop, but at least I’ve learnt something.
    • Speaking of distro-hopping, I was planning on going to Mint on the Zenbook. For someone who is on mintCast, and over a decade on Linux, I haven’t used Mint a whole lot. I realised that even my recent forays into 21.2 and Feren OS have been on my desktop rig. One of the reasons I had shied away from Mint (and Cinnamon in particular) was a perceived lack of touchscreen and touchpad gestures. Well, Mint have been doing a lot of work on that front (even on X11), so I decided to download the latest 21.3 ISO. I went for the EDGE ISO, as it’s a lot newer than the normal version. I suppose I could wait until 22, but that will probably hit in May, a good few months away.
    • Speaking of booting ISOs, I finally got around to trying Ventoy. I know we’ve discussed it a lot on the programme, but I only tried it this week. I love it! Makes distro-hopping a breeze! Highly recommended.
    • Not been too well this week; I had to take a few days off work due to dizziness. I think it was a viral cold that then affected my ears (labyrinthitis), and it almost caused me to have an accident on the way to work. I then decided it wasn’t safe for me to drive (or practise), so I went home and told the department. They were OK about it (I think! Not been back yet).
    • Speaking of which, I’ve got my monthly 24-hour shift tomorrow. Fun.
    • Interesting IT-related thing at one of the private hospitals I work at: they’ve implemented paperless electronic records and prescribing. It always amazes me how rubbish these software solutions are. I mean, this wasn’t the worst I’ve used, but how is it 2024 and we still have these challenges?
    • Managed to sell most of the gear I had for sale. The phones and the tablet have gone; the laptop and earbuds haven’t. I have realised that you lose a lot of money on these due to release cycles, so it might be more cost-effective to keep them.
    • Really enjoying the S24 Ultra. Is it worth its price? Probably not, but the whole package works really well, and the camera is really good.
    • Enjoying the pasting the Tories are getting in the by-elections. #Schadenfreude
    • Kids had a week off school this week; they seemed to spend most of it in bed or watching TV, which, tbh, is what I would do.
    • Got into an Irish gangster series on BBC iPlayer called Kin. Not exactly original, but fantastically acted: Charlie Cox, Ciarán Hinds, Aidan Gillen, real quality. Started Mr & Mrs Smith; a bit slow. I really appreciate the Netflix function of increasing the speed. Maybe I need to develop some patience! (pun intended)
    • Corey Taylor’s new album is pretty good. Also listening to Halocene (a covers band); they do a good version of “Bury a Friend”. Need some more music recommendations! C’mon, you know you want to!
  • Eric

— Play Innards Transition Bumper —

Linux Innards

30 minutes (~5-8 minutes each)

  • This is to go over the process that led me to nuke and pave my server not once but twice, and to cover some of the applications I consider essential in my setup, because I installed them the second time around. Also to cover what worked and what didn’t.
  • I am not sure what actually caused the issue, but the other day I started getting artifacting in Chrome again. This is after getting a new graphics card from Bill. Yes, it is an older graphics card, but still very powerful and greatly appreciated.
  • The artifacting wasn’t too much of a problem, but then suddenly my resolution was way off and very low. Both monitors showed the same image, and the Displays tool couldn’t change the resolution or separate the monitors. And if I switched the second monitor away to view my work computer, it would not be detected when I switched back.
  • I was able to put in a temporary fix for the resolution by adjusting GRUB to force a specific resolution at boot time, which made things usable during the troubleshooting process.
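A sketch of what that kind of GRUB tweak typically looks like in /etc/default/grub; the 1280x720 mode here is a placeholder, not necessarily the one I used:

```shell
# /etc/default/grub -- pin a fixed framebuffer resolution at boot
# (1280x720 is a placeholder; substitute the monitor's native mode)
GRUB_GFXMODE=1280x720
# hand the chosen mode on to the kernel instead of letting it renegotiate
GRUB_GFXPAYLOAD_LINUX=keep
```

After editing, `sudo update-grub` regenerates the boot config so the pinned mode takes effect on the next boot.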
  • Since the computer was not detecting the type or resolution of the monitors, the first thing I decided to do was check the cables and adapters. I had gotten a couple of extra DisplayPort-to-HDMI adapters and some extra HDMI cables, but that made no change.
  • My concern at that point was that either the graphics card or the power supply had a hardware issue. But since there was very little I could do about that, I put it on the back burner for the rest of the testing process.
  • The next thing I tested was the drivers, to the extent that I could. I looked through the logs for some kind of error, and using xrandr I could see that the drivers had apparently not loaded the way they should have.
  • I spent several hours digging through system logs, trying to unload and reload modules, and even tried the amdgpu drivers from AMD. These gave me the same errors as the last time I nuked and paved, where it could not install amdgpu-dkms.
  • Also, looking through the drivers provided by AMD, it doesn’t look like this specific model is included.
  • I decided to try an upgrade to 21.3 following the upgrade path, and while the upgrade worked without any issues, I still had the same problem. I was hoping the new kernel would correct it, but it did not.
  • Still not certain it wasn’t a hardware issue, I decided the easiest way to rule that out was to load up a live disk and see what happened to the monitors.
  • So I downloaded the most recent version of Mint, loaded it onto my Ventoy stick, and gave it a test.
  • The hardware was fine; everything worked as it should in the live image. But by this point I was getting a little frustrated with the drivers and decided to do a nuke, pave, and restore. I logged back into Mint and made sure my backups were recent, including my applications and the VM that I use. The VM needed to be copied to an external drive so that it could be copied back later, once things were up and running.
  • This time I also made sure to copy my fstab, as I forgot it last time and that made things a little more difficult to set back up, at least in regards to a lot of the automation that I have.
  • Well, I loaded back into the live ISO and did a full nuke and pave. This went well: everything loaded back up and the monitors were working as they should. The next thing to do was use the built-in tools to do a full restore, first the applications and then the files, including all the configuration files for the applications.
  • The nice thing about the restore tool is that it tells you which applications it was not able to restore because they are not in the repos, which gave me a nice starting point for getting things running the way they were before: things like Chrome and Audiobookshelf and a few others. I made sure to put those back before I did the file restore, and this allowed me to continue right where I left off with Chrome.
  • At this point I also restored the fstab to put all my various mount points back where they belong, so that all of my automation would just plain work. I made a mistake, though, and restored the whole file instead of copying and pasting just the portion with the external mount points. This broke my boot, which I found out about shortly after, when I restarted the PC to make sure everything still worked.
  • I was still early in the process, but I tried to fix my mistake anyway and restore the other fstab. I guess there was another error somewhere else, because it would still only boot to the command line, and startx would only throw permission errors, even when running as root. So I decided to go a different way and start the whole process over with another nuke and pave, without doing a restore this time, building everything back up from scratch except for the VM.
  • So after I nuked and paved again I started installing applications as I thought I would need them.
    • VLC comes first; I almost do this by reflex when I install a new system. It has been my favorite video player for years and is also a great tool for testing things like cameras.
    • Next I installed Chrome and logged in so that I would have my bookmarks and be able to test as I started setting up various services again.
    • Then came all the communication applications I use: Telegram, Discord, WhatsApp. Although I switched to making WhatsApp a webapp a while back, and it seems to be much more stable than the whatsapp-desktop application.
    • Next I installed snapd. Yes, snap; I tend to use what works and what makes things easy. I installed snap so that I could install Nextcloud, but I kept getting errors with any snap-based application I attempted to install, and my research did not yield any results. Because I had a lot of other things to do, I moved on to installing Nextcloud using the community script. I usually don’t like installing from other people’s scripts, but I thought I would give it a try. Something did not want to work with that either, and I ended up installing Docker and running Nextcloud from there. I was able to point it back at the old Nextcloud folder I had on an external drive. This did cause some issues because of the volume of things being copied again, but what can you do?
    • I have not used Docker for a while, but I did add the commands to make it restart on every reboot and whenever it has an issue.
    • I also needed to relearn how to exec commands so that I could go back in and add the trusted domains:
      • docker run -d -p 8080:80 amd64/nextcloud
      • docker update --restart=always #########
      • docker exec --user www-data condescending_morse php occ config:system:set trusted_domains 1 --value=ADDRESS_PUBLIC
    • After that I installed Audiobookshelf. For this I was able to add and use the repos for the install. It means I will have to redo a lot of the matching I had done previously, again; I will need to figure out a way to keep a copy of all that work somewhere. I also probably should have used the Docker image for this, since I will be running Docker anyway for Nextcloud. Something to keep in mind for next time.
    • Next I rebuilt the fstab using just the parts of the old fstab that I needed and restarted the system to make sure that everything worked. Woohoo, all good.
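That selective rebuild can be done mechanically rather than by hand. A sketch, assuming the external drives all mount under /mnt (the backup path in the example is hypothetical):

```shell
#!/bin/sh
# Pull only the non-comment lines whose mount point is under /mnt out of a
# backed-up fstab, so they can be appended to the freshly generated one
# instead of overwriting it wholesale (the mistake that broke my boot).
extract_external_mounts() {
    grep '^[^#][^ ]* */mnt/' "$1"
}

# e.g.: extract_external_mounts /backup/etc/fstab | sudo tee -a /etc/fstab
```

The root and swap lines never match, so the new system's own entries are left alone.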
    • I then installed the flatpak of Nextcloud Desktop along with Flatseal so that I could make sure it had the correct permissions. With that done, I downloaded the AppImages for Joplin and Cura (Arachne beta 2) and started restoring all of my custom settings in Cura.
    • Making sure the fan settings were correct and the speed was increased, as well as the retraction settings I have found work best for PLA.
    • Joplin I use for note taking, journaling, and calorie tracking. Cura is used for slicing 3D prints.
    • Joplin went pretty easily this time, with a lot less fuss than the last time I installed it, and I was able to get all my notes back from my backup of Nextcloud and even delete the items that had caused me so many problems last time around.
    • Installed VirtualBox 7.1, moved the clone of the VM back onto the M.2 drive for faster access, and kicked it off. It was working just as it should, and seeing all the storage locations it should, because of the fstab fixing I had done.
    • Last time I installed VirtualBox I had some issues with the graphics drivers that required me to downgrade, but this time no problems were encountered and everything just worked. My VM clone started right up, and because it could see all the storage I had provided in the correct locations, it was able to continue as though nothing had happened.
    • After that I set up my file-mover automations: move all .torrent files from the Downloads folder to the watch-folder location. That is, after I changed the default download location back to an external drive, the way it was before, so as not to use up all of the M.2 with pointless downloads. Then I added another file mover for the same stuff in a folder within Nextcloud, so that I can start the process from anywhere.
    • Then I set up all the automations based on where files from the downloader get loaded to. The VM only has access to limited locations, so the rest is handled by crontab. Have to be able to separate all those Linux ISOs somehow.
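A .torrent mover like the one described above can be sketched as a tiny script called from crontab; both folder paths are placeholders for the real Downloads and watch-folder locations:

```shell
#!/bin/sh
# Move any .torrent files from a download folder into the torrent client's
# watch folder. Both paths are placeholders -- substitute the real ones.
move_torrents() {
    src=$1; dest=$2
    for f in "$src"/*.torrent; do
        [ -e "$f" ] || continue   # glob stays literal when nothing matches
        mv -- "$f" "$dest/"
    done
}

# e.g.: move_torrents "$HOME/Downloads" "$HOME/torrent-watch"
```

A crontab entry such as `* * * * * /usr/local/bin/move-torrents.sh` would then sweep the folder every minute.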
  • Next, install some of the things that I use less often but still use:
      • KDE Connect
        • I use this to control my computer from my phone. This is good when I am using an exercise bike or my Gazelle.
      • MATE Desktop
        • Used with either X2Go or Chrome Remote Desktop
      • X2Go server and client
        • Allows me to access all of my other computers graphically, and to access this one remotely
        • Installing this also installs the OpenSSH server, which I use for ssh and for sshfs
        • I did need to pull up the old backup of everything from the previous /home to get my ssh files back (by that I mean the RSA keys), so I don’t break automation on the other systems
        • But because of the changes, I do need to log into each of the remote systems I use and then log back into this machine, so that I can clear the old key and accept the new one
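Clearing the stale key on each remote machine is a one-liner; the hostname "myserver" and the file path are placeholders:

```shell
#!/bin/sh
# On each remote machine, drop the stale known_hosts entry for the rebuilt
# box, so the next ssh connection offers the new host key for acceptance.
# "myserver" and the known_hosts path are placeholders.
forget_host() {
    host=$1; file=$2
    ssh-keygen -R "$host" -f "$file" >/dev/null 2>&1
}

# e.g.: forget_host myserver "$HOME/.ssh/known_hosts"
```

After that, the first `ssh myserver` prompts to accept the machine's new key.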
      • Chrome Remote Desktop
        • The only remote application I have found that works well from an Android phone and on DeX
        • I am still looking for another solution, because I am not a big fan of allowing Chrome that much access
      • scrcpy
        • Installing this also means installing ADB, which then allows for wireless ADB
        • This lets me bring up my phone screen on my computer and control it with my keyboard and mouse
      • OBS Studio
        • I use this for streaming to YouTube, to cover the stream when Bill is not available, much like today
      • Droidcam
        • I use this when I am feeling lazy and don’t want to go get my webcam from the other room; I just use my phone
      • v4l2loopback
        • This is a very important driver used by both Droidcam and OBS to create virtual camera inputs, and I will be putting crontab scripts in place to reload these modules on every boot. It was also a bit of a pain finding the correct way to adjust the resolution so that it looks good
        • The module is included with the install of Droidcam, or at least a version of it is, but you also need to install some utils in order to use it properly
          • After finding it in the Droidcam folder you can:
          • patch -p1 < [agent patch path]/resources/v4l2loopback/v4l2loopback.patch
          • sudo apt install v4l-utils
          • sudo apt install v4l2loopback-utils
        • And then set the size with:
          • v4l2loopback-ctl set-caps "video/x-raw, format=I420, width=1280, height=720" /dev/video9 &
            • The utils install is what gives you the ctl command
          • which is what will need to go into an automated startup script, along with
          • sudo modprobe v4l2loopback
          • This makes the image sent to the computer 1280×720 and really makes it look a lot cleaner
      • After that I installed Mumble, which I use for a couple of the other podcasts I do, and Audacity for audio editing
      • I also installed Barrier to use along with my OneGX, which was hooked up next to my main computer while the install process was going on, connected to the small portable monitor attached to my chair that I also use with DeX and my Pi Zero arcade
    • All of this does make me wonder whether the problem with my old graphics card wasn’t just a software issue, and whether I should switch back to test it out and get updated drivers. The graphics card Bill sent me is better, but doesn’t seem to have any ongoing support.

— Play Vibrations Transition Bumper —

Vibrations from the Ether

20 minutes (~5 minutes each)

  • Briezay
    • Hi Joe,
    • The guy who started the MintCast pod mentioned on the Changelog podcast that it was still going. From Mintcast I then found your podcast and email.
    • I’m looking to change careers. Moving into the Linux space seems more aligned with my values. (I’m reading “Working in Public” at the moment to understand opensource better). There is a bewildering amount of resources and content out there. Looking through it can feel quite unstructured. I was wondering if you had a previous pod that might have covered this? Or know of other resources? Or just have some advice you can share?
    • I’m in my mid-40s now. For health reasons, I should probably try to work from home most of the time. I understand it will take a few years to build the skills, qualifications, and experience that I’ll need. I have funds for training, etc., and I have about 20 hours a week. I’m in the UK. I presume that a support role might be my best chance to get my foot in the door, and maybe I could move on to an infrastructure job later. I presume part-time contract work might be a good starting point.
    • I left the IT industry 11 years ago. My career was mostly DB design implemented on SQL Server. There was a lot of T-SQL work, data cleansing, and reporting, and a lot of performance tuning too. I loved set-based thinking. I completed a major data migration project. Towards the end of that phase of my career, there was a lot of people management: managing outsourced dev projects in India and Poland and working as the scrum master for the local team in the UK. But I’d had enough of the corporate world and went off and did a simple physical job that paid well enough.
    • My .NET dev skills are beyond rusty now. I still remember the days when we rolled out CruiseControl when it first came out (we moved on to Octopus Deploy). And I remember using MS Visual SourceSafe before switching to git…
    • So that’s a bit about me. To rotate back into a tech job, I’ll have to start at the beginning again and make a fresh start. Any advice you could share would be deeply appreciated.
    • Best Regards,
    • Bob

— Play Check This Transition Bumper —

Check This Out

10 minutes

  • There is a really long, detailed, and highly informative article explaining The Fediverse on The Verge. If you’re interested, you should check it out.

Housekeeping & Announcements

  • Thank you for listening to this episode of mintCast!
  • If you see something that you’d like to hear about, tell us!

Send us email at [email protected]

Join us live on Youtube

Post at the mintCast subreddit

Chat with us on Telegram and Discord,

Or post directly at https://mintcast.org

Wrap-up

Before we leave, we want to make sure to acknowledge some of the people who make mintCast possible:

  • Someone for our audio editing
  • Archive.org for hosting our audio files
  • Hobstar for our logo, initrd for the animated Discord logo
  • Londoner for our time syncs and various other contributions
  • Bill Houser for hosting the server which runs our website, website maintenance, and the NextCloud server on which we host our show notes and raw audio
  • The Linux Mint development team for the fine distro we love to talk about <Thanks, Clem … and co!>

— Play Closing Music and Standard Outro —

Linux Mint

The distribution that spawned a podcast. Support us by supporting them. Donate here.

Archive.org

We currently host our podcast at archive.org. Support us by supporting them. Donate here.

Audacity

They’ve made post-production of our podcast possible. Support us by supporting them. Contribute here.

mintCast on the Web

Episode Archives

This work is licensed under CC BY-SA 4.0

This Website Is Hosted On:

Thank You for Visiting