<?xml version="1.0" encoding="utf-8"?>

<feed xmlns="http://www.w3.org/2005/Atom">
    <title>kzimmermann's articles</title>
    <link href="https://tilde.town/~kzimmermann/articles/atom.xml" rel="self" />
    <link href="https://tilde.town/~kzimmermann/articles/" />
    <updated>2024-01-09T21:31:20.958275Z</updated>
    <author>
        <name>Klaus J Zimmermann - @kzimmermann@fosstodon.org</name>
    </author>
    
    <entry>
        <title>30 days on a Raspberry Pi</title>
        <link href="https://tilde.town/~kzimmermann/articles/30_days_on_a_pi.html" />
        <updated>2022-06-23T22:12:45.435585Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>30 days on a Raspberry Pi</h1>
<p><img alt="Screenshot showing my current uptime on my Raspberry Pi: 30 days" src="/~kzimmermann/images/pi30days.png" /></p>
<p>I am certainly no stranger to using a Raspberry Pi as a desktop. Ever since I got my hands on a Pi 4, I embarked on a crusade to turn it into my very own fully-fledged miniature desktop PC, complete with a graphical desktop environment and all the productivity that a <a href="https://tilde.town/~kzimmermann/articles/dontlikeitcreateit.html">Free Software</a> OS can offer. Initially, I was hooked in a way similar to when <a href="https://tilde.town/~kzimmermann/articles/first_starting_linux.html">I first started using Linux</a> more than 12 years ago at the time of writing. </p>
<p>A short while later, however, this excitement simply died out. My Pi was reduced to a NAS of sorts, a way to passively sit in the network and every now and then serve as a backup medium, or an intra-network file transfer facilitator. Of course, there were opportunities to learn new things about maintaining a networked server (most of my usage of Linux until then was desktop-focused) and I learned a lot about deploying services on my own network. But a few weeks later, I set the Pi inside the cabinet next to my router and then... sort of forgot it even existed in my computing routine.</p>
<p>Earlier last month, however, I decided to give the Pi a spin again. Just to blow the dust off the top and see how it was doing, nothing really serious. Fast forward to now, and I have just completed a streak of over 4 weeks of using nothing but my Raspberry Pi as my personal computer. That is: 4 weeks of using it for <em>everything</em> including full browsing, messaging, and even work-related stuff - and I have absolutely no intention of stopping!</p>
<p>This has been a radical change even for me, and I was left wondering: how did this happen so fast? Thinking back, I came up with three major reasons that kickstarted it:</p>
<ul>
<li>Settling on a definitive passive cooling solution that maintains the working temperature decently.</li>
<li>Adjusting the color temperature of my monitor to something warmer. </li>
</ul>
<p>And the biggest:</p>
<ul>
<li><em>Setting the Raspberry Pi's CPUs back to 100% of their 1.5GHz capacity</em>.</li>
</ul>
<p>In hindsight, this last one was pretty much the game-changer: the single detail that accelerated the Pi so it could feel fast enough again, close to my laptop. See, I'll admit that it took me a while to realize that the Raspberry Pi does not run at 100% of its capacity all the time - i.e. <a href="https://forums.raspberrypi.com/viewtopic.php?t=152549">it throttles down the CPU</a>. Back when I tried my hand at its <a href="https://tilde.town/~kzimmermann/articles/rediscovering_puppy_linux_raspup.html">Puppy Linux flavor</a> I really thought the Pi felt slow because of its hardware limitations instead of the throttling. </p>
<p>Before I get ahead of myself and spill the beans on how to take back control of the CPU, let's see each one of these factors in detail: </p>
<h2>Arrange a definitive passive cooling solution</h2>
<p>People working with older Pi models, perhaps the ones before model 3, probably did not have to worry about the issue of cooling at all. Not only were these not usually chosen for desktop-like usage (cheap servers and "IoT" being the frequent use cases), but they also did not produce a lot of heat, due to their modest CPU specs. Using the default case or leaving the board barebones somewhere protected was more than enough for extended operation.</p>
<p>Fast-forward to the Raspberry Pi 4: using the standard case can make it uncomfortably warm for long-term continuous usage. Plastic isn't a good heat conductor, and I've dealt with this issue in the past by using an external USB fan "sandwiched" between the lid and the base (no joke!). There are also some fans that are powered by the GPIO pins. These active coolers, however, have the annoying downside of <em>noise</em>, which gets louder over time as the motor wears out. Plus, they're mostly overkill: you don't need that much cooling unless you're compiling things or gaming frequently.</p>
<p>So what do we do? In comes <em>passive cooling</em>: essentially, get yourself a case made of metal, designed to allow maximum dissipation of the heat the Pi generates. Here's an example cast in aluminum:</p>
<p><img alt="A raspberry pi 4 case in black aluminum. No fans, only metal irradiating." src="/~kzimmermann/images/pi_case.png" /></p>
<p>Furthermore, this sort of case gets more efficient depending on how you position it relative to the ventilation in the room. Its geometry tells us that the greatest surface areas are at the bottom and the top, so we can increase passive heat dissipation by leaving it in a vertical position, with the LEDs and SD card slot facing the table:</p>
<p><img alt="How I set my Raspberry Pi on the table" src="/~kzimmermann/images/pi_set.png" /></p>
<p>With this setup, CPU temperatures of the Pi settled around a comfortable 45~53 degrees Celsius, making it very usable even under high loads (videos, Javascript, etc).</p>
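<p>If you want to watch the temperature yourself, a quick sketch from the terminal (assuming the standard Linux thermal zone path, which may differ on your kernel; the sensor reports millidegrees, so we divide by 1000):</p>
<pre><code># convert the kernel's millidegree reading to degrees Celsius
to_celsius() {
    awk '{ printf "%.1f C\n", $1 / 1000 }'
}

# the usual sysfs path for the SoC temperature sensor
cat /sys/class/thermal/thermal_zone0/temp | to_celsius
</code></pre>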
<h2>Set the color of your monitor to something warmer</h2>
<p>The second big peeve of the Pi is that you cannot use something like <code>redshift</code> to dynamically change the temperature of the screen to warmer (i.e. redder) colors in the evening. This has to do with the proprietary Broadcom video driver <a href="https://raspberrypi.stackexchange.com/a/61942">not being completely compatible</a> with the <code>randr</code> backend it uses, so we're not able to set the monitor colors or brightness from the software side. This is pretty annoying, especially as I've grown used to having <code>redshift</code> handle that automatically on my laptops, and that bright blue light is pretty harsh after a certain hour.</p>
<p>I've tried wearing orange-tinted glasses with it, or taking refuge in only the terminal (with warm fg colors) after the sun sets, all with mild success rates. Then I thought about something even easier: is there a way that I can set the intensity of the RGB colors in my monitor to taste? And voilà, problem solved.</p>
<p>Most monitors will have a stock "warm colorscheme" setting, but I found that these are often not warm enough for me (barely yellowing the screen). Luckily, my monitor allows me to set the intensity of each channel individually and save the preset, so I spent some time fine-tuning it until it looked like what I had on my laptop. Now every evening I just switch to night mode and can use my Pi into the late night!</p>
<h2>Set the CPUs back to 100% capacity</h2>
<p>Ok, so here we are: the big point of the article. TL;DR: you can set your CPUs back to 100% of their 1.5GHz capacity at all times, unlike the throttled-down 600MHz that comes stock. But first, let's talk about some of my past failed attempts at this:</p>
<h3>Early attempts and frustration</h3>
<p>I first got a hint of this throttling issue when I tried the <a href="https://tilde.town/~kzimmermann/articles/learning_freebsd_as_linux_user.html">FreeBSD</a> image for the Pi. True to their reputation for extensive documentation, there was a tip scribbled below the install notes about making FreeBSD more usable on the Pi. The solution was as simple as enabling the <code>powerd</code> service by adding this line to <code>/etc/rc.conf</code>:</p>
<pre><code>powerd_enable="YES"
</code></pre>
<p>This apparently "normalizes" the power supply to the CPU to a "performance" level, and sure enough, after a reboot, the Pi with FreeBSD was a new machine: zero screen latency issues and a snappiness that resembled my laptop in pretty much every aspect!</p>
<p>I would have kept FreeBSD in there, since it's also an amazing OS, but there was one major caveat: lack of support for WiFi and sound. Though I could've used a USB dongle for WiFi (not optimal, but hey), the lack of sound was a large disappointment. This left me wondering whether the same could be done in the Linux world, where these were not a problem. Thus began my search for a solution, which took much longer than I had ever thought.</p>
<p>If you blindly follow what's advised in the web when you search for <em>how to use full clock cycle of the raspberry pi</em>, you'll wind up editing the <code>/boot/config.txt</code> file, increasing the voltage, overclocking the CPU and rushing to buy a fan to cool off the poor little guy - not to mention the risk of damaging it in the long run. No thanks!</p>
<p>Every time I read yet another article that preached overclocking, it was another day that I could not lift the Pi back to full CPU capacity, and the experience remained at an early-2000s speed of 600MHz per core. A few days of that would be frustrating enough for me to yank it out of the monitor and go back to my laptop until I was curious enough to try again. And so this cycle remained - until I found a magic spell that changed everything.</p>
<h3><code>cpupower</code> changes everything</h3>
<p>Come 2022, I remained Pi-less for a few weeks while I moved to a different place, having only my laptop to work with. After the furniture was in place, I became eager to pick up the little guy and give it a go again, even with the expectation that it'd be a temporary thing. I mean, it always was, right? This time, I looked elsewhere for an answer: the Fediverse. What were my options to achieve 100% CPU capacity - if any? Magically, <a href="https://x0f.org/@FreePietje/107201504847188415">@FreePietje showed me a promising lead</a>: the <code>cpupower</code> command.</p>
<p>Long story short: the Linux kernel can change some aspects of how the CPU behaves on the fly, including the frequency at which it runs. The <code>cpupower</code> command is the knob that sets it. True to how get/set commands usually work, you can use it either to find the current frequency your machine is running at or to set it to a new value - as long as it's within the hardware's limits. </p>
<p>In addition to the numerical value of the clock speed itself, there is also what's known as the "governor" of the CPU, which translates to the mode that your CPU is operating under, like "powersave" or "performance" in a typical laptop.</p>
<p>I ran <code>cpupower</code> for the first time on my Pi and was surprised to discover that it was configured by default to run in powersave mode at 600MHz. Increasing the workload to "throttle" it back up to full capacity did not work. Was there a way to manually crank it up to full capacity? And what was that full capacity, anyway? Have a look at what <code>cpupower</code> gives me on my Pi:</p>
<pre><code>$ cpupower frequency-info
analyzing CPU 0:
  driver: cpufreq-dt
  CPUs which run at the same hardware frequency: 0 1 2 3
  CPUs which need to have their frequency coordinated by software: 0 1 2 3
  maximum transition latency:  Cannot determine or is not supported.
  hardware limits: 600 MHz - 1.50 GHz
  available frequency steps:  600 MHz, 700 MHz, 800 MHz, 900 MHz, 1000 MHz, 1.10 GHz, 1.20 GHz, 1.30 GHz, 1.40 GHz, 1.50 GHz
  available cpufreq governors: performance schedutil
  current policy: frequency should be within 600 MHz and 1.50 GHz.
  The governor "performance" may decide which speed to use
  within this range.
  current CPU frequency: Unable to call hardware
  current CPU frequency: 600MHz (asserted by call to kernel)
</code></pre>
<p>Sweet! So we can clock it naturally all the way to 1.5GHz!</p>
<p>You can also query some special files under <code>/sys</code> to find out about these values - very much in the Unix principle. The files <code>/sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq</code> and <code>/sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_min_freq</code>, for example, indicate the hardware limits for the machine in terms of CPU cycles (values in kHz, by the way). Those two are read-only, but I have a hunch that you could even <em>set</em> the effective limits by writing to their <code>scaling_*</code> counterparts, like <code>scaling_max_freq</code>.</p>
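<p>A quick sketch of that sysfs route (the paths are the standard cpufreq ones; the little helper just converts the raw kHz value to GHz for readability):</p>
<pre><code># pretty-print a kHz value from sysfs as GHz
khz_to_ghz() {
    awk '{ printf "%.2f GHz\n", $1 / 1000000 }'
}

cpufreq=/sys/devices/system/cpu/cpu0/cpufreq
cat "$cpufreq/cpuinfo_max_freq" | khz_to_ghz
</code></pre>
<p>On the Pi 4 this prints <code>1.50 GHz</code>, matching the hardware limit reported by <code>cpupower frequency-info</code>.</p>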
<p>Setting the frequency and the governor is done through the following command:</p>
<pre><code>cpupower frequency-set --max $FREQUENCY --governor performance
</code></pre>
<p>Where <code>$FREQUENCY</code> is a value in kHz, so for maximum performance you'd use <code>1500000</code>.</p>
<p>I've also written a program in C to toggle the performance of the CPU between its maximum and defaults, which I added to my <a href="https://notabug.org/kzimmermann/simple-utils/src/master/cpu-utils">simple-utils</a> repository. Sure, I could've done it with a shell script instead, but then I would have missed the chance to study some C.</p>
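<p>For the curious, the shell version of that toggle would be a short sketch like this - assuming the Pi 4 limits and the two governors reported by <code>cpupower frequency-info</code> above, and run as root:</p>
<pre><code>#!/bin/sh
# toggle the Pi between full speed and stock on-demand scaling

# decide which governor to switch to, given the current one
next_governor() {
    if [ "$1" = "performance" ]; then
        echo "schedutil"     # back to the stock scaling governor
    else
        echo "performance"   # pin to full speed
    fi
}

gov_file=/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
if [ -r "$gov_file" ]; then
    target=$(next_governor "$(cat "$gov_file")")
    if [ "$target" = "performance" ]; then
        cpupower frequency-set --max 1500000 --governor performance
    else
        cpupower frequency-set --max 600000 --governor schedutil
    fi
fi
</code></pre>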
<p>And so now my Pi flies again... sort of.</p>
<h2>Lessons learned</h2>
<p>Combining these three elements (monitor, smarter cooling and full CPU) made my Pi change from a simple hobby or toy to a full-fledged workhorse in the house. I have been using it as my desktop seamlessly, and it feels so natural that sometimes I legitimately forget that it's "just" an SBC. But this feeling is strongest when I'm using efficient Linux applications, which are mostly command-line based.</p>
<p>The truth is: a Raspberry Pi at this stage will probably never be as fast as your laptop. And compared to a buffed up desktop setup? No way. There is simply not enough CPU / GPU power in a 50-dollar piece of hardware to compete with those...</p>
<p>... but does this really matter, in the end?</p>
<p>We can dream up and fantasize about a world of small computing where everyone has their own portable minicomputer at an accessible price, running highly efficient software and accessing smol-web-like decentralized services. Reality is surely a whole lot more complicated: we are often forced to use heavy-ass, Javascrippled websites that have no respect for efficient software, and our dream world is actually only a very limited portion of the actual web.</p>
<p>You can, by choice, filter out the majority of these by <a href="https://tilde.town/~kzimmermann/articles/alternative-frontends.html">choosing Free Frontends</a> or lighter alternatives. But in the end, you will eventually have to face the bloat one way or another. And how did the Pi fare?</p>
<p>Miraculously, it was usable, even if annoyingly slow at times. And the most impressive part: RAM usage never went over 3.2 GB - even when I fully loaded Office365 and the MS Teams webapp into it as part of my usability experiment. This shows me that, for all-around usage, 4GB of memory is probably OK most of the time; no need to whip out the 8GB model just for that. I mean, you consciously <em>chose</em> to use a Raspberry Pi; I bet that you can comfortably use <a href="https://tilde.town/~kzimmermann/articles/living_in_linux_terminal.html">efficient alternatives to clunky webapps</a>.</p>
<p>My biggest peeve is perhaps video. In a small window, it's actually fine, but no matter how small the resolution, it insists on lagging once I put it in full screen. Annoying, perhaps, but hey, it <em>is</em> indeed just a 50-dollar piece of hardware in the end, right?</p>
<h2>The future</h2>
<p>I see nothing stopping me from keeping my Pi as my desktop system here in the house, short of the SD card failing due to I/O wear. Perhaps I'll try different distros to tune the experience (I'm using Debian 11), but if I made it past 30 days, I can basically go forever.</p>
<p>A few things that I think could be fun to try are:</p>
<ul>
<li>Port-forwarding and mirroring this tilde.town site here. Or, who knows, have this be my definitive server for my site from now on.</li>
<li>Put up an I2P router running 24/7. </li>
<li>Hosting an XMPP instance. Maybe on I2P?</li>
</ul>
<p>Oh yes, <a href="https://geti2p.net">I2P</a>. For the longest time, it has been a goal to have a router running constantly, so I wouldn't have to do peer discovery every time I restarted my laptop. And now, thanks to this Pi, it looks like I finally can. The past weeks also had me rediscover this very quirky and cozy anonymous network built for file sharing, and I will probably write something about it. But for now I'm happy exploring it safely with an SBC!</p>
<hr />
<p>Have you ever tried using a Raspberry Pi (any model) as your desktop computer? How was your experience? Let me know on <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
<hr />
<p>This post is number #33 of my <a href="https://100daystooffload.com/">#100DaysToOffload</a> project. Follow my progress through <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
            </div>
        </content>
    </entry>

    <entry>
        <title>Tracking the world in about 80 lines of Javascript</title>
        <link href="https://tilde.town/~kzimmermann/articles/80_lines_javascript_track_world.html" />
        <updated>2022-08-24T08:59:38.204710Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>Tracking the world in about 80 lines of Javascript</h1>
<p>This morning my news feed had an article which was quite an eye-opener: following some boasting about having detailed data on some five billion users around the globe, Oracle corporation has been issued a <a href="https://www.iccl.ie/news/class-action-against-oracle/">class-action lawsuit</a> by an Irish-based civil rights group claiming it has severely violated the privacy rights of entire populations around the globe. </p>
<p>This announcement is interesting for several reasons:</p>
<ul>
<li>When we think about surveillance in general, often the first things that come to mind are government agencies. A piece of news like this serves as a much-needed reminder that nation-state-level surveillance capacity has to be powered by some effort from the private sector.</li>
<li>Even when the topic is surveillance from Big Tech, Google and Facebook are usually the first culprits, with the rest of GAFAM following. Oracle is not usually associated with this topic.</li>
<li>The article reminds us that even if a company's main business isn't a certain niche, Big Tech's game is <em>acquisition</em>. Microsoft is in the IT business, not HR business. Yet, it owns LinkedIn. </li>
</ul>
<p>This third point is especially important in Oracle's case and their "five billion" claim. A company that started out making RDBMSes does not have to specialize in tracking technology as long as it has a highly specialized arm that takes care of it - which brings us to <a href="https://en.wikipedia.org/wiki/BlueKai">BlueKai</a>. Founded in 2008, this former startup specializes in tracking technology for marketing enhancements, and was acquired by Oracle in 2014 for 400 million dollars - a few crumbs compared to Oracle's 42-billion-a-year revenue. Since then, Oracle has quietly joined the surveillance business, managing to avoid the bad rep that Google et al. have received.</p>
<p>However, one thing that the article does not mention is the method used, and years later, we can only wonder: how could they passively but surely have gathered so much data? Surely only some network-sniffing technology developed under top secret cover could've had such astounding reach, right? Well, turns out part of that technology is extremely trivial and, in fact, has been with us since the mid-90s or something: <em>Javascript</em>.</p>
<p>Bluekai works by sniffing user data via snippets of Javascript embedded into webpages. Leveraging businesses' worldwide desire for "market research" to boost sales, and the ubiquitous ignorance about what Javascript even <em>is</em>, this sort of tracking has become as omnipresent as the Facebook Pixel or Google AdSense. And it's simple and trivial to implement thanks to browsers' naïveté in processing unchecked code from the web.</p>
<p>How trivial, you ask? Here's an example snippet from Github, where in <a href="https://gist.github.com/rgonsalk-oracle/8d5de0195a70b18d19ea8ead074f4689">about 80 lines of uncompressed Javascript</a> you can prep, prime and send rich data from a webpage's visitor straight to Bluekai's servers, where it's ingested and processed by the gargantuan machine. Paste that snippet into a webpage, and all it takes is one load to put your visitor in the bag.</p>
<p>Fortunately, though, the fix is likewise not complicated. For those of you who have been heeding the warning against <a href="https://www.gnu.org/philosophy/javascript-trap.html">the Javascript trap</a>, you might be safe already: just block their script from ever running in your browsers. Use an extension like <a href="https://noscript.net/">NoScript</a> or <a href="https://addons.mozilla.org/en-US/firefox/addon/umatrix/">uMatrix</a> to prevent it from being executed by your browser, or go even deeper and prevent it from loading on your computer at all with <a href="https://github.com/StevenBlack/hosts"><code>/etc/hosts</code>-level blacklisting</a>.</p>
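<p>For reference, a hosts-level block is just a couple of lines. These entries are illustrative examples - the actual tracker hostnames change over time, which is exactly why curated lists like the one linked above exist:</p>
<pre><code># /etc/hosts - route known trackers to a dead end
0.0.0.0 bluekai.com
0.0.0.0 tags.bluekai.com
</code></pre>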
<p>Hence, privacy-conscious web browsing is still enough to ensure protection against this, as it has been against several other sorts of threats online. However, this is in no way a reason to put Oracle aside and forget the real threat lurking out there: billions of unsuspecting people surfing the internet are still being precisely tracked in real time, and shame on those who enable such technology. Keep an eye out for other threats like this in the surveillance arms race.</p>
<hr />
<p>Did you know about Oracle's tracking capacity before news of this lawsuit came forward? How do you prevent it from reaching you? Let me know in <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
<hr />
<p>This post is number #35 of my <a href="https://100daystooffload.com/">#100DaysToOffload</a> project. Follow my progress through <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
            </div>
        </content>
    </entry>

    <entry>
        <title>Alice and Bob go Public: when a touch of art can help make a point</title>
        <link href="https://tilde.town/~kzimmermann/articles/alice_and_bob_go_public_celebrate_encryption.html" />
        <updated>2022-10-25T21:17:33.177149Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>Alice and Bob go Public: when a touch of art can help make a point</h1>
<p>Quick post to (belatedly) illustrate <a href="https://www.internetsociety.org/events/global-encryption-day/2022/">Global Encryption Day (October 21 2022)</a>: presenting <strong>Alice and Bob go Public</strong>.</p>
<figure>
    <img src="https://tilde.town/~kzimmermann/images/alice-and-bob-go-public.jpg" alt="Alice and Bob go Public - a drawing of public-key encryption"/>
    <figcaption>
        Alice and Bob go Public - an illustration of some of the main concepts of Public-key Cryptography. Encryption isn't just for hiding or those who have something to hide. Encryption is an individual right and must be protected.  Credit for the artwork: `@maluzeando.lettering`
    </figcaption>
</figure>

<p>It's been a while since I posted any art, but I feel that there should be more of it covering the topic of cryptography - particularly the part of end-to-end encryption. There's nothing scary about it, or illicit. Encrypting private conversations is as natural as the desire to have them face to face with someone - be it your friend, your relative, or your doctor.</p>
<p>And it shouldn't take a single day off the calendar to make us celebrate encryption. Celebrate encryption today, and every day - by practicing it!</p>
            </div>
        </content>
    </entry>

    <entry>
        <title>Alpine Linux on the desktop: awesome feat or fool's errand?</title>
        <link href="https://tilde.town/~kzimmermann/articles/alpine_linux_desktop.html" />
        <updated>2021-05-21T01:38:14.310493Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>Alpine Linux on the desktop: awesome feat or fool's errand?</h1>
<p>Over the past month, I once again decided to explore an interesting Linux distribution: <a href="https://alpinelinux.org">Alpine Linux</a>. Much like <a href="https://tilde.town/~kzimmermann/articles/freebsd_desktop_part_2.html">my previous adventures with FreeBSD</a>, this started out as sheer curiosity on what seemed a lightweight distribution suitable for my Raspberry Pi, but eventually impressed me enough to the point that I wanted to try it out on one of my machines for more "real work."</p>
<p>I couldn't be more in love with it, at least not in a software-related manner.</p>
<p>Tiny footprint, fast in every sense, modular and usable - these are just a few of the qualities I can list to describe it. From a development/testing environment in a virtual machine or Docker image all the way to a full desktop OS with all the end-user functionality included, Alpine Linux excelled and impressed me on several levels. So much, in fact, that Alpine became the first Free Software project that I <em>actively want to support</em> and contribute to instead of only being a passive user of, in a true community-driven fashion.</p>
<p>Thus, I set out to put it through the ultimate test: <em>turn it into a daily-driver-level OS</em>. Although I was confident it would be possible, one thing initially held me back: Alpine Linux self-proclaims that it's a distribution intended primarily for secure operations in embedded applications and servers, and all of its design confirms it. Busybox-based core utilities instead of standalone programs, a load-from-RAM mode that reduces disk usage, musl libc with a more restricted set of applications... all seemingly far from my desktop use case. How hard would making it a desktop be?</p>
<figure>
    <img src="https://i.pinimg.com/originals/f7/c4/70/f7c470f69bc9706bde6202eb6d033145.jpg" alt="a tricycle pulling a heavy log" />
    <figcaption>
        Tiny and Fast Software... pulling a huge desktop load. Fool's errand or genius use of resources?
    </figcaption>
</figure>

<p>This post describes the steps I followed to create a minimal but fully functional desktop machine with all my basic requirements, similar to those described in my FreeBSD essay. As much of it is pretty manual and without much "help" provided, I wouldn't exactly recommend it to beginners, but you could try it out if you're looking for a challenge.</p>
<h2>The target implementation</h2>
<p>Borrowing from my previous experience in turning FreeBSD into a Desktop, here are my requirements for the final implementation of my Alpine Linux desktop:</p>
<ul>
<li>Have a Graphical Environment available, DE or not.</li>
<li>Be able to manage power and sessions (login or out, screen locking, suspend etc)</li>
<li>Be able to manage multiple types of connections, especially WiFi</li>
<li>Manage and install my required software with ease of maintenance</li>
</ul>
<p>There is lots of room to improve on these, but for a bare-minimum desktop, this ought to be enough.</p>
<h2>Base Installation</h2>
<p>I previously covered the <a href="https://tilde.town/~kzimmermann/updates/20210423_1107.html">installation of Alpine Linux</a> in a Virtual Machine in a video I posted on <a href="https://diode.zone/videos/watch/b4946a75-8c38-44e4-8109-00d1b32b157b">my Peertube channel</a> and it's a pretty straightforward process. The good news is that, as long as you intend to use Alpine as the only OS on your hard drive, pretty much the same steps for a VM apply.</p>
<p>First, burn the ISO to a USB stick, and boot it in the target machine. Although Alpine has quite a few choices of architectures, you can safely choose the x86 or x64 ISO and just go with it. They are all hybrid, and work with USB sticks.</p>
<p>When the system has booted, log in using the username <code>root</code> and no password. You're going to change the root password later in the installation process. The live system is pretty minimal, only the console and a few utilities, but the bread and butter is the installer script: <code>setup-alpine</code>. This script itself calls out to other <code>setup-*</code> scripts that configure specific aspects of the system, which you can see by typing <code>setup-</code> and hitting Tab. </p>
<p>To start the installation, run <code>setup-alpine</code> and answer the questions at the prompts. The questions are pretty straightforward for a seasoned user, but two steps require some attention here.</p>
<p>First, at some step of the script, the installer will list about 50 different Alpine mirrors, and ask you which one you want to use with <code>apk</code>, Alpine's package manager. You even have the option of testing which ones are closer/faster for you. </p>
<p>From my experience, however, some servers on this list are outdated or offline, and the installer will fail silently if you choose them, even though they may seem like a faster option. Therefore, I recommend simply choosing <code>1</code> here and going with the default Alpine CDN, which is your best bet for uptime. Most Alpine packages are not very big anyway, so you're probably going to be fine.</p>
<p>Second, when the installer asks you which "installation mode" you'd like to use, choose <code>sys</code>. Alpine can run from what's called "diskless" mode, where stuff is loaded into RAM upon boot, and everything runs within RAM until explicitly committed by the user back to the disk. This is similar to how other minimalist distros like <a href="http://www.puppylinux.org/">Puppy linux</a> work, and helps both speed up slow computers by cutting out the need to load data from the disk and reduce the wear in NAND flash storage used in embedded systems. </p>
<p>However, I find this unnecessarily complicated for a standard PC use case, where disk and memory are more robust. The "traditional" install mode, reading and writing normally to disk, is called "sys" mode, and its option is <code>sys</code>. Choose the label of your disk (example: <code>sda</code>) when prompted about "where to save configs."</p>
<p>At the end of everything, the installer will signal that it has completed, and at that point you may reboot. Congrats, Alpine is installed!</p>
<h2>Further tweaking</h2>
<p>Ok, installation was pretty easy, right? Upon reboot, though, the system looks pretty minimal, like a default Arch Linux install. Time to start some tweaking so we can meet the requirements above. For starters, I suggest doing two things:</p>
<p>First, log in as root and create a normal user for yourself. Unlike in Puppy, we're going to run everything unprivileged, but add the possibility of administrative work via <code>sudo</code>:</p>
<pre><code>adduser kzimmermann 
adduser kzimmermann wheel
</code></pre>
<p>Adding yourself to <code>wheel</code> will allow you to use sudo after its installation and configuration. But don't log out of root just yet!</p>
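<p>For completeness, that installation and configuration amounts to something like the following (as root; the <code>sudoers.d</code> drop-in assumes Alpine's stock <code>sudoers</code> includes that directory, which is the usual default - check yours with <code>visudo</code>):</p>
<pre><code>apk add sudo
# let members of wheel run any command via sudo
echo "%wheel ALL=(ALL) ALL" | tee /etc/sudoers.d/wheel
</code></pre>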
<p>Second, enable software sources other than Alpine's main repository. By default, apk ships with only the <code>main</code> repository enabled, which is the software developed and maintained by the official developers. However, the number of packages there is pretty small, and sometimes even misses basic things like the <code>mutt</code> email client. By enabling the community-contributed and testing repositories of Alpine, however, you have access to a much larger number of packages, about the same as the other mainstream distributions.</p>
<p>To enable these third-party sources, edit the <code>/etc/apk/repositories</code> file and uncomment or add the following lines:</p>
<pre><code># note, if you chose a different mirror, the prefix will be different:
http://dl-cdn.alpinelinux.org/alpine/v3.13/community
http://dl-cdn.alpinelinux.org/alpine/edge/testing
</code></pre>
<p>Now reload the repository cache:</p>
<pre><code>apk update
apk upgrade
</code></pre>
<p>And now you can install pretty much any package you need - including Xorg, which is the focus of this post. </p>
<p>This is also a good time to change the default font of the Linux console if it's too small for your high-resolution display. Alpine ships with considerably more fonts than, for example, Arch and FreeBSD, and you can look into <code>/usr/share/consolefonts/</code> to find one that suits you. Once you decide on one, run <code>setfont your_desired_font.psf.gz</code> to set it one-time, or edit <code>/etc/conf.d/consolefont</code> and add this line:</p>
<pre><code>consolefont="YourFontFile.psf.gz"
</code></pre>
<p>Then add this change permanently into your init sequence with the command <code>rc-update add consolefont boot</code>.</p>
<h2>Going graphical</h2>
<p>Once again, there's a convenient script that automates most of the required steps in installing Xorg: <code>setup-xorg-base</code>. Run it as root to install everything that is required for the graphical environment, so it's a little easier than in FreeBSD. Once installation is complete, log in as your own user and test the install with the command <code>startx</code>.</p>
<p>If the default X environment (<code>twm</code>) and terminals open, there are no problems with the setup, and you can exit this test environment. At this point, you may install your own window manager and start installing other software you wish. I chose my recent favorite Fluxbox, but whatever you choose, make sure to add its execution to your <code>~/.xinitrc</code>, so that you can start it with <code>startx</code> from the console.</p>
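<p>For reference, a minimal <code>~/.xinitrc</code> for this setup might look like the following (Fluxbox assumed here; substitute whatever window manager you installed):</p>
<pre><code># ~/.xinitrc - executed by startx
# exec replaces the shell with the WM, ending the X session when it exits
exec fluxbox
</code></pre>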
<h3>Power and session management</h3>
<p>In order to better manage desktop sessions (i.e. suspend and hibernate the machine), install the package <code>elogind</code>. This will enable you to suspend using the command <code>sudo loginctl suspend</code>, which <a href="https://tilde.town/~kzimmermann/updates/20210428_1255.html">I outlined in a post before</a>. And as some beautiful people correctly pointed out, <a href="https://wiki.alpinelinux.org/wiki/Xfce_Setup#Allowing_shut_down_and_reboot">it's possible to do it sudo-less by combining it with polkitd</a>. I have not tried to couple this with power manager tools to get a nice "close the lid to sleep" effect, but I presume it might be possible - e.g. in some DEs like Xfce.</p>
<p>Since I don't use a display manager, I simply go with <code>xscreensaver</code> to lock my session. You may use what your DE provides as well.</p>
<p>Brightness control so far is still a little clunky for me. Like in FreeBSD, I resort to the command line to change the brightness, but unlike there, there's no <code>intel_backlight</code> package apparently. Instead, what you can do to control the brightness is change the value of this file as root:</p>
<pre><code>echo $VALUE &gt; /sys/class/backlight/intel_backlight/brightness
</code></pre>
<p>There's a maximum value possible to it described in its neighboring file <code>max_brightness</code> so the $VALUE must be between 0 and that (here, for example, it's 975). To speed up the command, I created the following script, that also standardizes the possible values to 0 to 100:</p>
<pre><code>#!/bin/sh
#
# @backlight.sh
# This script must be run as the root user (su -c) in order to work
#
# Pass a number between 0 and 100 as $1 to set up brightness in your laptop
#

if [ "$USER" != "root" ]
then
    echo "You must run this as root (su -c) to set the brightness."
    echo "sudo doesn't work. Dunno why. Don't insist, it's fruitless..."
    exit 1
fi

if [ -z "$1" ]
then
    echo "USAGE: $(basename $0) BRIGHTNESS"
    echo "Where BRIGHTNESS is a value between 0 and 100"
    echo "This script must be run as the root user "
    exit 1
fi

MAX="$(cat /sys/class/backlight/intel_backlight/max_brightness)"

TMP="$(echo "scale=2; ($1 / 100) * $MAX" | bc )"

NEW="$(echo "$TMP" | awk -F "." '{print $1}')"

echo $NEW &gt; /sys/class/backlight/intel_backlight/brightness &amp;&amp; \
    echo "Brightness set to $NEW"
</code></pre>
<p>This allows me to set the brightness (as root) as <code>backlight.sh 25</code>, for example. Yet, there are some other issues with this, like the fact that calling this with sudo doesn't work either - <code>su -c</code> or nothing, my friend. </p>
<p>I'm thinking that somewhere along the setting up of a DE there might be a way around this - perhaps with some program running setuid to allow this call, or by adding yourself to the <code>video</code> group.</p>
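<p>For the record, one approach I've seen elsewhere (untested by me on Alpine) is a udev rule that hands the brightness file over to the <code>video</code> group at boot, so members can write to it without root. The rule file name below is hypothetical:</p>
<pre><code># /etc/udev/rules.d/90-backlight.rules (hypothetical file name)
ACTION=="add", SUBSYSTEM=="backlight", RUN+="/bin/chgrp video /sys/class/backlight/%k/brightness", RUN+="/bin/chmod g+w /sys/class/backlight/%k/brightness"
</code></pre>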
<h3>Managing WiFi</h3>
<p>As a laptop user, there may be times when I need to log into a different WiFi network, <a href="https://tilde.town/~kzimmermann/articles/laptop_buying_tips.html">despite my use case being almost always desktop-like</a>. I must be able to log into another unknown network should I bring my machine there. </p>
<p>Being able to easily change networks in a graphical way is nice to have, but not exactly a hard requirement in my book. The Alpine repository includes all tools for managing WiFi, including the more user-friendly NetworkManager with its applet for the system tray. You can get the full stack by running the following:</p>
<pre><code>apk add networkmanager wpa_supplicant dhcpcd
</code></pre>
<p>To have them all run at startup time for a true desktop-like session, add them to your OpenRC init:</p>
<pre><code>rc-update add wpa_supplicant
rc-update add dhcpcd
rc-update add networkmanager
</code></pre>
<p>Initially, however, I went in mostly raw and decided that from <a href="https://tilde.town/~kzimmermann/articles/living_in_linux_terminal.html">my previous experiences in the command-line only</a>, only using <code>wpa_supplicant</code> would be enough - and I wasn't wrong.</p>
<p>A little bit of an alphabet soup in the beginning, but the command is incredibly straightforward once you issue it a few times:</p>
<pre><code>wpa_supplicant -B -Dwext -i wlan0 -c /etc/wpa_supplicant/wpa_supplicant.conf &amp;&amp; dhclient wlan0
</code></pre>
<p>Here, <code>wpa_supplicant.conf</code> is a config file already generated by <code>wpa_passphrase</code>. <code>dhclient</code> will run afterwards if the authentication is successful, and then hopefully you'll get an IP that will be yours throughout your lease.</p>
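<p>If you haven't generated that config yet, <code>wpa_passphrase</code> does it in one line (the SSID and passphrase below are placeholders):</p>
<pre><code># run as root; the quotes keep spaces in the passphrase intact
wpa_passphrase 'MyNetwork' 'my secret passphrase' &gt; /etc/wpa_supplicant/wpa_supplicant.conf
</code></pre>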
<p><img alt="screenshot of my alpine desktop" src="https://tilde.town/~kzimmermann/images/alpine_desktop.png" /></p>
<p>And with that, all my bare requirements for a desktop distro have just been fulfilled!... but that's not to say it's without problems.</p>
<h2>Usability gotchas and compensation</h2>
<p>First of all: this installation does <strong>not</strong> support full disk encryption. At no step during installation are you prompted to set up any sort of encryption, and given the project's goals, it's clear that this is not officially supported. </p>
<p>As per the Alpine Wiki, however, it is possible to deviate a little from the install scripts to set up an encrypted volume on which to install the OS, resulting in a setup similar to other modern distros. I should try this as my next step.</p>
<p>The other big "gotcha" I had was that upon starting X, there were no <strong>fonts</strong> to be found in the system, and as a result no graphical application could show me any text except xterm (everything else was rendered as square glyphs). </p>
<p>You could fix this by searching (<code>apk search -v font</code>) and installing any font package, but additional configuration is needed to distinguish between serif, sans and monospace typefaces. If you don't, everything may fall back to a single font, and you might end up with, for example, a terminal emulator using a non-fixed-width serif font (ewww). Thankfully, there's an easy alternative pointed out by people on the fediverse: just install the preconfigured <code>ttf-dejavu</code> font package. Log out and back into the graphical session and everything is taken care of.</p>
<p>The final, but less worrying, gotcha is that unlike other distros, if you want to go graphical smoothly in Alpine <strong>you should install an icon set</strong>. Not for ricing reasons, just plain usability. The default icon set is very limited and does not cover icons for buttons and other graphical widgets that applications use, so if you use something like Gnumeric, you'll find that you can't distinguish which button is which.</p>
<p>You can go to <a href="https://gnome-look.org">GNOME Look</a> and download a full icon set and easily fix this. I like the Infinity theme, but any full set should cover these gaps (that is, sets that can cover application widgets as well as desktop icons).</p>
<h2>Conclusion</h2>
<p>Alpine Linux is a great distribution that offers speed, flexibility and security by default. Although not primarily aimed at a desktop audience, with a few tweaks it can be made into a fully functional desktop (or laptop) OS, with the base requirements working as well as on other mainstream distributions.</p>
<p>Although I wouldn't exactly recommend it to an absolute beginner, Alpine is a great distro for minimalists who would like to try something a little more challenging and command-line oriented. I've been daily-driving it for the last few weeks and am still discovering new things about it (all pretty nice so far), with no major warts or showstoppers yet.</p>
<p>If you are curious to try it out, here's my suggestion: <em>run it in a virtual machine!</em> The download image is just over 100MB in size, about the same as a Puppy Linux ISO, and you can run it comfortably with about 300MB of RAM to spare, even with X. </p>
<p>And if you like it, get an older machine to try it out hands on. I found that Alpine did what previous distros couldn't: it made a computer with 4GB of RAM fully usable as a desktop, including heavy usage of Firefox with video and dozens of tabs, plus other graphical programs. <a href="https://tilde.town/~kzimmermann/articles/digital_minimalism.html">Software bloat</a> be damned, Alpine is the real deal here.</p>
<hr />
<p>Have you ever tried Alpine on a Desktop? How was your experience? Let me know on <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
<p>(I would also highly recommend that you check the <a href="https://wiki.alpinelinux.org/wiki/Setting_up_a_laptop">Alpine Wiki entry on setting up a laptop</a> if you venture into this adventure, as they have a very detailed writeup.)</p>
<hr />
<p>This post is number #16 of my <a href="https://100daystooffload.com/">#100DaysToOffload</a> project. Follow my progress through <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
            </div>
        </content>
    </entry>

    <entry>
        <title>List of alternative frontends to popular websites and services</title>
        <link href="https://tilde.town/~kzimmermann/articles/alternative-frontends.html" />
        <updated>2021-01-06T08:58:37.505377Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>List of alternative frontends to popular web sites and services</h1>
<p>I don't know if anyone else has made this before, so I just kind of went ahead and did it: <a href="https://notabug.org/kzimmermann/alternative-frontends">a list of alternative frontends to popular websites in Notabug.org</a></p>
<p>Free and federated websites and services are a great idea, but let's face it: most of them still lack the trove of content that closed data silos like Google and Facebook own. However, there are still ways you can make use of those silos without completely compromising your privacy: use an <strong>alternative frontend</strong>. </p>
<p>These are 3rd-party websites that either scrape or use the main service's API to receive and present the same data you'd find there, minus all the tracking and usually in a much more lightweight manner. The very best of these allow you to browse the website almost completely anonymously.</p>
<p>The trade-off is usually that the service becomes read-only (after all, posting would mean identifying yourself). If this is not an issue (most of the time it isn't for me), then the experience is smooth and straightforward.</p>
<p>This list presents as many alternative frontends as I know of, with a browserless alternative (desktop or command-line program) whenever possible.</p>
<h2>Search engines</h2>
<h4>Searx</h4>
<p>A meta-search engine that searches other search engines and returns results without you having to send your data to them. Searx can search all major search engines such as Google, Duckduckgo, Yahoo, Bing and Yandex, as well as files, images and provide instant answers a-la Google, while <em>not</em> requiring Javascript to work.</p>
<p>The project is under active development and there is a vast ecosystem of instances you can choose from, including hidden Tor services.</p>
<ul>
<li><a href="https://searx.space/">Publicly-listed instances</a> (instance matchmaker)</li>
<li><a href="https://github.com/searx/searx">Project home and source code</a></li>
</ul>
<h4>Browserless alternative</h4>
<p>Searx also offers webfeeds for results. You can have them in CSV, RSS or JSON formats for easy parsing or reading. To obtain them, run <code>wget 'https://your_instance/search?q=your_query&amp;format=json'</code> (quoted so the shell doesn't interpret the <code>&amp;</code>).</p>
<p>From that point on, it's just a matter of parsing and presenting the content for easy viewing in the terminal.</p>
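<p>As a minimal sketch of such parsing (assuming the <code>jq</code> tool is installed, and that the instance returns a top-level <code>results</code> array with <code>title</code> and <code>url</code> fields - check your instance's actual output first):</p>
<pre><code># fetch JSON results and print one "title - url" line per hit
wget -qO- 'https://your_instance/search?q=your_query&amp;format=json' \
    | jq -r '.results[] | "\(.title) - \(.url)"'
</code></pre>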
<h2>Google-related</h2>
<h3>Google search</h3>
<p>See <a href="#searx">Searx</a> above.</p>
<h3>YouTube</h3>
<h4>Invidious</h4>
<p>A frontend to watching YouTube in the browser without any of the tracking or ads in it, and that does <em>not</em> require Javascript to work. </p>
<p>You can watch any YouTube video with it by simply substituting the <code>youtube.com</code> part of the URL with an Invidious instance's address, complete with playlist playback support. You may additionally create an account on an Invidious instance to save favorites and playlists and revisit your watch history, with no information required from you other than a username/password.</p>
<p>You can choose to only stream the audio (great for listening to music!) by adding <code>&amp;listen=1</code> to the end of any Invidious URL. Some instances may also allow explicit downloading of video, but otherwise you can always right-click and save the video or audio through your browser.</p>
<p>The downside is that some livestreams may not work correctly, but fallback links to YouTube are provided in every video. Also, due to the limited number of instances, some may go offline unpredictably.</p>
<p>The project is under active development, but there are not as many public instances available to choose from.</p>
<ul>
<li><a href="https://invidio.us/">Public Instance list</a></li>
<li><a href="https://github.com/iv-org/invidious">Source repository</a></li>
</ul>
<h4>Browserless alternatives</h4>
<p>The hacker's way: install <a href="https://yt-dl.org">youtube-dl</a> and <code>mpv</code>, then run <code>mpv --keep-open https://youtube.com/watch?v=YOUR_VIDEO</code>. With some additional configuration of the <code>mpv</code> command, this will save you a lot of resources. You can also straight up download the videos with <code>youtube-dl</code>.</p>
<p>Programs such as <a href="https://flavio.tordini.org/minitube">Minitube</a> also offer a graphical desktop client for viewing YouTube.</p>
<h2>Facebook-related</h2>
<h3>Instagram</h3>
<h4>Bibliogram</h4>
<p>Bibliogram is a Free Software frontend for viewing Instagram without needing to register for an account (required even for public profiles on the official website). Unlike hundreds of other unofficial "viewers," Bibliogram stands out in that it does <em>not</em> need Javascript to work (it's only used for UI improvements). To use it, simply switch the <code>instagram.com</code> bit of the URL with any of the instances' addresses.</p>
<p>The project seems to be under constant development, but there still seem to be some usability bugs, like trying to view further pages of an Instagram profile (it seems to fail from page 2 onward). There are also only a limited number of public instances around.</p>
<ul>
<li><a href="https://sr.ht/~cadence/bibliogram/">Project website</a></li>
<li><a href="https://git.sr.ht/~cadence/bibliogram-docs/tree/master/docs/Instances.md">Public instance list</a></li>
</ul>
<h2>Twitter</h2>
<h3>Nitter</h3>
<p>Nitter is a lightweight, trackerless frontend for Twitter that does not need Javascript to work. To use it, simply substitute <code>twitter.com</code> with any instance's address in the URL bar, or browse straight to the instance and look for any user or tag.</p>
<p>Nitter presents quite a diverse ecosystem of instances (including hidden services) and is also incredibly configurable from the frontend. I guess the only downside I can point out is that you can't write anything to it (like any other frontend here).</p>
<ul>
<li><a href="https://github.com/zedeus/nitter">Project website and source</a></li>
<li><a href="https://github.com/zedeus/nitter/wiki/Instances">Instance list</a></li>
</ul>
<h4>Browserless alternatives</h4>
<p>Many programs can use the Twitter API to fetch and post content from your desktop or even the command-line. I've used in the past <a href="https://github.com/identicurse/IdentiCurse">identicurse</a>, though it seems to be largely abandoned.</p>
<h2>Reddit</h2>
<h4>Teddit</h4>
<p>Teddit is a lightweight frontend that closely emulates the "old" interface of Reddit, and that does not require Javascript to work. You can browse subreddits, search for posts and enjoy a peaceful experience without any ads while sticking to a sane interface design.</p>
<p>Unfortunately, there doesn't seem to be a way to save favorite subreddits or do persistent customization besides choosing a theme yet, but it's still a great way to browse through Reddit anonymously. The instance ecosystem is still very small as of January 2021, so instance blocking can still be an issue.</p>
<ul>
<li><a href="https://codeberg.org/teddit/teddit">Project website and instance list</a></li>
</ul>
<h4>Xeddit</h4>
<p>An aggregator page that returns the front page results of Reddit as a proxy, but does not attempt to proxy any of the posts themselves (they open in old.reddit.com).</p>
<p>I'm not sure why anyone would choose to use this over the much more complete <a href="#teddit">Teddit</a>, but it's out there if you need it. The project seems kind of dead (latest commits are from 2018).</p>
<ul>
<li><a href="https://gitlab.com/xeddit/xeddit/">Project website</a></li>
</ul>
<h2>Generic web proxies</h2>
<p>If you don't wish to automatically be tracked by a webpage you don't know or trust but don't have Tor or VPN available, a web proxy might be able to help you - slightly.</p>
<p>Some <a href="#searx">searx</a> instances offer a proxy service called <a href="https://github.com/asciimoo/morty">Morty</a> that will return a proxied version of any page from the search results without javascript, but still showing CSS and images. Only a few instances support this, though.</p>
<p>You can also indirectly use the <a href="https://archive.org">Internet Archive</a> as a proxy by prepending <code>https://web.archive.org/web/</code> to any address you wish to visit. This is intended for archival purposes, but can be used to proxy the connection between you and the site. Be careful, though: the Internet Archive is not a CDN and this is not its intended primary use. Be responsible with the requests!</p>
<h2>Don't forget federated services!</h2>
<p>The fact that these frontends help out with privacy in no way means we should forget about the other <em>huge</em> pillar in the fight for Freedom: <strong>producing Free Content</strong>.</p>
<p>Having private or anonymous access to content is good, but it's much, much better to port or create content that is freely available across multiple servers on the internet without the need to jump through any hoops. That's why we need to be actively producing and posting as much new content as possible to federated services like <a href="https://joinmastodon.org/">Mastodon</a>/<a href="https://pleroma.social/">Pleroma</a>, <a href="https://diasporafoundation.org/">Diaspora</a> and <a href="https://joinpeertube.org/">Peertube</a>.</p>
<p>This is a two-front fight: active and passive approaches are equally needed!</p>
<h2>Contribute and help out!</h2>
<p>Got any other frontend that I have not covered here? Open an issue at my <a href="https://notabug.org/kzimmermann/alternative-frontends">Notabug.org repo</a> and let me know! I'd love to hear more.</p>
<p>This fight is not over, and it just will keep getting better.</p>
            </div>
        </content>
    </entry>

    <entry>
        <title>A beginner's guide to the Arch User Repository</title>
        <link href="https://tilde.town/~kzimmermann/articles/aur_made_easy.html" />
        <updated>2021-02-04T05:35:09.244345Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>A beginner's guide to the Arch User Repository</h1>
<p>Ever since I <a href="https://fosstodon.org/@kzimmermann/105354896572259355">acquired my used Thinkpad</a>, I wanted to start fiddling with a different Linux distribution. Something to expand my comfort zone after having lived with Debian for a good six or so years, and also to move away from Debian's rather conservative approach toward something more radical.</p>
<p>The answer came in <a href="https://archlinux.org">Arch Linux</a>, one of the first Linux distributions to implement a "rolling release" model that challenged the previous snapshot-based models of the traditional distributions. It's also a very popular distribution with power users who want to customize exactly how their systems are implemented. And switching from Debian Stable, probably the most conservative of all distributions, all the way to the Arch side of the spectrum seemed very radical.</p>
<p>The reality was actually much less dramatic, partly because in more recent years, a fork of Arch Linux called <a href="https://artixlinux.org">Artix</a> was released for users who wanted the Arch experience without the <code>systemd</code> init system. Artix also has the added benefit of being slightly better configured out of the box, providing a minimal usable system with a Desktop Environment after a fresh install (similar to how Manjaro is deployed). And Artix is what I eventually chose for this new endeavor.</p>
<p>Though using an Arch-based distro was pretty straightforward for a seasoned user, there was one striking difference from the Debian-like environment: the <a href="https://aur.archlinux.org">Arch User Repository</a>. This is because Arch Linux keeps the official project packages completely separate from the software contributed by users themselves - hence the name Arch User Repository (AUR).</p>
<p>Unlike other distros, however, <em>Arch does not even include them in the reach of your package manager</em>. In Debian or Ubuntu, you might edit <code>/etc/apt/sources.list</code> to add nonfree or "universe" repositories or PPAs, and after a cache update, all these packages are automatically manageable from apt. Not so with Arch - the AUR is off-limits to <code>pacman</code>, and you must install software from there in a more "manual" way, which confuses many beginners used to a more automatic method.</p>
<p>It took me some time to finally grok the way installing from the AUR works, not because it's complicated, but rather because I chose to avoid the unknown until I had no choice. And much like the rest of my experience with Arch, it also turned out to be quite easy, even without helpers (yay, yaourt, paru, etc). That's why I'm summarizing the important findings in this post, so that other beginners need not fear the AUR and can quickly learn how to install from there. </p>
<p>Read on!</p>
<h2>How AUR works</h2>
<p>The single largest difference between installing stuff from AUR versus contribution repositories in other distros is this: AUR <em>does not</em> store binary precompiled packages in a server somewhere - you have to <em>build them by yourself</em>.</p>
<p>This is strikingly different from, for example, what you see in Debian. There, you download binaries from the repository server, verify the signature to see if it matches the maintainer's, and if all looks good, copy the binaries into their respective places. Finito. With the AUR, that process is almost reversed, following a build process that resembles that of source-based distributions.</p>
<p>If you look at it from a purely quantitative perspective, the AUR is a much larger repository than the official Arch packages, and this explains why so many times when you search for something via <code>pacman -Ss package</code>, you'll find nothing, with the answer on the wiki being "you can install it from the AUR." This is especially true when it comes to games: Arch has a myriad of them, arguably larger than any other distro, but few make it to the official repos. Once again, AUR is the answer.</p>
<p>The overall install process has four major steps that thankfully can be mostly automated using Arch's tools:</p>
<ol>
<li>Clone the AUR git repository of your desired package.</li>
<li>Follow the build instructions of the git repo's PKGBUILD file.</li>
<li>Install additional dependencies via pacman.</li>
<li>Build from source and install the package via pacman.</li>
</ol>
<p>These steps might sound familiar to you if you've ever compiled a package from the source code on Linux (see my <a href="https://diode.zone/videos/watch/22a566fa-464b-4ddd-9e71-38340208bf14">video on SC-IM</a> for an example), and that's because it's exactly what you're doing here. </p>
<p>However, unlike the traditional way of building from source, using the AUR allows you to automate most of that process, which greatly facilitates it. The build also makes use of a "feigned" root environment, which makes it unnecessary to run as the root user (a-la <code>sudo make install</code>) but requires you to install the <code>fakeroot</code> package first:</p>
<pre><code>sudo pacman -S fakeroot
</code></pre>
<p>Let's look at the process more closely now:</p>
<h2>Installing from AUR</h2>
<p>Your starting point to install anything from the AUR is to search for the package you want at the <a href="https://aur.archlinux.org">AUR database</a>. You have to use your browser for this, as pacman does not search the AUR with the <code>-Ss</code> flag. </p>
<p>As you do so, take note of the status and other health indicators of the package; packages in the AUR are not part of the official distribution, and therefore do not get screened for quality or security. Is the package orphaned? Abandoned? Gets regular updates? Do the comments state build errors or difficulties? The more popular packages usually do not suffer from these problems, but it's good to check regardless.</p>
<p>Once you've decided on the package to install, find its Git repository link and clone it in the terminal. It's presented in the <code>https://aur.archlinux.org/package-name.git</code> format.</p>
<pre><code>git clone https://aur.archlinux.org/package-name.git
cd package-name/
</code></pre>

<p>You'll find that usually the only thing inside the cloned directory is a script named PKGBUILD. These are the "instructions" for all the steps required to build that package: installing dependencies, downloading external assets, compiler flags, and so on. Think of it as a Makefile on steroids, which is great since the next step is to simply run the following command to build and install the package:</p>
<pre><code>makepkg -si
</code></pre>
<p>And that's pretty much it. <code>makepkg</code> does its magic, handling all the dependencies and compilation for you (it might ask you to install things via pacman along the way) and using fakeroot to abstract the parts that would require root permissions. If you find errors along the way (e.g. with the configure script), read them, try to understand which dependencies are missing, and fill in those gaps with pacman. Usually after this the build is pretty smooth.</p>
<p>Be warned, though, that since you're compiling everything, it will take much more time than pacman. Depending on the package size and your CPU, the build could take anywhere from a few minutes to about an hour. Gentoo users might be used to it, but most others will be quite surprised.</p>
<p>Once the package is built, pacman will offer to install it neatly alongside your other system binaries. Enter your password and that's pretty much it - you have installed your first AUR package!</p>
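<p>One more tip: since pacman doesn't track AUR packages for you, updating one later is (as far as I can tell) the same manual dance over the repo you already cloned. With <code>package-name</code> again standing in for your package:</p>
<pre><code>cd package-name/   # the directory cloned earlier
git pull           # fetch the updated PKGBUILD
makepkg -si        # rebuild and reinstall via pacman
</code></pre>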
<h2>Using helpers</h2>
<p>As seen in this rather simple guide, using the AUR is not at all a hard task once you know how it works behind the scenes. However, it's still not as convenient as having a package manager to search for, install and update packages with short, simple commands. Could there be a way to use the AUR in a similar manner? Surprisingly, there are a few, called <strong>AUR helpers</strong>.</p>
<p>A helper program takes the same (or similar) syntax as pacman and abstracts away the work described above to provide an experience similar to using pacman itself. There are still a few caveats that helpers can't cover completely, and the official way to install from the AUR, according to the Arch Linux project, remains the manual process described above. But if you take these into consideration, it's still a very convenient way to use the AUR.</p>
<p>For the longest time, a program called <code>yaourt</code> was the go-to helper in Arch Linux. It was, however, deprecated due to lack of maintenance sometime in the mid 2010s, and a spiritual successor called <code>yay</code> became the next recommended helper. However, <code>yay</code> itself became deprecated by the end of 2019 due to lack of maintenance, and a few other alternatives came along (I hear that <code>paru</code> is the new successor to yay). </p>
<p>This anecdote illustrates one important limitation of helpers: <em>they are as useful as they are maintained.</em> Because they are not an official part of Arch, the project does not maintain them, and they rely on volunteers to keep evolving together with Arch itself. Using a slightly old helper might not be a problem in the short term, but it might stop working as it falls behind the rolling releases. Keep this in mind as you choose which helper to use.</p>
<p>As they're not part of Arch Linux, helpers must (somewhat ironically) also be built and installed from the AUR. As we saw before, though, that's not a problem: just follow the same process to build the package manually with git, and once you're done, the helper will be available to you alongside pacman.</p>
<p>A full listing <a href="https://wiki.archlinux.org/index.php/AUR_helpers">comparing all AUR helper programs</a> is available at the Arch Wiki.</p>
<h2>Conclusion</h2>
<p>The Arch User Repository is a real treasure trove of community-maintained software that is usually very up-to-date with upstream releases, but requires a slightly different process to install from. Thankfully, it's not complicated, but might require a little more time since it's always built from source.</p>
<p>AUR is not without its warts either: less popular packages might fall behind in terms of quality or even security, and you should always keep this in mind as you install software from it. You can use helper programs (yay, paru, etc) to ease and speed up the build process when installing from the AUR, but keep in mind their limitations and that the official way to install is still the manual one. </p>
<p>If you reckon with all of this, the AUR is a terrific tool and a source of a myriad of packages, some of which are not even available in other distributions.</p>
<hr />
<p>How do you think the AUR compares to other distributions' external repositories? Do you have a preferred helper program that you use? Which one? Let me know on <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
<hr />
<p>This is post #2 on my <a href="https://100daystooffload.com/">#100DaysToOffload</a> challenge.</p>
            </div>
        </content>
    </entry>

    <entry>
        <title>What distro do I recommend for the Raspberry Pi in 2022?</title>
        <link href="https://tilde.town/~kzimmermann/articles/best_distros_raspberrypi.html" />
        <updated>2022-08-30T15:36:43.217997Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>What distro do I recommend for the Raspberry Pi in 2022?</h1>
<p>I'll just start off by saying that this sort of title for an essay does reek of clickbait, so I'm sort of sorry for choosing one like this. Also, "listicles" of this kind pop up everywhere in the Linux community with their "Top 10 Distros of 20XX" or "Top 5 distros for XYZ," and I really don't mean to be the judge of something that ultimately comes down to personal preference. </p>
<p>So why am I writing this article, you ask? Well, after about two years of owning a Raspberry Pi, I'm doing this to record some findings. Hopefully the insights I share here can be useful not only for a beginner looking for their first distro, but also for developers looking to port their projects to the Pi.</p>
<p>Alright, enough suspense. The operating system that I personally recommend for the Raspberry Pi as of 2022 is...</p>
<p><strong>Debian Linux.</strong></p>
<p>This is the part where a lot of people start booing and hollering about the decision. Why, oh, why did you choose Debian? You obviously haven't tried distro X. Distro Y is just so much more polished. Etc.</p>
<p>Thankfully, this is also the part where I lay down my reasons to justify it.</p>
<h2>Why?</h2>
<p>For the longest time, I would summarize my experience of Debian in a single sentence: <em>Debian is Linux minus the bullshit</em>. Profanity aside, the point was this: if you want to get up and running quickly with a Linux distribution, Debian is a great choice. You get a preinstalled desktop environment, a decent choice of software, and a very straightforward installer. It might not be as minimalistic and bottom-up as Arch and its derivatives, but you get something very complete, and not as bloated and sometimes <a href="https://www.eff.org/deeplinks/2012/10/privacy-ubuntu-1210-amazon-ads-and-data-leaks">borderline malicious</a> as Ubuntu. Thus, I chose to stay with Debian for most of my Linux experience.</p>
<p>As time passed, you might say that I "graduated" out of the comfort of Debian and into something a little more power-user-like, building things a little closer to scratch with Arch, Artix and, more recently, <a href="https://tilde.town/~kzimmermann/articles/alpine_linux_desktop.html">Alpine</a>, but the basic principle of "Linux minus the bullshit" remained. Whenever I need to spin up a quick and dirty machine from zero (like those <a href="https://tilde.town/~kzimmermann/articles/dumpster_diving_hacker.html">from the trash</a>, for example), burning the Debian ISO and installing it onto the drive is a no-brainer for me.</p>
<p>And so, the Pi. Very well: if installing an OS on a PC is as straightforward as burning a disk image onto flash media, then booting and installing from there, shouldn't the same apply to the Raspberry Pi? Here's where the answer differs - and sometimes a lot - depending on the distribution.</p>
<p>Debian correctly follows the trend, and all you need to do is <a href="https://raspi.debian.net/tested-images">download the right image</a> for your Pi, then burn it to an SD card and boot it. You'll get to the root user prompt straight away and then, well... you can take it from there. It's minimalistic, sure, but everything you need is an <code>apt-get install</code> away. Wanna work on the command line only? <code>apt-get install tmux vim (...)</code>. Need a graphical environment? <code>apt-get install xorg &lt;your favorite wm&gt;</code>. Etc. Get creative.</p>
<p><em>The install process is quick and simple, so that you don't have to waste time there.</em></p>
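<p>The whole process can be sketched roughly as follows (the image filename and the <code>/dev/sdX</code> device are placeholders - double-check the device name, as <code>dd</code> will happily overwrite the wrong disk):</p>
<pre><code># Write the downloaded Debian image to the SD card (run as root)
xzcat raspi_4_image.img.xz | dd of=/dev/sdX bs=4M status=progress

# Boot the Pi, log in as root at the prompt, then pull in what you need:
apt-get update
apt-get install tmux vim        # command-line essentials
apt-get install xorg openbox    # or a minimal graphical environment
</code></pre>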
<h2>But why not ...?</h2>
<p>Other, apparently more power-user-oriented distros put a large hurdle in the installation process alone. These include requirements to manually partition the install medium, extract tarballs and do additional overlay configuration before even getting the OS to boot!</p>
<p><a href="https://archlinuxarm.org/platforms/armv8/broadcom/raspberry-pi-4">Arch's Pi install process</a>, for example, is about as complex as the classic manually-chrooted Arch install - and that's only to prep the medium to boot. Caveats exist when you're booting from USB instead of an SD card, or when the platform is the ARMv8 of the Raspberry Pi 4 instead of ARMv7. Even the controversial Manjaro, which was supposed to be easy thanks to a burnable image file, has presented some problems for me.</p>
<p>Alpine, which I love very much for desktop use, is also shamefully complex in its <a href="https://wiki.alpinelinux.org/wiki/Raspberry_Pi">medium prepping / partitioning requirements</a> (note how the wiki references several other guides in its instructions), in bitter contrast to the joy and simplicity of its <code>setup-alpine</code> installer command on the desktop. Oh, and <a href="https://wiki.alpinelinux.org/wiki/Classic_install_or_sys_mode_on_Raspberry_Pi">sysmoding</a> it is a whole new can of worms, good luck with that.</p>
<p>On the other hand, the beginner-friendly defaults of Raspbian or Ubuntu aren't distros that I can recommend wholeheartedly. Though it is straightforward to download and install, Raspbian pulled a cunning move a year or so ago when it <a href="https://arstechnica.com/gadgets/2021/02/raspberry-pi-os-added-a-microsoft-repo-no-its-not-an-evil-secret/">silently sneaked Microsoft repo URLs</a> into the repos, which would persist through updates even if users opted out. Too much "we know what's better for you" for me. Raspbian also ships odd versions of some applications (e.g. pcmanfm is watered down and has no tab support), and the official versions are not available from the repos.</p>
<p>I understand that the Raspberry Pi doesn't boot or work exactly like a traditional computer, architectural differences and all, but the truth is that if some OSes can make their distributions download-burn-boot easy, why can't the others? At least there should be a ready-made image in the form of a standard install, and the power users could get their hands dirty if they want to. I need to spend time configuring my system, not getting it to boot.</p>
<h2>Conclusion</h2>
<p>By offering the best balance between minimalism and the ease of use and installation, Debian Linux takes the cake as the distro I would recommend on the Raspberry Pi in 2022. I have not and do not claim to have tested every distribution available for the Pi, and there are indeed other distros that I consider fair runner-ups for this platform:</p>
<ul>
<li><a href="https://download.freebsd.org/releases/arm64/aarch64/ISO-IMAGES/13.1/FreeBSD-13.1-RELEASE-arm64-aarch64-RPI.img.xz">FreeBSD</a> is rock solid on the Pi and also straightforward to install (just extract the .img file). Downsides include not supporting some of the Pi's hardware - notoriously, no WiFi or sound support. Power management (a little <a href="https://tilde.town/~kzimmermann/articles/30_days_on_a_pi.html">ballsy under Linux</a>) is straightforward (add <code>powerd_enable="YES"</code> to <code>/etc/rc.conf</code>).</li>
<li><a href="https://tilde.town/~kzimmermann/articles/rediscovering_puppy_linux_raspup.html">Puppy Linux</a>, rebranded as the <a href="http://raspup.eezy.xyz/">Raspup project</a> for the Pi, is also pretty neat for those with limited storage or older versions of the Pi. It follows the same user-friendly UX principles of mainline Puppy, and all hardware is detected accordingly. Downsides include the fact that it's somewhat outdated (still based on Debian 10), and the Puppy model (root by default, session not persisted unless configured, etc.) can raise an eyebrow or two.</li>
<li>Ubuntu is not as bloated as the traditional desktop version usually is, most likely because it's advertised as a "server edition." This in no way prevents you from installing things like a DE if you like, but if you're going to go with a CLI-only start, why not just go with Debian instead?</li>
</ul>
<p>In some ways, I might just be "noobing," in the sense that I couldn't get some OSes to boot out of my own inexperience, so I will keep an eye out to try them again in the future. There might also be other brilliant OSes that I just didn't get to try. So this post isn't definitive, and next year onwards I might redo it. One thing that is definitely on my mind is <a href="https://www.openbsd.org/arm64.html">OpenBSD</a>, which I plan to try even on the PC, so there's that.</p>
<hr />
<p>Do you have a Raspberry Pi that you use regularly? What operating system do you recommend for it? Let me know on <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
<hr />
<p>This post is number #36 of my <a href="https://100daystooffload.com/">#100DaysToOffload</a> project. Follow my progress through <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
            </div>
        </content>
    </entry>

    <entry>
        <title>Bring Back Blogs! And sprinkling out a few New Year's Resolutions</title>
        <link href="https://tilde.town/~kzimmermann/articles/bringbackblogs-new-years-res.html" />
        <updated>2023-01-10T22:26:10.504515Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>Bring Back Blogs! And sprinkling out a few New Year's Resolutions</h1>
<p>A late Happy New Year 2023 to every one of my readers and subscribers of my RSS feed! </p>
<p>I know this might be a little late, but I signed up for a challenge online intended to bring back the art of blogging into the Internet - taking away power from the big social networks and bringing back that decentralization that empowers anyone to have a presence and a home online. That's what the <a href="https://bringback.blog/">Bring Back Blogs!</a> movement is proposing, and what better kick-in-the-butt to get back to writing is there?</p>
<p>So this is my <strong>comeback to blogging:</strong> starting 2023 in a big blogging way this month. The initial conditions for participation are: post at least three times in the month of January, and distribute it via RSS. Everything else is optional, including additional sharing via social networks, but I'm adding a personal third condition: posts will have to be <em>at least 1000 words long</em>. Which shouldn't be much of a problem, frankly, given my relatively verbose <a href="https://tilde.town/~kzimmermann/articles/">posting history</a> here.</p>
<p>So what to write beyond a rather "meta" introduction? Well, how about some late tech-centric New Year's resolutions? I'd say there's still time for those. So without further ado, some of my goals and ambitions on the tech side of things for 2023:</p>
<h2>Reduce phone dependency</h2>
<p>We are quick to point at smokers or drug users and claim that they're doing something that's bad for their health and that they shouldn't be doing in the first place. And then we turn around and pick up our surveillance-laden smartphones. If you do this, I'm sorry, but you have no moral high ground to point from - and this includes me.</p>
<p>Putting my money where my mouth is, I will start reducing phone dependency in a perhaps slow, but definite manner. Starting this month, I will force myself to reduce my phone usage in several ways, either preventing it altogether or making it harder. Some strategies include:</p>
<ul>
<li>Putting in place communications platforms with my family that I can access from my work PC (could be as easy as using a web-based IRC client)</li>
<li>Establishing one or two phone-free days a week, on which my phone will be turned off or left at home (for example, when all I'm going to do is commute to the office and back, or work remotely from home all day).</li>
</ul>
<p>In principle, anything that I can do on my phone is doable via my computers (work or personal), but there is one catch, and it's constantly growing: with many platforms moving to MFA exclusively via phone apps (ahem, my bank), ditching the phone completely won't always be possible, depending on my business for the day. Still, I could move such business to my phone-allowed days.</p>
<h2>Open more ports in my home server</h2>
<p>After <a href="https://tilde.town/~kzimmermann/updates/20221001_1403.html">self-hosting</a> this very blog out of Tilde.town, there are resources to spare in my server (a Raspberry Pi!) that I could probably put to use on something. The next question is: what shall it be?</p>
<p>While I don't have concrete answers for the moment, here's the end result of the resolution: I want more than just ports 80 and 443 open and forwarded by the end of the year. That is: more services from that machine should be made available over the internet over the course of this year.</p>
<p>One thing I've thought of so far is hosting a tiny IRC server to, among other things, facilitate the phoneless campaign described above: if I could set up a private IRC server that only my family can access, I wouldn't even need to enable end-to-end encryption to keep our chats safe. And perhaps they could like it so much that they'd <a href="https://tilde.town/~kzimmermann/articles/whatsapp_sucks.html">ditch WhatsApp</a> altogether! </p>
<p>Hey, one can dream, right?</p>
<h2>Blog more frequently, and shorter</h2>
<p>Perhaps the largest problem with my blogging campaign is that by attempting to make quality posts with interesting content, I end up writing way too much. Like this very post, which was supposed to be a short description of my resolutions! This often tires me out, because I lose interest in the course of writing long essays, and they end up sitting unfinished in my collection afterwards.</p>
<p>Solution: write less, but more often. If I can push out more essays (still of a decent length to qualify for that definition), but shorter ones, I think I can keep up the pace more easily, and I won't tire of writing. This could also keep my feed interesting for readers who have been missing frequent updates.</p>
<p>I could also try to substitute my toots on Mastodon with links to the <code>/updates</code> quick posts like <a href="https://tilde.town/~kzimmermann/updates/20210331_0708.html">I used to do back in 2021</a>.</p>
<h2>Game less, record more</h2>
<p>I love free software gaming, but gaming itself can also turn into a destructive addiction - and I have found myself quite often mindlessly wasting time with it until late in the evening these last few weeks. As with smoking, I don't want to engage in a destructive habit under the guise that it's not as bad as the others, and so mindless gaming will have to go.</p>
<p>That is not to say <em>all</em> gaming, though: I can find at least one outlet for which gaming can be justified as acceptable, and maybe even productive! That outlet is making videos for <a href="https://diode.zone/c/kzimmermann_podcast/">my Peertube channel</a>. Now I know, there are probably other more constructive things to review, like alternative browsers, file managers, <a href="https://tilde.town/~kzimmermann/articles/living_in_linux_terminal.html">command-line utilities</a> or even whole new distros. That's true, but for now I'll say this: it'll be done when it'll be done.</p>
<p>As a practical thing that I can make into a resolution, I'll trade in mindless gaming time for focused, content-rich video production time - with gaming as the backdrop. Because hey, in the end what better way is there to pass a serious, privacy or security-related message than wrapping it around a nice showcasing of Free Software gameplay?</p>
<hr />
<p>So there it is, the first post of 2023, and hopefully one that will get me back into my blogging habits! What are your tech-side resolutions for this year? Let me know on <a href="https://fosstodon.org/@kzimmermann">Mastodon</a>!</p>
<p>Happy 2023 again everyone, let's make this a great one!</p>
<hr />
<p>This post is number #40 of my <a href="https://100daystooffload.com/">#100DaysToOffload</a> project. Follow my progress through <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
            </div>
        </content>
    </entry>

    <entry>
        <title>"We caught you 'Pirating' Linux!" is more than a bad joke.</title>
        <link href="https://tilde.town/~kzimmermann/articles/caught_pirating_linux.html" />
        <updated>2021-05-26T08:15:29.254110Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>"We caught you 'Pirating' Linux!" is more than a bad joke.</h1>
<p>Yup, you read that right. It looks like a Comcast subscriber from Reddit has <a href="https://teddit.net/r/linux/comments/nkztyv/copyright_notice_from_isp_for_pirating_linux_is/">received a warning from his ISP</a> for doing what's perhaps the impossible: <em>pirating Ubuntu 20.04</em>.</p>
<p><img alt="DMCA copyright infringement notice from Comcast submitted to a user who torrented Ubuntu 20.04" src="https://teddit.net/pics/w:null_jzf5jegdyb171.png" /></p>
<p>There are so many things wrong with this situation that it's almost hilarious - except that for the dude the consequences are very real and not a laughing matter at all. Let's see:</p>
<ul>
<li>How the hell is Ubuntu Linux a work to which the copyright "owner" (note the singular!) does not <a href="https://ubuntu.com/licensing">authorize copying and redistribution?</a></li>
<li>Who is this "copyright owner" (again, singular!) who supposedly "flagged" this unauthorized sharing? Because unless it was actually Canonical Inc (and even then, with respect to what? Trademarked assets like logos?), the statement is simply false, and perhaps even an illegal act of impersonating someone else!</li>
<li>Has Comcast made it specifically against their ToS to use the BitTorrent protocol? Because if not, then claiming that BitTorrent itself is illegal is as legitimate as claiming that HTTP is indecent because it allows people to stream porn - i.e. not a legitimate reason at all. </li>
</ul>
<figure>
    <img src="/~kzimmermann/images/canonical_license_ubuntu.png" alt="a screenshot of canonical's License of Ubuntu, showing clearly that you can redistribute it." />
    <figcaption><a href="https://ubuntu.com/licensing">Ubuntu's own license</a> states that all the software composing it must be redistributable without royalty payment.</figcaption>
</figure>

<p>At this point, there are also some counterpoints that need to be weighed: for example, some components in Ubuntu have not-so-clear licensing terms, and Canonical itself holds trademark ownership of some things like the Ubuntu name and logo, but I would be <em>extremely</em> surprised and disgusted if they themselves were the ones to raise this ludicrous objection.</p>
<p>A larger point of concern in this case is whether the ISP is banning the use of the <em>BitTorrent protocol</em> itself. That would qualify as some sort of censorship in my book, and would be the root for a much larger cause of concern for projects that have limited hosting and bandwidth availability, and rely on peer-to-peer file sharing to make their work freely available to end users - like lots of Linux distros. And given that Comcast was one of the cheerleaders for the end of Net Neutrality in the US, this would not exactly surprise me.</p>
<p>Luckily, a lot of people apparently already reached out to the guy and offered suggestions of whom to approach for legal counsel. I hope he can mend the situation and let his ISP know that he knows his rights, with the EFF and other organizations that fight for user freedom at his side.</p>
<p>And as for OpSec Security (presumably the ones who detected the "infringement"), good job I guess? The world really must be a safer and better place with you spending time and effort snooping and harassing people who just want to share files in the Internet. And hey, if not the world, at least the tycoon clients who pay pieces of shit like you are better off, right?</p>
<hr />
<p>What are other bizarre ISP bullshit stories that you've heard or experienced yourself? Let me know on <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
<p>Further reading:</p>
<ul>
<li><a href="https://teddit.net/r/linux/comments/nkztyv/copyright_notice_from_isp_for_pirating_linux_is/">Link for the Bizarre thread in Reddit</a></li>
<li>This has also spawned a nice <a href="https://fosstodon.org/@kzimmermann/106298695527529114">discussion thread</a> on Mastodon worth looking into.</li>
</ul>
<hr />
<p>This post is number #17 of my <a href="https://100daystooffload.com/">#100DaysToOffload</a> project. Follow my progress through <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
            </div>
        </content>
    </entry>

    <entry>
        <title>Removing unneeded runtimes and dependencies from Flatpak</title>
        <link href="https://tilde.town/~kzimmermann/articles/deleting_unused_flatpak_runtimes.html" />
        <updated>2022-01-26T22:51:48.264883Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>Removing unneeded runtimes and dependencies from Flatpak</h1>
<p>Throughout last year I was on a Libre gaming spree where I tried multiple FPSes and other games for Linux, resulting in many a different video on my <a href="https://diode.zone/c/kzimmermann_podcast">Peertube Channel</a>. Lotsa fun was had, though I realized that I suck at some kinds of games, and learning to record a podcast was nice.</p>
<p>Since not every game I wished to play was available in Arch or the <a href="https://tilde.town/~kzimmermann/articles/aur_made_easy.html">AUR</a>, I used <a href="https://flathub.org">Flatpak</a> to standardize most of the installs, and not end up with a bunch of different and potentially incompatible dependencies around. This was also much faster than manually compiling them from source, which is how some of less frequently updated applications like games are provided via the AUR.</p>
<p>There was one significant byproduct of this venture, though: <em>disk space usage</em>. Where Flatpak was efficient in organizing these dependencies and sandboxing them, it had the side effect of increasing disk usage, as entire runtimes sometimes had to be installed instead of the smaller native shared libraries of my package manager. </p>
<p>It was quite subtle in the beginning, of course, but as I played more games, I couldn't help noticing that <code>df</code> started to report less and less available disk space. Even <a href="https://tilde.town/~kzimmermann/articles/project_128.html">Project 128</a> efforts did not seem to help much. And when I gauged the size of such runtimes, it became clear where the disk space was going.</p>
<p>Flatpak has useful and quite straightforward command-line arguments to manage your installations, but here's the thing: when you uninstall a flatpak app, it doesn't automatically uninstall dependencies or runtimes, even when it's clear that there are no apps that require them in the current system. That was why while I went ahead to uninstall the games I no longer play, the actual freed space did not seem to change significantly.</p>
<p>So, do I manually hunt down unused runtimes and purge them at will, at the risk of damaging something if I mess up? Thankfully, no - there is a way of deleting them automatically. Here it is:</p>
<pre><code>flatpak uninstall --unused
</code></pre>
<p>Running that after you explicitly <code>uninstall</code> something will purge any leftover dependencies that have no other use in the system. Note that these may include things like entire GNOME or KDE environment runtimes, which easily consume upwards of a few GB of disk space. This way, you can more or less roll back the flatpak environment to what it was before you installed whatever program you wanted to try - in my case, this freed up a significant part of my hard drive.</p>
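<p>Put together, the cleanup routine looks something like this (the app ID is a placeholder for whatever you installed):</p>
<pre><code># See what's installed, runtimes included, with their sizes
flatpak list --columns=application,size

# Uninstall the app itself...
flatpak uninstall org.example.SomeGame

# ...then sweep away any runtimes nothing else depends on
flatpak uninstall --unused
</code></pre>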
<p>This maneuver closed the circle of using flatpak for me. It made flatpak a much simpler, saner and quicker way to manage applications that for some reason aren't well integrated into your distribution's package manager. No more fear of trying out new programs, and no headache uninstalling them!</p>
<p>Perhaps there are better ways to manage specific programs besides the main package manager in Linux (I hear that <a href="https://guix.gnu.org">Guix</a>, <a href="https://nixos.org/">Nix</a>, or AppImage are pretty good for this too). For now, however, I'm sticking with Flatpak and its repositories. Carry on!</p>
<hr />
<p>How is your experience with Flatpak or other secondary package managers alongside your distro's main one? Do you have any tips about using them? Let me know on <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
<p>Source for this post: <a href="https://www.linuxuprising.com/2019/02/how-to-remove-unused-flatpak-runtimes.html">Linux Uprising</a></p>
<hr />
<p>This post is number #31 of my <a href="https://100daystooffload.com/">#100DaysToOffload</a> project. Follow my progress through <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
            </div>
        </content>
    </entry>

    <entry>
        <title>Is Digital Minimalism a thing? Or: why bloatware today means files</title>
        <link href="https://tilde.town/~kzimmermann/articles/digital_minimalism.html" />
        <updated>2020-10-24T10:49:38.523707Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>Is Digital Minimalism a thing? Or: why bloatware today means files</h1>
<p>Bloatware is nothing new, nor a recent trend. It does, however, become more obvious as we step forward in time and the programs we previously thought were kind of too much now look tiny in comparison to their current bloated versions.</p>
<p>Oh yes, we all know bloatware. Most of us came to Linux after spending at least a good decade of our lives dealing with a famous series of it: Microsoft Windows. But recently, I've been concerned with a different type of bloat, one that does not receive nearly as much concern as it should: <em>bloated files</em>.</p>
<p>Remember when Apple launched the first iPod in 2001, with the sales pitch that it contained "enough space to hold a thousand songs?" Guess what: that ludicrous amount of space <a href="https://en.wikipedia.org/wiki/IPod#History">was only about 5GB</a>. Chances are that the device you're using to read this post today could fit all that data in its <em>RAM</em> alone. Still, there was a special kind of magic to it that allowed so much music to be storable there even with the laughable space by today's standards. That magic is the <em>small size of the files</em>.</p>
<p>In computing, size matters and, unlike what some real-life standards might say, <strong>smaller is always more desirable.</strong> MP3s mushroomed into popularity and survived the oblivion of <code>wav</code> because they were smaller and thus easier to share without losing much audible quality. The whole concept of "piracy" (or whatever corporate media is calling it today) worked quite well because the files used were small and portable, easy to share even over crowded comm lines.</p>
<p>Yet, despite "piracy" still booming, I wrote this last paragraph in the past tense. Why?</p>
<p>Because I believe that the era of small and reasonably-sized files is past - and soon going to be extinct.</p>
<p>A quick look around The Pirate Bay today shows that movies do not come anymore in sizes less than 1GB each, because of course we all need 1080p+ resolution to watch a ripped video in our freaking <em>smartphone</em> screens. "Oh, it's quality," they say, as if it would make a difference. You're not on a private movie theater, dude. You're gonna either watch this on your 30" monitor or your 6" phone screen. Small resolution is <em>absolutely fine</em>.</p>
<p>The same thing applies to pictures these days. Smartphones capture HD photos easily over 3000px wide. What for, except maybe to aid surveillance agencies in spotting stuff in the things people share? Or maybe to make telcos and hard drive manufacturers happy that we need more and more of their stuff. A picture that you're only going to see for a few seconds on a phone screen does not have to be much bigger than your phone's screen! HEIC and WEBP are another issue: why reinvent something that wasn't broken in the first place, and make it larger in the process?</p>
<p>I think the only thing that did not suffer from this "inflation" is music. MP3s and OGGs are still relatively unchanged from the old days of Napster, and here's me hoping they stay like that. </p>
<p>Call me old-fashioned, whatever, but people "needing" more space for data that wasn't supposed to be that big in the first place is ludicrous. I've recently embarked on a little quest to downsize every image and video that I own. It's easy to do in batches, using the ImageMagick package to resize images in several ways, and the swiss-army knife ffmpeg to downsize videos. By setting up a custom <a href="dontlikeitcreateit.html">shell script</a>, for example, it's possible to automate the process and let it run in the background while you do something else.</p>
<p>I myself make a point to halve the width of every photo that I've taken with my phone, and to reduce every HD video that I have to less than 720p. No need - whatsoever - for anything larger than that, and as a result I save up to 75% of all space required to store this data. That's one of the reasons why I've pretty much never needed anything more than 300GB to back up <em>all</em> my data.</p>
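<p>A minimal sketch of such a batch job, assuming ImageMagick and ffmpeg are installed (the file patterns and output names here are placeholders):</p>
<pre><code># Halve the dimensions of every JPEG in place - keep backups first!
mogrify -resize 50% *.jpg

# Downscale videos to 720p, keeping aspect ratio (-2 picks an even width)
for f in *.mp4; do
    ffmpeg -i "$f" -vf scale=-2:720 -c:a copy "small_$f"
done
</code></pre>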
<p>The economy inflates, prices inflate, things get more expensive and software gets bloated over time because "features." Files, however, don't have to go the same way. You don't have to play the bloat game here, and I encourage you not to.</p>
<p>What do you think about bloatware coming to files: am I overreacting or file sizes have really ballooned over the decade?</p>
            </div>
        </content>
    </entry>

    <entry>
        <title>A tale of Distributed Social Networks</title>
        <link href="https://tilde.town/~kzimmermann/articles/distributed_social_networks.html" />
        <updated>2020-09-26T03:33:56.465006Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>A tale of Distributed Social Networks</h1>
<p>Come, children, sit by the fire. Uncle Klaus is about to tell a story firsthand as well as he can remember about how distributed social networks came to be.</p>
<h2>The promise</h2>
<p>The year was 2014. Having been using Linux and Free Software for a couple of years already by that point, I came across an interesting project with a funny name to it: <a href="https://gnu.io/social/">GNUSocial</a>. </p>
<p>As funny as the name was (why does so much Free Software-related stuff have to carry the GNU Project's name or initials, anyway? Just saying), its promise was very serious: to overturn the feudalistic, centralized and data-silo-like aspects of social networks, and turn them into a democratic and distributed pattern. A distributed social network with no single node, domain or administrator, and most importantly: one that <em>anyone</em> can host. My eyes brightened, and I was immediately interested.</p>
<p>Now, as a little background: I was never really into the social media game that permeated society at the time. I had had a Facebook account since the peer-pressure days of high school, but closed it in 2012 due to lack of interest - surprisingly, before I even had a clue about the Snowden revelations - and I never had a Twitter account. Yet this idea of connecting with other hackers and like-minded people without having to submit to the draconian policies of gatekeepers held a big appeal. GNUSocial it is, then - where do I sign up?</p>
<p>I created my first account on the network (<code>kzimmermann</code>) by choosing Quitter.se as my server (it was one of the first recommended) and got to explore this fine new world. The instances were not that many, and by today's standards the features were quite restricted (140 characters; everything else was an attachment), but the community was great, and I started learning so many things. Loadaverage, Quitter.{se,no,is}, skilledtests.net - these were some of the popular instances of the day. Time passed, and <code>kzimmermann</code> became more popular, gaining followers and favorites, something I had never thought possible for me at the time.</p>
<p>Due to the character limitation and post options, my eyes also turned towards <a href="https://diasporafoundation.org/">Diaspora</a>, another distributed network that boldly seemed to go head-to-head against the giant Facebook. Multi-paragraph posting, markdown syntax, image embedding and full profiles? <a href="https://joindiaspora.com/people/c2ccfe70ffdf013285f6005056ba3b3d">Sign me up</a>, I wanna join the fight. My interest in those networks only grew and grew, just like my own advocacy for Free Software, speech and privacy.</p>
<h2>The migrant crisis of 2015</h2>
<p>Sometime around February 2015, an interesting incident happened: I woke up to find that overnight about 50 people had started following <code>kzimmermann</code>. As popular as I was getting, this was still an absurd spike in followers. And it was not just me - it seemed that the distributed "fediverse" had received an enormous influx of new users, much like a migration. And a migration it was: many disgruntled users, tired of Twitter's censorship and policies, decided to abandon it for a freer option - and GNUSocial fit their needs just about perfectly. </p>
<p>Initially, it seemed that most of these users, just like me, made themselves at home on some of the popular hosts available at the time (Quitter was popular mostly for being the first one listed). There was, however, a background reason why most of these users were disgruntled with Twitter: they had been censored or limited because their content was considered offensive to the audience there - and it turns out the people on these other instances weren't exactly more accepting of their posts either.</p>
<p>Soon, the legacy instances realized it was time to do a small reform on content policy. Acceptable use policies were published. Prohibited content was described. Bans started to happen in the network that once was heralded as a champion in freedom of speech. Quitter.se, now bigger than ever, famously changed its homepage to clarify things, stating that "Quitter.se is not a service, and you are not a client," and that "if you don't like the direction this instance is going, you are free to move to another instance or start your own."</p>
<p>People got the message. Thanks to the flexibility and power of the network software, many new instances soon started appearing, giving the federation more resilience, each setting out its own terms of service and rules - which could be whatever addressed their previous dissatisfaction. Shitposter.club, sealion.club, freezepeach.xyz and many other instances where people would be free to post anything popped up, with people blocking or following others in a small content-vs-freedom tug of war. The biggest winner, however, was the network as a whole, since many new nodes were created at the time.</p>
<p><code>kzimmermann</code> pushed on posting, and grew to a couple hundred followers in that time.</p>
<h2>The second migrant crisis, internal issues and new saviors</h2>
<p>Almost on cue, 2016 brought another Twitter-related migrant crisis to the network. As <code>#RIPTwitter</code> trended, many users again gave up on Twitter due to problems with its content and censorship policies, again with GNUSocial being the obvious alternative at the time.</p>
<p>This time, however, the federation started to get into trouble: too many people were signing up for the already-overloaded Quitter.se, and not enough new instances were being created or populated to balance out the load. Repeated outages on the Quitters meant that, to many new users, the entire network seemed to suffer, and they started questioning its actual resilience. Hannes, admin of Quitter.se, shut down new user registration in protest, trying to educate new users on what a federation really means and how they can participate via other instances or even host their own.</p>
<p>In parallel, other people were growing dissatisfied with the way the GNUSocial software itself worked. The project was written in PHP, lacked updated documentation, and installing it was complicated. Would it be time to change things completely, perhaps rewriting it from scratch? A developer by the handle of <a href="https://mastodon.social/@gargron">Gargron</a> took up the challenge, and silently started writing his own platform to join the federation: <a href="https://joinmastodon.org/">Mastodon</a> was born.</p>
<p>Initially a lonely node across the federation, Mastodon enjoyed spectacular growth over the remainder of that year and the next ones. And not without reason: modern looks, modern social network features and - perhaps most controversially - many fine controls for moderating, filtering, blocking and stating terms of service were easily available. Mastodon became the flag-bearer of the social justice movement that swept the internet at the time, and probably remains so today.</p>
<p>The "shitposting" audience and other wildcard members also reacted: another GNUSocial revamp by the name of <a href="https://pleroma.social/">Pleroma</a> emerged later. Boasting complete freedom of speech plus being lightweight enough to run on a Raspberry Pi, Pleroma nowadays makes up the other half of the federation.</p>
<p>And as for Quitter.se, it finally went under sometime in 2017. <code>kzimmermann</code>, together with a couple dozen thousand other accounts, was no more, and though I did try other instances like gnusocial.club, I basically gave up on microblogging there. It just wasn't fun anymore after losing 400 or so followers and having to start from scratch again.</p>
<h2>The future, or how can we do it better?</h2>
<p>There are many big lessons to be learned out of this story, but perhaps the most important one is about what <em>decentralization</em> means, and <em>how it can be guaranteed</em>.</p>
<p>Quitter.se no doubt played a huge part in making the fediverse what it is today, but its immense popularity also ended up harming the concept of a federation: instead of many instances sharing content and promoting a healthy network, it became more of a few beehive-like supernodes that controlled most of the available content - a model that can easily turn against user freedom, privacy and freedom of speech.</p>
<p>Mastodon and Pleroma answered that call by making it easier for instances to be created and developed, strengthening the network, but I believe that true decentralization has to go even further: I'm talking <em>peer-to-peer</em>. Projects such as <a href="https://zeronet.io/">ZeroNet</a> force a user to also become a host, and that way ensure that content stays decentralized forever, in a truly democratic, full-freedom-of-speech manner. And yes, there is even <a href="https://github.com/HelloZeroNet/ZeroMe">a social network</a> built on it.</p>
<p>When each user is in charge of their own content and what to show, hide and share, that's when true freedom has been achieved. I believe this is the direction we should all be heading to, empowering users.</p>
<p>And as for ol' <code>kzimmermann</code>, it has (somewhat) returned: check me out <a href="https://awoo.pub/@kzimmermann">here</a> on Mastodon.</p>
<hr />
<h2>Epilogue: Decentralized networks, platforms and resources</h2>
<p>Some of my favorite platforms and software, listed in no specific order:</p>
<ul>
<li><a href="https://zeronet.io">ZeroNet</a>: full P2P platform for building decentralized applications, à la BitTorrent</li>
<li><a href="https://ipfs.io">IPFS</a>: file hosting and static web page P2P platform, aiming to replace HTTP</li>
<li><a href="https://tox.chat/">Tox</a>: full P2P IM, video and audio chat platform. A complete stack, analogous to Skype in usability, with many clients for different platforms.</li>
<li><a href="https://ricochet.im/">Ricochet</a>: another P2P chat application. Uses Tor to eliminate metadata</li>
<li><a href="https://geti2p.net">I2P</a>: anonymous network similar to Tor (darknet only), relies completely on peers to perform all work. You can easily host a site or run services through there.</li>
</ul>
            </div>
        </content>
    </entry>

    <entry>
        <title>Don't like it? Make it or change it! The Power of Free Software</title>
        <link href="https://tilde.town/~kzimmermann/articles/dontlikeitcreateit.html" />
        <updated>2020-09-21T09:40:23.902925Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>Don't like it? Make it or change it! The Power of Free Software</h1>
<p>There is a widespread perception that the Free Software community isn't open or welcoming, and that it can sometimes be downright brash and assholish to beginners, or <code>noobs</code>. Ask a question in a forum about <code>command</code>, and <code>man command</code> or <code>rtfm</code> is your answer. Ask on IRC why your system isn't working, and people stay silent or ignore you.</p>
<p>Perhaps there is some truth in this kind of statement, but I believe it's born out of a greater misunderstanding about Free Software in general. For the most part, using Free Software comes with a small requirement of personal involvement not usually found in the circles of non-free software like Windows. And that, in my decade-long experience with Linux, has been one central concept: <em>learning how to do things yourself</em>.</p>
<p>As harsh as this might sound, learning how to do things yourself is actually one of Free Software's most powerful features. While it does not exactly make everything easy at first, it gives you flexibility and an opportunity to make things work <em>exactly</em> the way you need them to. As guaranteed by <a href="https://www.gnu.org/philosophy/free-sw.html">GNU Free Software freedoms</a> 0 and 1, Free Software allows you to use and modify it to best fit your use cases - and more often than not, other people's too.</p>
<p>That is not to say that Free Software as it stands is not flexible enough - if you have a shell and enough command-line programs to work with, you can automate and accomplish far more than you might expect. I think nothing portrays this better than my experience with converting audio and video on Linux some years ago.</p>
<p>It was 2011, and googling around the Ubuntu Forums showed me a neat little program called <a href="https://winff.org/html_new/">WinFF</a> that could do batch conversions of media files in a graphical way. However, after a while I found I needed somewhat more advanced conversions than WinFF could handle at the time, and I was out of luck searching for a new program that could do exactly what I wanted.</p>
<p>After more searching, however, my attention turned back to WinFF: it turns out it's only a graphical interface for a command-line program called <code>ffmpeg</code>. Having had a little experience with scripting by then, I realized I didn't need another program to do this for me. I could simply run something like this:</p>
<pre><code>for item in *.avi
do
    # add any ffmpeg options you need; swap mp4 for whatever format you want
    ffmpeg -i "$item" "${item%.avi}.mp4"
done
</code></pre>
<p>And that would be it.</p>
<p>And just like that, knowledge applied correctly becomes power. Five lines of shell scripting, one awesome media program and a few minutes of work to do everything I needed. I could roll this into a script that automates the process, and add a few more parts to make it a flexible program if I need to. And likewise, anyone who needs something a little different can adapt it. </p>
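<p>In that spirit, the loop can be generalized into a small function that takes the source and target extensions as parameters. This is only a sketch under my own assumptions - the <code>convert_all</code> name is made up, and it prints each <code>ffmpeg</code> command instead of running it (drop the <code>echo</code> to convert for real):</p>

```shell
# convert_all FROM TO FILE...: print the ffmpeg command for each file.
convert_all() {
    from="$1"
    to="$2"
    shift 2
    for item in "$@"; do
        # strip the old extension and append the new one
        echo ffmpeg -i "$item" "${item%.$from}.$to"
    done
}

convert_all avi mp4 movie.avi
convert_all flac ogg song.flac
```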
<p>That, in my humble opinion, is one of the true powers of Free Software.</p>
            </div>
        </content>
    </entry>

    <entry>
        <title>Buy DRM or just pirate?</title>
        <link href="https://tilde.town/~kzimmermann/articles/drm_or_piracy.html" />
        <updated>2020-09-21T12:17:00.593058Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>Buy DRM or just pirate?</h1>
<p><a href="https://xkcd.com/488/">xkcd #488</a> answers it best:</p>
<p><img alt="Piracy or DRM? An infographic by xkcd" src="https://imgs.xkcd.com/comics/steal_this_comic.png" /></p>
<p>There is nothing wrong with making money, or operating a business. But when doing so <em>costs your individual freedom</em>, we have a problem.</p>
<p>There are lots of reasons why DRM is insecure, and bad for the internet and society in general, but the most pernicious part is this: <em>it doesn't work</em>. I like this <a href="https://techliberation.com/2007/10/10/why-drm-doesnt-work/">explanation by Cory Doctorow</a>:</p>
<blockquote>
<p>Say I sell you an encrypted DVD: the encryption on the DVD is supposed to stop you (the DVD’s owner) from copying it. In order to do that, it tries to stop you from decrypting the DVD.</p>
<p>Except it has to let you decrypt the DVD some of the time. If you can’t decrypt the DVD, you can’t watch it. If you can’t watch it, you won’t buy it. So your DVD player is entrusted with the keys necessary to decrypt the DVD, and the film’s creator must trust that your DVD player is so well-designed that no one will ever be able to work out the key.</p>
</blockquote>
<p>How do you expect someone to not figure out your secret key that protects your content when you <strong>must</strong> give this key (somewhat obfuscated) to them in order to allow them to view it? You want to keep pirates out and serve your customers? Guess what: your customers <em>are</em> the ones pirating it out!</p>
<p>There are those who think this is not a big deal because it only concerns something non-critical like entertainment. That would have been a fair argument back in 2005 or so. By the mid-2010s, however, the greed behind DRM had grown considerably, to the point of crossing out of the digital realm into real life. </p>
<p>Farmers who purchased new John Deere tractors <a href="https://www.vice.com/en_us/article/kzp7ny/tractor-hacking-right-to-repair">cannot use them fully unless they pay a service fee to unlock them</a>. Textbooks required by college classes have moved online, but locked behind an approved reading platform - something that could never have happened with a physical book. Doesn't seem like such an inoffensive problem anymore, does it?</p>
<p>This gives the xkcd comic's last lines more relevance than ever. It's no longer just a matter of entertainment or music - rather, DRM allows companies and overpowered individuals to creep further into your freedoms day by day. What are you going to do about it?</p>
            </div>
        </content>
    </entry>

    <entry>
        <title>Dumpster diving as a hacker: a consumerism-driven treasure trove</title>
        <link href="https://tilde.town/~kzimmermann/articles/dumpster_diving_hacker.html" />
        <updated>2021-09-27T09:06:49.360368Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>Dumpster diving as a hacker: a consumerism-driven treasure trove</h1>
<p>The term <a href="https://en.wikipedia.org/wiki/Dumpster_diving">dumpster diving</a> had been familiar to me for a while, but only as a term - I had no experience of it myself. It was amusing to know that so much stuff was thrown away in good, sometimes perfect, condition, so that brave enough "divers" could salvage it and put it to good use.</p>
<p>It was only in very recent years, however, that I truly discovered firsthand what dumpster diving really was, and how it can be a fun, surprising adventure that also exposes the ugly side of society's overconsuming capitalistic desires. Suddenly, it wasn't about <em>them</em>, the dumpster divers, anymore, but about myself as well.</p>
<p>Initially I had a little bit of prejudice against the whole thing: I wasn't in extreme poverty, and surely could afford to buy the things that I needed. Was resorting to the trash really the thing I needed after all? But all of this "taboo" was quick to melt away as I discovered an activity that was fun and sometimes very rewarding for relatively little effort involved. And it helped me to develop a "hacker's eye" for quickly spotting usable tech things among waste.</p>
<p>In this post, I'll share the insights I gained after starting to dumpster dive as a hacker, and some of the things I've found during my dives, alongside some philosophy about modern consumerism. Follow along!</p>
<h2>A box full of surprises</h2>
<p>The first time I dumpster dove, I wasn't looking for anything technological. Quite the opposite, actually: I wasn't looking for anything at all. I had moved to a new town for a temporary assignment, and wanted to furnish the rented house with as little stuff as possible, since I knew it was temporary. Passing by a dump area on the way home, I noticed a guy placing some chairs there that looked perfectly usable. On impulse, I asked him if I could have them, and he said "sure, they're yours." And thus began my story in dumpster diving.</p>
<p>Over the next months, I started discovering other trash spots where people threw away more than just food scraps or wrappings - actually functional stuff in good condition. Nicer neighborhoods tended to yield more interesting stuff, but a little less frequently. Others produced lots of stuff, but nothing I was interested in.</p>
<p>The rental house that was originally planned to be sparsely furnished with only the basics started to fill up with furniture and stuff I recovered. Kettles, pans, kitchen utensils and chairs all came in handy (although most required a good deal of cleaning to become usable). Then one day I popped my real dumpster-diving cherry: I started finding <strong>computer stuff</strong>.</p>
<p>First I found computer speakers: a USB-powered soundbar that worked perfectly alongside my computer, or hooked up to the TV for great sound when watching movies and everything. Then came a gamer-type keyboard. It was backlit in red, which made it usable in every situation, and the keys were very smooth to type on. Could it be that one day I would find a full-fledged computer? The answer, once again, was yes.</p>
<h2>How far does consumerism stretch?</h2>
<p>The first time I sighted a laptop left at an unsuspecting dump area on a sidewalk, I couldn't believe my eyes. "It can't be fully working - who would discard a perfectly working computer like that? It's probably missing parts or damaged." Lo and behold, though, it was spotless, complete with the charger. But as nothing is perfect, the machine was quite old and had weak specs (something from the early Windows 7 era).</p>
<p>But even then, that laptop was enough for me to use it as a hacking machine, a platform to freely experiment and try new things like distrohopping. I tried Arch, Manjaro, AntiX and even PC-BSD with that laptop, and it paid itself multiple times over the effort of retrieving, cleaning and preparing it (and the $0 purchase cost). And it played very well with all of them, performing fast and well even with the "heavier" distros.</p>
<p>The question, then, was: why did the owner throw away a machine that was still in perfect working condition? The only answer I could think of was: <a href="https://en.wikipedia.org/wiki/Planned_obsolescence">planned obsolescence</a> from <a href="https://searx.tuxcloud.net/search?q=planned%20obsolescence%20windows&amp;categories=general&amp;language=en-US">Microsoft Windows</a>. That's Microsoft saying "oops, your computer can't handle our increasingly bloated system! You have to buy a new one to keep up with us!" </p>
<p>The upside to this whole consumerist frenzy is that smart hackers (in the sense of "tinkerers," people who can make things work) can trawl through the mountain of waste produced and make good use of these free things, even if they're not cutting-edge technology. Even though finding a full-fledged laptop in the trash remains a rather rare occurrence, this treasure trove has rewarded me in countless other tech-related ways, some of which are presented below:</p>
<p><img alt="Dumpster diving findings" src="https://tilde.town/~kzimmermann/images/dumpster_diving_findings.jpg" /></p>
<p><img alt="Wireless router found on trash" src="https://tilde.town/~kzimmermann/images/dumpster_diving_router.jpg" /></p>
<p>Some of the many WiFi routers I've found in perfect working condition. Some are old, supporting only up to the 802.11n standard, but I've also found reasonably newer models with dual band and much more. Some are even supported by <a href="https://dd-wrt.org">DD-WRT</a>, a project that I intend to pursue later.</p>
<p><img alt="Wireless speaker from trash" src="https://tilde.town/~kzimmermann/images/dumpsterr_diving_speaker.jpg" /></p>
<p>Bluetooth soundbar speaker, one of the many soundbars I've found. This one has a slightly annoying Chinese voice narrating its status like "Pairing" or "Device Paired" but otherwise perfectly usable.</p>
<p><img alt="Bluetooth keyboard from trash" src="https://tilde.town/~kzimmermann/images/dumpster_diving_keyboard.jpg" /></p>
<p>A full-width, 105-key Bluetooth wireless keyboard. This model is dongle-less, meaning you don't waste a USB port on the computer to use it, and you can even use it with your phone (Termux, etc.) very easily.</p>
<p><img alt="Lenovo G500 found on trash" src="https://tilde.town/~kzimmermann/images/dumpster_diving_lenovo_G500.jpg" /></p>
<p>Another full-fledged laptop I found: a Lenovo G500 with 4GB of RAM (enough for Linux, probably unusable with Windows) and a Celeron processor (ack!), probably from their low-end cheapo line. It was discarded with a broken Windows 8 installation, and is now usable again with Artix Linux!</p>
<p><img alt="Wii U console from trash" src="https://tilde.town/~kzimmermann/images/dumpster_diving_wiiu.jpg" /></p>
<p>A Wii U complete with the handheld controller/display thing, motion bar and a controller. Though it's actually possible to <a href="https://www.reddit.com/r/linux/comments/caibny/here_is_debian_running_on_the_wii_u_not_vwii/">install Linux on it</a>, its hardware is actually not that powerful, which is sort of a turnoff for me, a non-console gamer.</p>
<p><img alt="External HDD from trash" src="https://tilde.town/~kzimmermann/images/dumpster_diving_hdd.jpg" /></p>
<p>A 500GB external HDD, 3.5 inch size that I now use coupled with my Raspberry Pi as a NAS in the house. It came with a great collection of MP3s from 1995-2010 music that the owner generously left in it when discarding. There was also some weird Japanese porn left in it, which serves as a reminder to always encrypt your hard drives if you want privacy...</p>
<p>Not shown are the many other useful things I found that complement my daily computing life, especially during the home-office season of the pandemic: monitors (I found enough of them to bring some to the office and just leave them there for when I go in), keyboards, wireless mice and even a wireless headset. </p>
<p>The interesting thing is that, at the beginning of my dumpster diving life, my biggest feeling was the thrill of finding stuff for free. Nowadays, however, I have more of a questioning concern: why do these people keep replacing good things like that? How can they keep up with the tens of thousands of dollars (yes, I calculated it once) that get "replaced" every year? Even though I'm riding along on the good side of this consumerism, I realize there's an ugly side as well, which kind of worries me.</p>
<h2>Tips for aspiring divers</h2>
<p>If you want to get started in dumpster diving, the first and biggest hurdle you need to get past is yourself: put aside the notion that trash is only for scavengers and the needy. Try it once, without any pressure or expectations, and see if you can find anything interesting. "Dive in" slowly and progressively - you don't need to literally climb into a dumpster the first time if you've never done this before. If you are careful and gain some experience, searching trash is not at all gross and sometimes not even dirty.</p>
<p>I haven't dived in commercial dump areas yet (around retail stores, supermarkets, factories, warehouses, etc.), though I hear these have a lot more stuff than residences. If you're limited to residential areas like me, look for spots where trash from many households is massed together rather than spread out. Single houses lining a street are a worse choice than a large apartment building, or better yet, a complex of buildings sharing a common waste area. Learn to map and rank each dump area for quality and quantity: some are more prone to receive useless stuff like boxes and plastic containers, others more used goods, which are always more interesting.</p>
<p>Finally, do get to know the law and the rules of the place where you're diving. Are there any restrictions? Anything that you cannot take once it's discarded? Is it considered trespassing if you walk up to a given trash area? You don't want to risk getting in trouble for just the thrill and a chance to get something for free.</p>
<p>There are some documentaries about dumpster diving all around YouTube if you look. These can be a good source of inspiration on the more ethical and anti-consumerist side of the practice; and also show some of the things that you can find through it.</p>
<p>To conclude, here's some first-person insight from me doing a quick trash-area run (not really dumpster diving, but still) after a big holiday season. The amount of waste generated here by pure consumerism is mind-boggling.</p>
<hr />
<p>This post is number #3 of my <a href="https://100daystooffload.com/">#100DaysToOffload</a> project. Follow my progress through <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
            </div>
        </content>
    </entry>

    <entry>
        <title>Are smartphones becoming more Faraday Cage-resistant?</title>
        <link href="https://tilde.town/~kzimmermann/articles/faraday_fail_smartphone.html" />
        <updated>2021-05-04T09:58:11.868932Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>Are smartphones becoming more Faraday Cage-resistant?</h1>
<p>This morning, out of sheer curiosity I decided to do a quick experiment with my smartphone. For some reason, upon booting it wouldn't turn the wifi on, so I decided to "punish" the (<a href="https://tilde.town/~kzimmermann/articles/im_a_free_man.html">necessary</a>) evil machine a little bit by confining it to a makeshift <a href="https://en.wikipedia.org/wiki/Faraday_cage">Faraday Cage</a> - a metallic box of chocolates that we got from a friend as a present.</p>
<p>I've seen a lot of documentaries and tin-foil-hat movies where people successfully evade surveillance of tapped devices by putting them inside metallic containers: a microwave oven (<em>Citizenfour</em> by Laura Poitras), an empty bag of chips (<em>Enemy of the State</em>), or even the good ol' trick of wrapping the device in a heavy wet towel (Arnie in <em>Total Recall</em>). When none of these is available, a sheet of aluminum foil wrapping the device completely is enough to isolate it.</p>
<p>Or is it?</p>
<p>Turns out the box of chocolates failed to contain the phone. I tested it by calling from my partner's phone, and the damn thing would still ring. What the fuck? I examined the box and confirmed it was made of some sort of steel (unlike brass or aluminum, it's magnetic), but I was still pretty pissed that the phone beat it. Ok, phone, you asked for it. Time to take it to the next level.</p>
<p>I opened the microwave oven and shut the phone inside it. Side note: this is probably one of the scariest experiences of my life, even when you know you won't accidentally turn the oven on. Yeah, that would show that evil machine. After all, this is what Snowden did, right?</p>
<p>... And the phone would still ring. Note that this happened both via voice and through a data app like WhatsApp.</p>
<p>I'm running out of options here. Last resort: wrapping it in plain old tin foil. C'mon, all those action movies with spies can't be wrong, right? But nope, the phone would still ring. What the hell.</p>
<p>What eventually worked was to layer up the defenses: when I wrapped the phone in foil <em>and</em> put it inside the metal box, it finally stopped receiving calls. Well, that took some effort, at least significantly more than what you see or read about in movies and spy fiction where a thin layer of aluminum is enough to deter the villain's tracking devices, leaving him blind about the hero's whereabouts.</p>
<p><img alt="the faraday cage, a pikachu-themed metallic box of chocolates" src="/~kzimmermann/images/faraday_cage.png" /></p>
<p>And that gets me thinking: how the hell can a cell phone survive this sort of isolation? Is this "stubbornness" part of the "I'm going to be constantly connected" agenda, or the "I'm going to spy on everything about you, no matter how hard" one? I did this Faraday cage experiment about four years ago with a much older phone, and <em>that</em> one was successfully isolated with a layer of tin. Could it be that cell phone makers - with their unremovable batteries, etc. - are pushing toward the point where a cell phone can never be disabled unless strict countermeasures are taken?</p>
<p>I guess these are all rhetorical questions to which I think I already know the answer. But anyway, if you own one of these tracking devices, I'd recommend trying this experiment. We already know this technology isn't fighting on your side anyway; might as well take the time to figure out how much it would take to shut the damn thing down when needed.</p>
<p>Have you ever tried isolating your phone with some sort of RF blocker? How much effort did it take to isolate yours? Let me know on <a href="https://fosstodon.org/@kzimmermann">Mastodon</a>!</p>
<hr />
<p>This post is number #15 of my <a href="https://100daystooffload.com/">#100DaysToOffload</a> project. Follow my progress through <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
            </div>
        </content>
    </entry>

    <entry>
        <title>Throwback to when I first started using Linux</title>
        <link href="https://tilde.town/~kzimmermann/articles/first_starting_linux.html" />
        <updated>2021-06-10T01:03:09.754501Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>Throwback to when I first started using Linux</h1>
<p>These past weeks, I've become quite nostalgic about my computing past, remembering the days when I first began using GNU/Linux as an, until then, Windows-only user. I covered that story once on my <a href="https://diode.zone/video-channels/kzimmermann_podcast">Peertube Channel</a>, but only as a very brief introduction to the "genesis" moment of it (plus it's pretty hard to record a podcast properly while playing a game).</p>
<p>It's important to record our own history and feel proud of it, whatever turns it took. It's no different in the software world, and I'll be sharing a little more of my own history here. Perhaps this will inspire new users to try their first GNU/Linux distribution, show what to do (or not) when doing so, or simply entertain seasoned users. At any rate, this sort of nostalgia is a fun pastime for me, so let's get going.</p>
<h2>From tragedy a new life begins</h2>
<p>The date is February 2010. Windows 7 has been out for a few months now; PC sellers are rushing to replace the failure of Vista with the new savior, while individual users rush to the stores to get over the train wreck. Apple has launched an "affordable" line of MacBooks, stripping their price down to "only" US$1000 to grab college students' pockets. And I'm sweating, staring at nothing short of a disaster.</p>
<p><strong>My laptop has died.</strong></p>
<p>Later, I would find out that it was only my hard drive that had died, taking with it about three years' worth of data, but at this moment, I'm absolutely <em>torn</em>. No computer = end of the world in a digital sense. Game over.</p>
<p>What can I do now? Will I really have to resort to going to the library to do my assignments like a loser? Will I never again be able to work from the comfort of my room or chill by myself? Do I have to buy another computer and shell out precious and scarce money for it?</p>
<p>Luckily, my good friend next door, a "computer whiz" type as far as I could tell at the time, had a spare laptop he could lend me for about a week until I got mine fixed. There was one condition, though: that machine ran "<em>Linux</em>." Was that okay by me? he asked. Seeing as the only real alternative was having no computer at all in my spare time, I thought: sure, what could be so bad, right?</p>
<p>Two things happened that afternoon. First, I discovered what a "netbook" was: a tiny, cute and very handy mini-laptop. Second, I had my first experience with GNU/Linux - Ubuntu 9.04, to be exact. And just like that, there I was, someone who until then had never used any operating system besides Windows, diving head first into the world of GNU/Linux without any training wheels.</p>
<p>Painful experience? Surprisingly, <em>no!</em></p>
<p>Whether due to sheer luck or extreme competence on Ubuntu's part, everything was familiar, easy and efficient. The Eee PC 900, though modest in specs, felt very snappy and had all the software I needed to do my work: my familiar Firefox browser, and OpenOffice, which resembled the MS Office version I had used closely enough. I was even able to run the Tibia MMORPG client, thanks to a standalone binary being available for Linux. Was this "Ubuntu" thing all that Linux was? If so, it was way less scary than so many people made it look.</p>
<p>The party was over sooner rather than later, though, because the netbook I had purchased arrived at the end of the week, and thus I no longer had a good reason to keep using my friend's. My knowledge, however, had been changed forever - and I'd be looking forward to using Linux again, soon.</p>
<h2>"It's only when you've lost everything that we can have everything"</h2>
<p>In hindsight, one thing that inadvertently helped me at that point was my lack of almost any useful or precious personal data. When my PC fried, silly me basically lost everything, forever - only a tiny amount of stuff had been backed up to either Dropbox or my 4 GB flash drives (huge at the time). That, however, helped me shed my fear of trying out Linux: if something went wrong and I had to format the disk, what did I have to lose, after all?</p>
<p>Losing all your data is not a pleasant thought, and thankfully today I keep better backups of everything, but at the time it worked as a springboard of motivation for trying out those "risky things" in computing. Today, I think that fear might be one of the things keeping so many newcomers from giving Linux a real try.</p>
<p>That's some good food for thought on how we could make the transition as painless as possible for Linux first-timers: what could we suggest to make sure their data is not in jeopardy, and maybe make restoring it as easy as a single click?</p>
<p>Since I had a limited experience with Linux in that week, the next question was: which one should I try?</p>
<h2>Hop, hop, baby</h2>
<p>Soon I Googled (yes, <em>Googled</em> it!) "linux distributions for beginners" or something similar, and thus began my lengthy adventure down the rabbit hole known as distro-hopping. To my relative surprise today, the first result (or at least the one I ended up clicking, anyway) was <em>not</em> Ubuntu. Can anyone guess what it actually was?</p>
<p><strong>Fedora</strong>, actually - which, back then, was releasing Laughlin, I think.</p>
<p>Young me was very excited to download its ISO file, but then... what now? I still hadn't grokked the Live Media dance of burning to USB, plugging it in, rebooting and choosing it from the BIOS, so I sat on the issue for a while. Cue the aforementioned Linux-mentor friend, who explained the procedure to me, but also mentioned the safety net of dual-booting: keep Windows up your sleeve in case you don't like Linux.</p>
<p>That in itself introduced me to another problem: partitioning the disk to let the two coexist. During said attempt I naturally botched things and ended up with a system that would not boot anything at all. Not giving up yet, I decided to try again, but this time I noticed another interesting distribution, one that promised the same great Ubuntu I had tried on my friend's machine, but tailored for netbooks like my own. Which one?</p>
<p>That's right: <a href="https://distrowatch.com/easypeasy"><strong>EasyPeasy</strong></a>, the now-defunct respin of Ubuntu with a netbook-oriented UI, was the first Linux distribution I ever installed and used from scratch on my own. And it was great: all the software I was used to was available as well, including Skype and even the then-unavoidable Flash player.</p>
<figure>
    <img src="https://upload.wikimedia.org/wikipedia/commons/d/da/EasyPeasy16.png" alt="a screenshot of EasyPeasy Linux" />
    <figcaption>
        The vanilla EasyPeasy UI. Seeing this green theme again, with applications laid out like on a mobile device, is so nostalgic for me!
    </figcaption>
</figure>

<p>I eventually got tired of the netbook UI and decided to try something that better fit a desktop, as I had gotten an external monitor that didn't play very well with it. What other distros out there were friendly enough for me to try? Lots of them, actually. Distro-hopping became commonplace for me, as though I was looking for a new home.</p>
<p>I then went on to try the other Ubuntu derivatives like Kubuntu (which at the time I regarded almost as a separate distro), then PCLinuxOS, which I found very interesting, and got to know the amazing Puppy Linux (making a habit of keeping a LiveUSB of it with me ever since). I explored the "netbook" fad further with the Peppermint OS distribution and Crunchbang - the latter introducing me to <a href="https://github.com/brndnmtthws/conky">conky</a>, another entire world of things to explore - and tried Elive to see what this Enlightenment WM was all about. I gave Fedora a (short-lived) try again, and even tried GhostBSD for a change, but it would still be a good decade until I <a href="https://tilde.town/~kzimmermann/articles/learning_freebsd_as_linux_user.html">actually settled on BSD for good</a>.</p>
<p>Distro-hopping was fun while it lasted, but I realized I eventually had to get back to serious use with one of them. If Linux was to be a serious operating system, it had to be reliable and constantly usable. Which distribution would fill that gap?</p>
<h2>Getting serious</h2>
<p>Getting both feet back on the ground, I decided to settle on <strong>Ubuntu 10.04</strong>. Come to think of it, it didn't actually have any usability "killer features" over other mature, general-purpose distros like Fedora or PCLinuxOS; the main reason I chose it was my familiarity with its tools, which I mostly got to know from the EasyPeasy days.</p>
<p>The Software Center presented a good one-stop shop for pretty much everything I needed, and soon I learned that typing commands in the terminal was much faster and better. The look of GNOME 2.x was very appealing to me coming from Windows XP not because it was familiar, but because it was different - it invited me to explore it. And it didn't feel like a toy "built for the netbook" like EasyPeasy, but rather a full-fledged OS that would support all my requirements. Later I discovered Compiz, and the deal was essentially sealed with that desktop environment.</p>
<p>With a firm foothold established, I settled down, wiped my Windows partition clean and moved all my data over. I had finally become a full-time Linux user.</p>
<h2>Reaching Linux-vana</h2>
<p>The rest, as they say, is history, but little did I know at the time that my adventure was just beginning. </p>
<p>I would still pass through many phases of use, such as <a href="https://tilde.town/~kzimmermann/articles/not_sorry_free_software.html">Linux Fanboy</a>, Microsoft Basher, "Open Source Google" supporter - later turned basher as well - Ubuntu lover, Ubuntu hater, single-distro evangelist, <a href="https://tilde.town/~kzimmermann/articles/living_in_linux_terminal.html">terminal-only aficionado</a> (this one still lasting today), and many others that resemble one's troubled teenage years. Looking back, I can say that after a few years I reached the "Nirvana" of Linux usage, which I can summarize as:</p>
<blockquote>
<p>Use whatever the hell suits you best, and be happy with it.</p>
</blockquote>
<p>Free Software gives us many freedoms, and it's my belief that the single biggest one, larger than any of the four listed in the definition, is the <strong>freedom to choose</strong>. There's no one-size-fits-all solution in Free Software, so you can use it and combine it however you wish. This also means you'll have to <a href="https://tilde.town/~kzimmermann/articles/dontlikeitcreateit.html">spend some time learning it</a>, but that's the true joy of it.</p>
<p>Today, it's been about 11 years since the day my friend lent me his netbook and introduced me to the Free Software world. And the funny thing is, I seem to have come full circle by reproducing almost the same steps when I tried FreeBSD this year.</p>
<p>Who knows what turns my life could have taken had I not had the misfortune of losing my HDD at that exact moment? I can only wonder, but in hindsight, I'm thankful it actually happened. And what's in store for my next 10 years of using Free Software? I sure don't know, but I'm very excited to find out.</p>
<hr />
<p>How did you start using Linux in your life? Share your story with me in <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
<hr />
<p>This post is number #13 of my <a href="https://100daystooffload.com/">#100DaysToOffload</a> project. Follow my progress through <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
            </div>
        </content>
    </entry>

    <entry>
        <title>Fixing Gajim on the FreeBSD desktop</title>
        <link href="https://tilde.town/~kzimmermann/articles/fixing_gajim_freebsd.html" />
        <updated>2023-05-31T21:40:57.442280Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>Fixing Gajim on the FreeBSD desktop</h1>
<p>A few weeks ago, I noticed that Gajim, my favorite desktop application for instant messaging via XMPP, had stopped working on FreeBSD. I couldn't quite tell whether this was due to an update to Gajim itself or to the base FreeBSD system, but after a certain update, it simply would not open anymore. No core dump, no message, nothing, apparently.</p>
<p>This was a real pity, because I have yet to find something that works as well for cross-device messaging as this application. I stuck with alternatives like <a href="https://profanity-im.github.io/">Profanity</a>, which fills the gap of encrypted messaging well, but not so much that of sending and receiving files, so I was a little upset about the loss of functionality.</p>
<p>Reinstalling the package didn't work, and neither did issuing <code>freebsd-update</code> pulls. Last week, then, I decided to get my hands dirty and investigate the issue further, and gladly did resolve it with a quick hack. I realize it wasn't the most elegant or correct solution, but it worked well - everything is functional and the broader system was not affected - so I'm keeping it for now. Here's what worked for me.</p>
<h2>dbus at fault</h2>
<p>Upon running <code>gajim</code> from the command-line, I noticed the following stack trace outlining the problem at hand:</p>
<pre><code>...
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/gajim/application.py", line 233, in _startup
self.interface = Interface()
File "/usr/local/lib/python3.9/site-packages/gajim/gui_interface.py", line 2068, in __init__
music_track.enable()
File "/usr/local/lib/python3.9/site-packages/gajim/common/dbus/music_track.py", line 208, in enable
listener.start()
File "/usr/local/lib/python3.9/site-packages/gajim/common/dbus/music_track.py", line 61, in start
proxy = Gio.DBusProxy.new_for_bus_sync(
gi.repository.GLib.GError: g-file-error-quark: Cannot spawn a message bus
without a machine-id: Unable to load /var/local/lib/dbus/machine-id or 
/etc/machine-id: Failed to open file “/var/local/lib/dbus/machine-id”: 
No such file or directory (4)
05/31/2023 22:37:06 (W) gajim.gui.notification     
Notifications D-Bus not available: g-file-error-quark: Cannot spawn a
message bus without a machine-id: Unable to load /var/local/lib/dbus/machine-id 
or /etc/machine-id: Failed to open file “/var/local/lib/dbus/machine-id”: 
No such file or directory (4)
</code></pre>
<p>Python stack traces are not the clearest error messages around, especially if you're not the author of the program, but they usually flow from the most generic part of the program down to the most specific - which is where the program actually crashed. Homing in on the last bits of the message indicates that the problem lies in a dbus compatibility Python library used by Gajim, which handles the dbus-related abstraction and message-passing (required, I think, for things like notifications).</p>
<p>The problem is simple enough: the <code>machine-id</code> file, presumably made available by dbus, is not present in the usual locations the Python library searches. I don't know if this difference stems from how the filesystem tree is laid out differently between Linux and FreeBSD (<code>/usr/</code> vs <code>/usr/local</code> prefixes, etc.), but it's what's holding the entire application back from loading.</p>
<h2>Hack or fix?</h2>
<p>If you're looking for an immediate solution that will work (as I did), look no further than this:</p>
<pre><code># ln -s /var/lib/dbus/machine-id /etc/machine-id
</code></pre>
<p>And that's it. The <code>machine-id</code> file is nothing but a text file containing some sort of hash (sha1, maybe?) that acts as a unique identifier for the machine. By making this link (we could even just copy the file) at the expected location, the file is found and read, and everything up the stack sort of magically works...</p>
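If you'd rather see the mechanics before touching system paths, here's a sandboxed sketch of the same idea, using a temporary directory and a made-up id as stand-ins for the real <code>/var/lib/dbus/machine-id</code> and <code>/etc/machine-id</code>:

```shell
# Sandbox sketch of the symlink fix: the id lives in one location,
# and a symlink makes it readable from the second location probed.
tmp=$(mktemp -d)
mkdir -p "$tmp/var/lib/dbus"
# a made-up 32-character hex id, standing in for the real machine-id
printf '0123456789abcdef0123456789abcdef\n' > "$tmp/var/lib/dbus/machine-id"
ln -s "$tmp/var/lib/dbus/machine-id" "$tmp/machine-id"
cat "$tmp/machine-id"    # both paths now yield the same id
```

Reading through either path yields the same bytes, which is all the library cares about.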
<p>... except that this feels like an ugly hack. Rightfully so, I think. And so I was left with the question of whether I should look for another way to fix this. A software patch to submit upstream, maybe? After all, this is plain Python code, which I know a thing or two about. So I picked up the ball and started walking up the stack trace to find where I could fix the error in the source.</p>
<p>And here's where it got hairy: it turns out the "bug" was not in Gajim's code, but in one of the modules it imports to interact with dbus. The module's code attempts to detect the <code>machine-id</code> file and breaks when it isn't found. OK, stop. Do I really want to go and rewrite a whole library instead? I don't think that's in the scope of my original intentions. Who knows if I'd break some other application by fixing something to work only with Gajim? That's too much risk for me, not being a regular developer, to take.</p>
<p>So, bottom line, I stuck with my silly-but-perfectly-working hack to keep Gajim running.</p>
<h2>Conclusion</h2>
<p>Gajim continues to work well on FreeBSD, despite the latest versions producing a hiccup of sorts at startup. The fix, ugly as it looks, is all you need to make it work, and I'm surprised it doesn't come "pre-fixed" like this by default. Perhaps a better approach would involve the package's post-install "hook" scripts, which run right after the package contents are copied to their install locations: the command above could be issued verbatim after the package is installed (and reversed when it is uninstalled).</p>
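As a sketch of that last idea - and I haven't tested this against the ports framework, so take it as a hypothetical fragment - a port's pkg-plist can carry <code>@postexec</code>/<code>@postunexec</code> lines that run on install and deinstall:

```text
@postexec [ -e /etc/machine-id ] || ln -s /var/lib/dbus/machine-id /etc/machine-id
@postunexec [ -L /etc/machine-id ] && rm -f /etc/machine-id
```

The guard conditions are there so an existing <code>/etc/machine-id</code> is never clobbered, and only a symlink we created is removed on uninstall.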
<p>This incident, and the lack of response to it, highlighted for me how little attention the FreeBSD community pays to the desktop side of things, which is a real pity, given <a href="https://kzimmermann.0x.no/articles/freebsd_desktop_part_2.html">how well FreeBSD plays on the desktop</a>. Oh well, I guess people still think it's only suited for servers. Perhaps I'll need to keep spreading the word a little more!</p>
<p>Or who knows, maybe this can be my first software patch to a public project ever!</p>
<hr />
<p>Have you had trouble using Gajim on FreeBSD lately? How did you get around and fix it? Let me know in the <a href="https://fosstodon.org/@kzimmermann">Fediverse!</a></p>
<hr />
<p>This post is number #43 of my <a href="https://100daystooffload.com/">#100DaysToOffload</a> project.</p>
            </div>
        </content>
    </entry>

    <entry>
        <title>Making OpenArena work again on Ubuntu 20.04 and Linux Mint 20</title>
        <link href="https://tilde.town/~kzimmermann/articles/fixing_openarena_ubuntu.html" />
        <updated>2022-01-15T23:26:04.566717Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>Making OpenArena work again on Ubuntu 20.04 and Linux Mint 20</h1>
<p>Although I don't consider myself by any means a true "gamer," I know how to appreciate some older classics, especially if they are also Free Software. Case in point, <a href="https://diode.zone/c/kzimmermann_podcast">my Peertube Channel</a> is mostly gaming footage, which includes one of my all-time favorites: <a href="https://diode.zone/w/x94tVPScwRESYQ243geiUc">OpenArena</a>.</p>
<p>A fast-paced 3D first-person shooter based on the Quake 3 engine, OpenArena offers plenty of fun, even on machines with modest specs (I've played it successfully even on my 2006 machine). Single player has both a campaign (think Street Fighter's progressive gameplay, but shooting) and bots, and you can join a server to play against other people for a bigger challenge.</p>
<p>And amid such goodness, I was surprised to find out recently that my install of it on Linux Mint would fail to launch for some reason. It was silent - no flash of a black screen or anything - so I opened it from the terminal just to check it out. Quite a few error messages were spewed, but the most pressing one went like this:</p>
<pre><code>(...)
Loading vm file vm/ui.qvm...
File "vm/ui.qvm" found in "/usr/lib/openarena/baseoa/pak6-patch088.pk3"
...which has vmMagic VM_MAGIC_USE_NATIVE.
... trying pak6-patch088/ui
Loading DLL file /usr/lib/openarena/baseoa/pak6-patch088/uix86_64.so instead.
Loading DLL file: /usr/lib/openarena/baseoa/pak6-patch088/uix86_64.so
Sys_LoadGameDll(/usr/lib/openarena/baseoa/pak6-patch088/uix86_64.so) failed:
"Failed loading /usr/lib/openarena/baseoa/pak6-patch088/uix86_64.so: /usr/lib/openarena/baseoa/pak6-patch088/uix86_64.so: undefined symbol: __atan2_finite"
Failed to load DLL /usr/lib/openarena/baseoa/pak6-patch088/uix86_64.so.
----- Client Shutdown (Client fatal crashed: VM_Create on UI failed) -----
RE_Shutdown( 1 )
------- FBO_Shutdown -------
------- R_ShutdownVaos -------
------- GLSL_ShutdownGPUShaders -------
Hunk_Clear: reset the hunk ok
OpenAL capture device closed.
-----------------------
VM_Create on UI failed
</code></pre>
<p>OK, not so bad: it's not an issue with the drivers or 3D renderers on this machine (which is quite weak on the GPU). But what is up with these libraries failing to load? They are the very ones the package manager shipped!</p>
<p>After trying some things like the video "safe mode" and others, I decided to jump in and search around for a solution.</p>
<h2>The Fix</h2>
<p>My searching skills soon landed me on the package description page for OpenArena under Linux Mint, <a href="https://community.linuxmint.com/software/view/openarena">right here</a>, where one of the comments had a set of steps that fixes the issue.</p>
<p>Long story short: the libs provided by the repository are for some reason out of date with respect to the executable shipped by that same repo, so they can't be loaded properly, and this causes the crash before loading. To fix it, you must compile the latest versions of the game's libraries yourself (gotta love Free Software and Linux, right? <code>:)</code>) and manually replace the old ones with them.</p>
<p>Here are the steps to fix it:</p>
<p>Clone the <a href="https://github.com/OpenArena/gamecode">official OpenArena repo</a> containing the core game code:</p>
<pre><code>mkdir openarena
cd openarena
git clone https://github.com/OpenArena/gamecode
cd gamecode
</code></pre>
<p>If you haven't got them already, install the required tools to compile programs from source (a standard Ubuntu/Mint/Debian install usually doesn't have them):</p>
<pre><code>sudo apt install build-essential
</code></pre>
<p>Build the latest version of the libraries. The build is surprisingly small and took less than five minutes here:</p>
<pre><code>make
</code></pre>
<p>Overwrite the existing old libraries of OpenArena with these ones you've just compiled yourself:</p>
<pre><code>sudo cp build/release-linux-x86_64/oax/*.so /usr/lib/openarena/baseoa/pak6-patch088
</code></pre>
<p>Now run <code>openarena</code> and watch the initial screen load as expected. Happy gameplay!</p>
<h2>Conclusion</h2>
<p>I feel it's nice to go "hands-on" with Linux every now and then. I've heard countless complaints along the lines of <em>"Linux will never make it into the mainstream because no one wants to fix things from a terminal!"</em> and, as usual, I just go <em>pfft</em>...</p>
<p>The imperfections of this system are its charm as well, and when I do this sort of maintenance I feel like a gardener tending his crops. It does suck when things don't work, but when you understand why and you can actually turn around and fix them, the satisfaction is enormous.</p>
<p>What do you think about the state of Free Software gaming nowadays, even if it takes a few small hacks to work? Let me know on <a href="https://fosstodon.org/@kzimmermann">Mastodon</a>!</p>
<p>(And by the way, happy late new year!)</p>
<hr />
<p>This post is number #30 of my <a href="https://100daystooffload.com/">#100DaysToOffload</a> project. Follow my progress through <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
            </div>
        </content>
    </entry>

    <entry>
        <title>Frankensteining a Salvaged Laptop to boost another (more Dumpster Diving shenanigans)</title>
        <link href="https://tilde.town/~kzimmermann/articles/frankensteining_salvaged_laptop.html" />
        <updated>2021-07-28T12:14:44.183060Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>Frankensteining a Salvaged Laptop to boost another (more Dumpster Diving shenanigans)</h1>
<p>About a month ago, one of my ordinary <a href="https://tilde.town/~kzimmermann/articles/dumpster_diving_hacker.html">dumpster diving</a> adventures turned quite interesting with a discovery equivalent to striking gold while digging for water: <em>a laptop, complete with all its parts, lying bare in the trash.</em></p>
<p><img alt="laptop" src="/~kzimmermann/images/laptop_trash.jpg" /></p>
<p>Compared to my more regular findings of discarded parts, cables and maybe the occasional WiFi AP, finding a computer like this is downright hitting the jackpot, so you can imagine how I felt uncovering this one. Upon closer inspection, it turned out to be a <a href="https://www.lenovo.com/gb/en/laptops/lenovo/s-series/s310/">Lenovo IdeaPad S310</a>, a consumer-grade laptop built sometime in 2014-15 that appears to favor looks over the hardware specifications themselves.</p>
<p>The design seems to borrow from the contemporary early Chromebooks, aiming for a super-portable yet usable laptop for the casual internet surfer and at-home movie watcher. It came with a Korean keyboard, which is not a problem since its layout is almost identical to the US 105-key one, but it follows that annoying trend of swapping the F-keys for desktop functions like volume and screen brightness, delegating the actual F-keys to a Fn+key combination, much to the chagrin of a Linux power user.</p>
<figure>
    <img src="/~kzimmermann/images/laptop_trash2.jpg" alt="details of the laptop keyboard" />
    <figcaption>
        This laptop follows the silly trend of replacing the actual F-keys with other default behaviors. So if you need them, you have to press two keys instead of one!
    </figcaption>
</figure>

<p>Still, low specs should <em>never</em> be a reason to dismiss a computer in my book. This was evidenced, for example, in my recent adventure <a href="https://tilde.town/~kzimmermann/articles/old_pc_new_tricks.html">bringing a 2010, single-core Celeron laptop back into business usage</a>. And neither should its flashy pink color, because, although not my preferred color, dammit, that machine is <em>beautiful!</em></p>
<p>So I prepared a live medium of Artix on my trusty USB stick, and got to work freeing that machine up.</p>
<h2>When not even free software can fix your computer</h2>
<p>Three minutes into the business, however, I ran into a wall: the monitor won't turn on. LEDs all over the laptop still blinked, and the fan spun as well, so I knew the machine wasn't dead, but that monitor wouldn't so much as flinch during the boot process. Oops?</p>
<p>Alright, I still have an external monitor and could simply leave it tethered to it permanently, <a href="https://tilde.town/~kzimmermann/articles/laptop_buying_tips.html">turning it essentially into a desktop</a>. But at that point, I started to question my own tendency to accumulate stuff: I already have a "desktop-like" laptop hooked up on my desk - do I really need another one there? In fact, there are only so many free hours in my day to use a computer - would I even be able to use both machines in a day and extract value from having another one?</p>
<p>In the end, I convinced myself that the minimalist's way was the logical thing to do, and that it was best to return the machine to the trash, since my efforts could not salvage it - hey, had I known a little more about electronics and owned a soldering iron, things could've turned out more interesting! But before I did so, one thought kept tingling in the back of my mind: was there really nothing else I could do with this guy?</p>
<p>And then I considered the parts.</p>
<h2>Salvaging the remains of a moribund laptop</h2>
<p>Alright, so the hard drive seemed to still be usable - I'd have to test it elsewhere to find out, but it would be worth a shot, since I wasn't keeping the laptop anyway. But with everything else pretty much integrated, was there anything else worth saving? I wasn't feeling brave enough to try to pry away the chipset, wireless card or fan.</p>
<p>However, there was still the RAM.</p>
<p>Come to think of it, this fella came with 4GB in a single channel, using DDR3 technology that still seems abundant in the wild. And just as I was wondering what I could do with that spare RAM stick, my thoughts turned to yet another machine I had salvaged months earlier from the same trash: the Lenovo Celeron laptop I used when <a href="https://tilde.town/~kzimmermann/articles/learning_freebsd_as_linux_user.html">first trying out FreeBSD</a>. The two are also from a similar age range. Could their RAM be compatible?</p>
<p>According to the <a href="https://www.lenovo.com/gb/en/laptops/lenovo/s-series/s310/">official Lenovo documentation</a>, <strong>yes they are.</strong></p>
<p>That's it: I'm going to Frankenstein this broken laptop into my currently weak one by buffing it up with extra memory. That would bring it to a whopping 8GB, enough to do all my intended work there with very little need for swapping. The first step is to separate the lid from the rest of the laptop. Usually this means removing some screws from the back, but with this computer having a completely flat back, I couldn't find anything to remove.</p>
<p>Thankfully, Lenovo provides a very good <a href="https://download.lenovo.com/consumer/mobiles_pub/ideapad_s310_hmm.pdf">end-user manual</a> describing how to disassemble pretty much the entire thing, which helped a lot. It turns out the screws are hidden beneath the rubber "feet" of the laptop, which you have to pry out first. Unsurprisingly, when you do so, you're faced with those classic "VOID IF REMOVED" stickers discouraging user repair, but the machine was broken already anyway, so fuck it.</p>
<figure>
    <img src="/~kzimmermann/images/screw_detail.jpg" alt="opening the rubber feet of the laptop" />
    <figcaption>
        To remove the screws holding the back together, you'll first need to pry off the laptop's rubber feet. A clever but annoying obfuscation trick.
    </figcaption>
</figure>

<p>After this, the rest of the process was pretty easy: find the HDD casing, unscrew it, detach it, and remove the drive from the casing - that's it. For the memory, it was a matter of releasing the stick from the prongs holding it and sliding it out of the bay. Parts salvaged successfully!</p>
<figure>
    <img src="/~kzimmermann/images/disassembled.jpg" alt="parts of the computer disassembled" />
    <figcaption>
        Everything ready to be transferred to the new host.
    </figcaption>
</figure>

<h2>More power to the old computer</h2>
<p>The "transplant" itself was pretty easy. The only notable issue was that since my original laptop was running a 32-bit version of <a href="https://tilde.town/~kzimmermann/articles/alpine_linux_desktop.html">Alpine Linux</a>, it wouldn't detect the extra RAM I had added, hence my need to reinstall it as 64-bit. This also took care of <a href="https://tilde.town/~kzimmermann/articles/graphical_desktop_alpine_3.14.html">updating it to the latest version, 3.14</a>, which also taught me a few things about how Alpine had changed between these versions.</p>
<p>My Alpine laptop now runs comfortably on 8 gigs for an already pretty lightweight operating system, which makes it really fly in comparison to other systems I've used in the past. This RAM boost also raises an interesting point: memory-wise, this machine is now on par with the contemporary machines I have at work, and even those available new in stores. In other words, it's now a keeper machine, not just an experimental tester.</p>
<p>It's really exciting to have essentially obtained a modern machine completely for free from parts recovered from the dumpster. A point for the environment: one less computer discarded and one less computer bought from a store. A point for my finances, for not spending a single dime on hardware. And a point for Linux again, for enabling everything to work smoothly.</p>
<hr />
<p>Have you ever "frankensteined" a machine for parts and upgrades into another one? How did it work out? Did you use an older machine of your own, or a relative's? Let me know on <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
<hr />
<p>This post is number #23 of my <a href="https://100daystooffload.com/">#100DaysToOffload</a> project. Follow my progress through <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
            </div>
        </content>
    </entry>

    <entry>
        <title>FreeBSD on the Desktop: the Saga continues</title>
        <link href="https://tilde.town/~kzimmermann/articles/freebsd_desktop_part_2.html" />
        <updated>2021-04-09T09:58:19.811228Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>FreeBSD on the Desktop: the Saga continues</h1>
<p>Just over a month ago, I made a decision that changed my computing approach pretty much forever - <a href="https://tilde.town/~kzimmermann/articles/learning_freebsd_as_linux_user.html">I started using FreeBSD</a>. At the time, FreeBSD was only a small curiosity of mine, a little side project I wished to engage in to broaden my knowledge of different OSes, and I had no real intention of switching or leaving GNU/Linux at all.</p>
<p>Fast forward to today: I've been using FreeBSD as my daily driver for a few weeks already, and my experience with it as a lightweight, highly customizable OS has been quite amazing - as long as you do the homework. And though I don't think I'm quite ready to switch completely from GNU/Linux to FreeBSD yet, I believe that could happen within the rest of the year.</p>
<p>Following my previous post on beginning FreeBSD, I decided to explore it further and properly take it from a command-line hobby and <strong>turn it into a Desktop OS</strong>, with as much usability as a modern Linux distro can provide. It was a small challenge even with the presence of great documentation in the manuals, the Handbook and even community assistance - not to mention that it's still not 100% complete to my taste. However, after all this effort I can say today that my machine is entirely usable as a desktop, and I am not afraid to daily-drive it anymore - except perhaps for the machine's limited RAM, as I <a href="https://tilde.town/~kzimmermann/articles/dumpster_diving_hacker.html">got it for free from the trash</a>.</p>
<p>In this post, I'll share the steps I took to make this conversion and my insights on how usable FreeBSD can be on the desktop. Let's go.</p>
<h2>My requirements for a modern desktop OS</h2>
<p>Although a heavy <a href="https://tilde.town/~kzimmermann/articles/living_in_linux_terminal.html">command-line aficionado</a>, I must admit that staring at and living in the terminal all the time can be a little tiring, if not outright boring. Sorry, terminal lovers, but a modern and complete desktop <em>requires</em> a graphical environment, as well as a few other things. So when I set out on this project, I had the following requirements in mind:</p>
<ul>
<li>Have a Graphical Environment available, full-DE or not.</li>
<li>Be able to manage power and sessions (logging in and out, screen locking, suspending, etc.)</li>
<li>Be able to manage multiple types of connections, especially WiFi</li>
<li>Manage and install software with a good deal of control</li>
</ul>
<figure>
    <img src="http://peppertop.com/elvie/wp-content/uploads/2014/08/Elvie_003_en-GB.jpg" alt="elvie comic #3" />
    <figcaption>
        If anything, a GUI allows you to run both text-mode and graphical applications...
    </figcaption>
</figure>

<p>I must say that not everything on this list is 100% implemented on my machine yet, but it's enough to satisfy my usage at this point. Let's see how we can enable these after a FreeBSD installation.</p>
<h2>Getting the graphical environment ready</h2>
<p>True to my fondness for the console, I don't need a full Desktop Environment, and would rather have a minimalistic window manager do the job instead, saving precious RAM and CPU cycles.</p>
<p>My original choice of window manager was <a href="https://awesomewm.org/">awesome-wm</a>, but I eventually decided to try something different, going back to floating window managers, and finally settled on good ol' <a href="http://fluxbox.org/">FluxBox</a>. They share a common requirement, though: Xorg.</p>
<p>Installing Xorg is easy; just run:</p>
<pre><code>pkg install xorg
</code></pre>
<p>And the process is quick and simple, pulling in all dependencies and lasting only a couple of minutes. Configuring Xorg to work with your system, though, might require a little more work. As my machine was an Intel laptop, I had it "easier" due to the integrated graphics GPU being almost universal in such laptops, and the recent addition of DRM (Direct Rendering Manager, <em>not</em> to be confused with the <a href="https://tilde.town/~kzimmermann/articles/drm_or_piracy.html">other type of DRM</a>) to FreeBSD. Activate it by installing <code>drm-kmod</code> with <code>pkg</code> and adding this line to <code>/etc/rc.conf</code>:</p>
<pre><code>kld_list="/boot/modules/i915kms.ko"
</code></pre>
<p>Reboot and watch the screen flash briefly as the kernel module is loaded during the process. Test that it works by running <code>startx</code>: if you see a very simple window manager (twm, actually) and graphical applications load, Xorg is working correctly. I hear that desktop users with more advanced GPUs might have a bit more of a headache configuring them correctly, including editing some Xorg subconfiguration files, but magically, I was spared from this.</p>
<p>Launching a minimalist window manager from the console is also pretty easy: copy the file <code>/usr/local/etc/X11/xinit/xinitrc</code> to <code>$HOME/.xinitrc</code> and change the lines concerning <code>twm</code> execution to your window manager. Using FluxBox, for example, I only have to add:</p>
<pre><code>fluxbox
</code></pre>
<p>to <code>.xinitrc</code>, and the FluxBox init script takes care of the rest. Run <code>startx</code> again, and see that it loads correctly with your window manager of choice.</p>
<p>After this step, pretty much everything works like in a graphical Linux environment, with perhaps the only other missing component being sound. Sometimes FreeBSD will guess wrongly which output your sound is supposed to come out of, and you have to direct it manually to the right one. For example, if you're not getting sound from the 3.5mm audio jack, <a href="https://tilde.town/~kzimmermann/updates/20210302_0613.html">check where the sound is being sent</a> first:</p>
<pre><code>sysctl hw.snd.default_unit
</code></pre>
<p>If it comes out as <code>0</code> and isn't working, try setting it to <code>1</code>, or vice-versa (as root). I find that hardcoding it to one or the other doesn't work every time, but thankfully, once you check and set it correctly, it persists until the next boot. Clearly not as consistent as Linux here, perhaps, but still very much usable, and not an issue at all.</p>
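<p>Setting it uses sysctl's <code>name=value</code> form. A minimal sketch, assuming unit <code>1</code> is the one you want (run as root):</p>
<pre><code># pick the unit your sound should go to: usually 0 or 1
UNIT=1

# sysctl's name=value form sets it; needs root, FreeBSD only
if [ "$(uname)" = "FreeBSD" ]; then
    sysctl hw.snd.default_unit="$UNIT"
fi
</code></pre>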
<h2>Power and session management</h2>
<p>Suspending and resuming the machine was a somewhat worrying question for me which, thankfully, turned out to be quite straightforward. I don't know why, when you <a href="https://search.mdosch.de/search?q=how+to+suspend+freebsd+laptop">search for "How to suspend FreeBSD" online</a>, the results give you the impression that this operation has shaky support. Even the <a href="https://wiki.freebsd.org/SuspendResume">official documentation</a> doesn't sound too confident about the procedure, but maybe it's just the wording they use.</p>
<p>I don't know if it was just me again being lucky with my hardware, but the solution is simple:</p>
<pre><code>acpiconf -s 3 # as root
</code></pre>
<p>Or, even simpler, a real no-brainer:</p>
<pre><code>zzz # as root.
</code></pre>
<p>If you wish to <em>hibernate</em> the session to disk (and safely run out of power, resuming afterwards), change <code>-s 3</code> to <code>-s 4</code>. This is also a good time to configure your user permissions with <code>sudo</code>: I grant myself passwordless permission for <code>acpiconf</code> because I'm the only user and want to suspend quickly. There should be something similar for <code>doas</code>, but I don't use it.</p>
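<p>For reference, here's what that rule looks like in <code>sudoers</code> (edit it with <code>visudo</code>; the username <code>klaus</code> is just a placeholder for your own):</p>
<pre><code># allow this user to run acpiconf as root without a password
klaus ALL=(root) NOPASSWD: /usr/sbin/acpiconf
</code></pre>
<p>With that in place, <code>sudo acpiconf -s 3</code> suspends without a password prompt.</p>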
<p>To change the brightness of this laptop's display, I installed the package <code>intel-backlight</code>, after which I could adjust it by issuing:</p>
<pre><code>intel_backlight X
</code></pre>
<p>where X is a number between 0 and 100. Again, this is not as convenient as Linux's integration of the laptop's brightness keys, but I'm sure a script could be written to handle progressive increases and decreases, combined with the appropriate keybinding. As it is, though, it's fine by me.</p>
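<p>As a proof of concept, here is a minimal sketch of such a script. It assumes <code>intel_backlight</code> takes a 0-100 value as above, and keeps the last level in a state file since querying the current one isn't always straightforward:</p>
<pre><code>#!/bin/sh
# backlight.sh - step the backlight up or down in 10% increments

STATE="$HOME/.backlight_level"
STEP=10

clamp() {
    # keep the value within intel_backlight's accepted 0-100 range
    if [ "$1" -gt 100 ]; then echo 100
    elif [ "$1" -lt 0 ]; then echo 0
    else echo "$1"
    fi
}

next_level() {
    # compute the new level from the stored one and a direction (up/down)
    cur=50
    [ -f "$STATE" ] &amp;&amp; cur=$(cat "$STATE")
    case "$1" in
        up)   clamp $((cur + STEP)) ;;
        down) clamp $((cur - STEP)) ;;
    esac
}

if [ $# -gt 0 ]; then
    level=$(next_level "$1")
    intel_backlight "$level" &amp;&amp; echo "$level" &gt; "$STATE"
fi
</code></pre>
<p>Bind <code>backlight.sh up</code> and <code>backlight.sh down</code> to your brightness keys in your window manager, and you're set.</p>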
<p>Lastly, how do I lock the session when I'm away? I don't use a display manager like SLiM or LightDM to start the session, so I went raw again and used xscreensaver. I bind Meta+L as a locking combination a la Windows, and it does the job.</p>
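<p>For the curious, in FluxBox that binding is a single line in <code>~/.fluxbox/keys</code> (Mod4 being the Meta/Super key):</p>
<pre><code># lock the screen with Meta+L
Mod4 l :Exec xscreensaver-command -lock
</code></pre>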
<p>You might have noticed that <em>none</em> of these are as well-integrated as in a desktop Linux distribution yet (for example: where's the power manager to respond to lid events?), but it's usable and still practical. I'm sure that if I did a little more homework I'd be able to figure it out, but for me it's fine as it is right now. Other things, however, are not so fine. For example...</p>
<h2>Dealing with WiFi</h2>
<p>This is a biggie. And a potential dealbreaker for many new users.</p>
<p>Upon install, the FreeBSD installer attempts to detect and configure one network interface for you, and it can be your laptop's wifi (so long as drivers exist). I may have been lucky, but mine was autodetected, the wifi "magically" worked, and it has kept working on every boot.</p>
<p>Behind the scenes, however, there is no magic. Clearly, FreeBSD is doing <em>something</em> to get the wifi up upon boot, and we will have to do the same thing if we're connecting to a different wifi network. And we will because this is a laptop, not a server.</p>
<p>Is there a GUI tool that manages this? <a href="https://www.freshports.org/net-mgmt/networkmgr/">You bet</a>, but for some reason it didn't work for me. So I had to go raw again.</p>
<p>Turns out that in both FreeBSD and Linux alike, the de facto way to get connected to WiFi with WPA security is to use the <code>wpa_supplicant</code> command. The process is as follows:</p>
<p>If you know the ESSID (name) of your WiFi network, generate a hashed passphrase with <code>wpa_passphrase</code> command:</p>
<pre><code>wpa_passphrase ESSID_of_network passphrase_of_network
</code></pre>
<p>This command prints out something like this:</p>
<pre><code>network={
    ssid="john"
    #psk="password12345"
    psk=ae04ab09fe8c8a1bdc9a9fb9d41611a16fe50053c379e1bd759231a324f739a3
}
</code></pre>
<p>Add this bit to the file <code>/etc/wpa_supplicant.conf</code> (you can safely remove the commented plaintext passphrase from it) and save. Then terminate any <code>wpa_supplicant</code> process that might be running, as root:</p>
<pre><code>killall wpa_supplicant
</code></pre>
<p>Now restart it with this command:</p>
<pre><code>wpa_supplicant -B -Dbsd -iwlan0 -c /etc/wpa_supplicant.conf
</code></pre>
<p>The <code>-Dbsd</code> flag specifies which driver (<code>bsd</code> in this case) should be used. My machine worked fine with the BSD one, but if you wish to use another, see the options for your machine listed by <code>wpa_supplicant -h</code>.</p>
<p>You have now made the connection, but might not have an IP address yet (and thus no way to send or receive packets). To fix this, run dhclient to acquire a new lease:</p>
<pre><code>dhclient wlan0
</code></pre>
<p>A big problem I ran into here was that, even though I had received an IP and was able to ping the router at this point, I could not access hosts outside the LAN. There were no routes defined. I'm still not 100% sure on how to fix this, but this is what worked for me so far:</p>
<pre><code>route change default &lt;the default gateway&gt;
</code></pre>
<p>This will change the routing table to point at your default gateway (the router), which should know what to do with the packets. And if this doesn't do the trick, killing and restarting <code>wpa_supplicant</code> once again does (go figure).</p>
<p>This <em>is</em> a big pain point, and I feel it could be done better in FreeBSD. The steps are probably scriptable, yes, but why not make them into a graphical, systray-embedded application (that works)? I'm sure that many new users would be discouraged at this point.</p>
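<p>For what it's worth, the manual steps above can be stitched into a rough script. A sketch only - it assumes the <code>wlan0</code> interface, the <code>bsd</code> driver, and that you run it as root:</p>
<pre><code>#!/bin/sh
# wifi-join.sh - join a WPA network by tying the steps above together
# usage (as root): wifi-join.sh ESSID passphrase

join_wifi() {
    essid=$1; pass=$2; iface=${3:-wlan0}

    # append the hashed credentials, dropping the plaintext #psk line
    wpa_passphrase "$essid" "$pass" | grep -v '#psk' &gt;&gt; /etc/wpa_supplicant.conf

    # restart wpa_supplicant with the new configuration
    killall wpa_supplicant 2&gt;/dev/null
    wpa_supplicant -B -Dbsd -i "$iface" -c /etc/wpa_supplicant.conf

    # acquire a DHCP lease
    dhclient "$iface"

    # if hosts outside the LAN are still unreachable, re-point the route:
    # route change default &lt;the default gateway&gt;
}

if [ $# -ge 2 ]; then
    join_wifi "$@"
fi
</code></pre>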
<h2>Managing software</h2>
<p>Ports are interesting, offer much flexibility, and are sometimes the only way to obtain certain functionality in FreeBSD, like MP3 support in ffmpeg. Packages, however, are much faster, more practical, and easier to upgrade.</p>
<p>Given that in most cases the default behavior of the software is absolutely fine by me, I chose to just use packages instead, and it's been working fine. So if you come from a Debian/Ubuntu background, look into pkg(8); you'll find it pretty familiar.</p>
<h2>Conclusion so far</h2>
<p>FreeBSD can be, and is, a solid OS for desktop usage - GUI or not. However, it does require you to <em>do the homework</em>.</p>
<p>After just over a month and a half of usage, though, discovering FreeBSD has finally stopped being "homework" and started becoming a joy of its own. It's efficient, lightweight and, perhaps more importantly, does exactly as it's told. I'm no longer "fearing" it; there's probably a lot more that I haven't learned and will face in the future, but I feel I'll be more prepared when I do.</p>
<p>As I learn more stuff, I'll be sharing it here as more snippets. Be sure to subscribe to the <a href="https://tilde.town/~kzimmermann/articles/updates.xml">Updates feed</a> so that you get them as I post here!</p>
<p>And also, I couldn't help but revisit that previous meme:</p>
<p><img alt="Meme revisited" src="https://tilde.town/~kzimmermann/images/freebsd_revisited.jpg" /></p>
<p>Do you use FreeBSD on the desktop? How do you work around the issues pointed out here? Let me know on Mastodon!</p>
<hr />
<p>This post is number #11 of my <a href="https://100daystooffload.com/">#100DaysToOffload</a> project. Follow my progress through <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
            </div>
        </content>
    </entry>

    <entry>
        <title>Enabling Fullscreen on a FreeBSD guest VM</title>
        <link href="https://tilde.town/~kzimmermann/articles/fullscreen_freebsd_guest_virtualbox.html" />
        <updated>2021-04-05T03:21:38.766484Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>Enabling Fullscreen on a FreeBSD guest VM</h1>
<p>Perhaps not much of a <a href="/~kzimmermann/articles/updates/20210318_1251.html">"learn FreeBSD one command at a time"</a> this time, but still useful for those who wish to start learning FreeBSD without a dedicated machine.</p>
<p>Once you install FreeBSD as a guest VM in VirtualBox, you might notice that you are limited to a 1024x768 resolution once you go graphical, which is sort of annoying and loses that feeling of desktop integration. If you search for a solution, many posts and sites appear to offer one, but as I've found out, many are wrong - including, this time, the <a href="https://docs.freebsd.org/en_US.ISO8859-1/books/handbook/virtualization-guest-virtualbox.html">FreeBSD Handbook</a>.</p>
<p>I thought that solving this would involve going low-level and fiddling with several config files, but it turns out the solution was much more low-tech, and I think it applies to pretty much any guest-host combination. Here it is:</p>
<ol>
<li>On the <em>Guest</em> machine, install the <code>virtualbox-ose-additions</code> package. This is the package name under FreeBSD; for other OSes, look for something like "VirtualBox Guest Additions."</li>
<li>On the FreeBSD guest, add <code>vboxguest_enable="YES"</code> and <code>vboxservice_enable="YES"</code> to <code>/etc/rc.conf</code>. Other OSes might need to have the service enabled another way.</li>
<li>Power down the guest machine.</li>
<li>On the <em>Host</em>, open the VirtualBox Manager, set the Graphics Controller to <code>VBoxSVGA</code>, and enable 3D acceleration. <em>Be careful:</em> VirtualBox might complain that the settings aren't compatible and reset them to the default after you close the window. If this happens, set it from the "outside" without opening the wizard (this worked for me), or from the command line.</li>
<li>Power on the guest and change your graphical settings with <code>xrandr</code>.</li>
</ol>
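<p>For the record, the command-line route for step 4 looks something like this on the host (the VM name <code>freebsd-guest</code> is just an example; run it while the VM is powered off):</p>
<pre><code># set the graphics controller and 3D acceleration from the host shell
VM="freebsd-guest"
if command -v VBoxManage &gt;/dev/null 2&gt;&amp;1; then
    VBoxManage modifyvm "$VM" --graphicscontroller vboxsvga
    VBoxManage modifyvm "$VM" --accelerate3d on
fi
</code></pre>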
<figure>
    <img src="/~kzimmermann/images/vbox_setting_gui.png" alt="where to change the graphics settings in vbox" />
    <figcaption>
        Instead of opening the wizard and configuring it, click this setting and change it straight from here. Otherwise VirtualBox overwrites it with the default.
    </figcaption>
</figure>

<p>There you go. No need to manually edit <code>xorg.conf</code> on the guest, or install DKMS drivers as so many forum posts and other documentation insisted. This simple, low-tech solution worked perfectly after almost a whole weekend of beating around the bush...</p>
<p>Have you ever tried using a fullscreen session on a FreeBSD VM before? How did you enable it? Share with me at <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
<hr />
<p>This post is number #9 of my <a href="https://100daystooffload.com/">#100DaysToOffload</a> project. Follow my progress through <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
            </div>
        </content>
    </entry>

    <entry>
        <title>Playing free software Doom with FreeBSD and Alpine Linux</title>
        <link href="https://tilde.town/~kzimmermann/articles/getting_doom_right_alpine_freebsd.html" />
        <updated>2022-08-21T20:39:35.763537Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>Playing free software Doom with FreeBSD and Alpine Linux</h1>
<p>To say that <a href="https://en.wikipedia.org/wiki/Doom_(franchise)">Doom</a> has a lasting legacy in the hacker and gaming communities is a huge understatement. id Software released the source code for its game engine in 1997, and from that point on, game engines like <a href="https://www.chocolate-doom.org/">Chocolate Doom</a> and <a href="https://www.zdoom.org/">ZDoom</a> and an infinity of mods have arisen.</p>
<p>The hacker and modding community has kept the 1993 game alive and thriving - true to the model of Free Software. There is even a quite impressive and ambitious newer engine project called <a href="https://zandronum.com/">Zandronum</a>, natively and heavily optimized for Quake-style multiplayer, which shows how to really stretch the limits of a game engine - while still keeping it old school and quite lightweight!</p>
<figure>
    <figcaption>
        Yup, that's exactly what you're thinking. One of the craziest Doom mods around: <a href="https://cutstuff.net/mm8bdm/">MegaMan 8-bit Deathmatch</a>. Can it still even be considered "Doom" at all?
    </figcaption>
</figure>

<p>Yet, even with all this greatness, I found myself struggling a little to get the game to work - first when I picked up FreeBSD again for a spin this week, and again with Alpine Linux.</p>
<p>The problem is that, unlike other "self-contained" games such as <a href="https://diode.zone/w/fwgwj2xnz6nhjW5GM2GVXP">AssaultCube Reloaded</a> or <a href="https://diode.zone/w/x94tVPScwRESYQ243geiUc">OpenArena</a>, where the moment you install the game you are good to run it, playing Doom or its mods is a two-part affair:</p>
<ol>
<li>First, decide on and install an <em>engine</em> - that is, the software that will get the game logic and rules running - then:</li>
<li>Acquire <em>data files containing the game's content</em>, also known as a <a href="https://doom.fandom.com/wiki/WAD">WAD file</a>.</li>
</ol>
<p>This confuses beginners who have never seen how Doom mods work. Because of the several possible ways to perform both (1) and (2) above, no single package can truly fulfill the goal, thus requiring some user intervention to make everything work. This, at least for me, was a little painful to figure out because of a chain of small difficulties adding up, including but not limited to:</p>
<ul>
<li>When you search, forum posts are mostly centered on development rather than troubleshooting.</li>
<li>Chat support for most projects being hosted on <a href="https://tilde.town/~kzimmermann/articles/walled_garden_problems.html">Discord rather than IRC</a>.</li>
<li>Error messages from the engines are sometimes unclear about what went wrong, and sometimes they fail silently.</li>
<li>Incompatibility between engine and WAD files (1 and 2 above).</li>
</ul>
<p>After butting heads with a wonderful and modern engine - ZDoom - I'll show you the step-by-step that worked for me to get the classic <a href="https://freedoom.github.io">FreeDoom WAD</a> up and running.</p>
<h2>Getting an engine that works for you</h2>
<p>For starters, there are quite a few Doom engines available for you to install - even different ZDoom engines as well! For example, this is FreeBSD 13.1-RELEASE's repo:</p>
<pre><code>% pkg search doom
chocolate-doom-3.0.1           Doom/Heretic/Hexen/Strife engine port compatible with the originals
crispy-doom-5.10.3             Enhanced-resolution Doom source port based on Chocolate Doom
doom-data-1.0_1                Shareware data files for Doom, Doom II, Hexen, Heretic, and Strife
doom-freedoom-0.12.1           Complete Doom-based game IWAD that is Free Software
doom-hacx-1.0                  Full TC using the Doom II engine
doom-hr-1.0_1                  Hell Revealed is a megawad, a 32-level replacement for DooM II
doom-hr2-1.0                   Hell Revealed II is a megawad, a 32-level replacement for DooM II
doom-wolfendoom-1.0            Wolfenstein 3D levels ported to Doom II
doomlegacy-1.48.8_1,1          Improved and extended version of Doom
doomsday-2.3.1_4               Enhanced Doom, Heretic, and Hexen source port
gzdoom-4.7.1_1                 GL-enhanced source port for Doom-engine games
linux-doom3-1.3.1.1304,1       Doom III for Linux
linux-doom3-demo-1.1.1286_4    DOOM III demo for Linux
zdoom-2.8.1_8                  Source port for Doom-engine games
</code></pre>
<p>With so many packages with similar descriptions, which ones should you use?</p>
<p>The short answer is: the ones that work best for your setup! The long answer requires some testing.</p>
<p>First, not all engines are equal. This is true both in features (classic gameplay or modern FPS-style?) and in software/hardware requirements (OpenGL, SDL, etc.). The result is that sometimes an engine will install but fail to run properly - or not even load correctly! After trying many of the packages above, I settled on the <code>gzdoom</code> engine, which runs correctly with the OpenGL stack in FreeBSD. Thanks to <code>pkg</code>, installation was very simple:</p>
<pre><code># pkg install gzdoom
</code></pre>
<p>(In Alpine, because OpenGL didn't work well, I settled for an Alpine-only <code>lzdoom</code> package, which uses legacy graphics but works just as well.)</p>
<p>Another, more subtle but annoying issue was that even though the engines appear on the surface to depend on the same set of data files (like <code>brightmaps.pk3</code>, <code>game_support.pk3</code>, etc.), some of them are engine-specific and vary slightly from one engine package to another despite having the same name! That variation is enough to cause crashes, so my recommendation is this: when switching Doom engines, <em>remove the previous one completely</em> with <code>pkg remove</code> before installing the other. Thankfully, the install size isn't large, so the process is fast.</p>
<p>Then a second problem came up: the package in FreeBSD pulls in a <code>doom-data</code> package as a dependency, containing WADs for the original Doom Shareware version from way back. And that Shareware game is annoyingly <em>played by default</em> every time you choose to play - even if you install a WAD like FreeDoom afterwards.</p>
<p>Time for more head-butting.</p>
<h2>Installing (and finding) the WAD files</h2>
<figure>
    <img src="https://static.doomworld.com/monthly_2018_06/stfouch0-corrected-after.png.1c5bffb88d61764f0ec092ccef366530.png" alt="the freedoom guy's (player's face) ouch face" />
    <figcaption>
        A first-timer's face when trying to make WADs play nicely together with Doom engines.
    </figcaption>
</figure>

<p>GZDoom's error messages were very unhelpful in debugging why-oh-why the engine couldn't locate the <code>freedoom1.wad</code> file and use it instead of the original copyrighted shareware. Searching around for documentation was not very fruitful either (dev stuff, not configuration or troubleshooting).</p>
<p>Eventually, this is what worked for me:</p>
<ol>
<li>Download the FreeDOOM WAD files yourself from <a href="https://freedoom.github.io/download.html">the project's website</a>.</li>
<li>Extract the WAD files into a single directory, say, <code>~/.local/share/wads</code>.</li>
<li>Open a shell, go to that directory. Ex: <code>cd ~/.local/share/wads</code>.</li>
<li>Run your Doom engine from there: <code>gzdoom</code></li>
</ol>
<p>Here's the thing: these Doom engines first look for WAD files sitting in the directory they're called from (cluttering places like <code>/usr/local/bin/</code> with WADs next to the binaries), and since the Shareware WAD is right next door, it gets loaded first. If you instead launch from a directory full of WADs, you get to choose - complete with a GUI menu!</p>
<p>So there you go. A script like this <code>doomlauncher.sh</code> can work wonders:</p>
<pre><code>#!/bin/sh

# Change to your WAD repository:
WADS="$HOME/.local/share/wads"   # tilde doesn't expand inside quotes; use $HOME

cd "$WADS" || exit 1
gzdoom
</code></pre>
<p>Call this script instead of gzdoom directly, and happy fragging!</p>
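<p>Alternatively, ZDoom-family engines accept an <code>-iwad</code> flag pointing straight at the WAD you want, skipping the directory dance (the path below assumes you extracted FreeDoom as in step 2):</p>
<pre><code># launch FreeDoom phase 1 directly, without changing directory
WAD="$HOME/.local/share/wads/freedoom1.wad"
if command -v gzdoom &gt;/dev/null 2&gt;&amp;1; then
    gzdoom -iwad "$WAD"
fi
</code></pre>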
<h2>Conclusion</h2>
<p>Whew! Though the separation between engine and content may cause some confusion at first, it does give us a pretty large selection of games to choose from. Plus, there is nothing that some good ol' shell scripting can't solve, especially once you understand what is going on under the hood. FreeBSD is now fun again with more old-school-but-revamped gaming and extensive modding possibilities, and likewise for Alpine!</p>
<p>If you're looking for more mods and WADs to try out, definitely check out <a href="https://doomwiki.org/wiki/Fan-made_Doom_games">the DoomWiki pages on them</a>. Older archives such as <a href="https://www.doomworld.com/">Doomworld</a> are pretty dated, and it's hard to find interesting things there. Also, do try several engines to see which one works best on your machine!</p>
<hr />
<p>Have you tried playing Doom and its derivatives on FreeBSD or Linux? What Engine + WAD stack did you use? Let me know on <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
<hr />
<p>This post is number #34 of my <a href="https://100daystooffload.com/">#100DaysToOffload</a> project. Follow my progress through <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
            </div>
        </content>
    </entry>

    <entry>
        <title>Time for #AskFedi: what should I know about the CCC congress before going?</title>
        <link href="https://tilde.town/~kzimmermann/articles/going_to_37c3_questions.html" />
        <updated>2022-10-17T21:47:02.614735Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>Time for #AskFedi: what should I know about the CCC congress before going?</h1>
<p>Ok Fediverse, here's the thing: as the <a href="https://events.ccc.de/category/37c3/">37C3 seems to be confirmed at this time</a>, I'm thinking about attending it for the first time ever. After all, this is my first time living in Europe, and even if not in Germany, I've never been as close to it as right now. And seeing that this is also their first in-person edition since the pandemic, the timing seems right.</p>
<p>Naturally, I have a lot of questions about how the event is structured that I couldn't exactly find answers to on the CCC website. The best page I could find on the subject is <a href="https://events.ccc.de/congress/2016/wiki/Static:FAQ">this wiki from the 2016 edition (33C3)</a>, which seems to be the latest (or maybe later editions didn't bother doing a wiki again). So, if you've attended a previous CCC event, or are a regular in the business, I'd love to hear your thoughts on the following points:</p>
<ul>
<li>How much do tickets for the event cost? The wiki states they started at 100 EUR, but that was for 2016. Their <a href="https://tickets.events.ccc.de/RC3-21/">ticketing system</a> also hasn't been updated to this year's event as of this writing. I guess they can't make it free, given the need to rent that (seemingly huge) venue.</li>
<li>Do the tickets allow entrance for all 3 days of the event?</li>
<li>Is there a list of booths or spaces for the Free Software projects that participate in the event? I'd love to know who's there beforehand.</li>
<li>How "isolated" is the congress center from the greater city of Hamburg? I'm asking because I plan to travel with my partner, who isn't a big fan of computers or hacking and would probably prefer to sightsee around the area instead. Does anybody familiar with the city know if there are interesting things to do outside the CCH?</li>
<li>Should I bring my laptop with a "clean install" (i.e. a throwaway hard drive) so that if I "catch" anything at the convention, wiping it will be no problem? I mean, I've heard stories of people bringing malware home from DEFCON or Black Hat.</li>
<li>Does a lot of in-person social interaction go on at the congress? Can you get a few contacts to add to your jabber roster afterwards? Or is it more of an "us and our computers" kind of convention?</li>
</ul>
<p>Luckily the wiki has lots of other relatively immutable information (like restaurants, hotels, etc.) that I believe should still hold. But truth is, I have questions about events of this sort in general, since I've almost never been to any. So I guess there's one last, and biggest, question of all for you guys:</p>
<ul>
<li><strong>Are you going to attend the 37C3 and can I meet you there?</strong> <code>;)</code></li>
</ul>
<p>Let me know on <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a> Thank you for any reply and answers regarding these questions. I hope the event goes forward as planned and I can meet you there! </p>
<hr />
<p>This post is number #38 of my <a href="https://100daystooffload.com/">#100DaysToOffload</a> project. Follow my progress through <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
            </div>
        </content>
    </entry>

    <entry>
        <title>Is this app by Google the world's first legal ransomware?</title>
        <link href="https://tilde.town/~kzimmermann/articles/google_app_ransomware.html" />
        <updated>2020-11-16T06:56:05.377200Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>Is this app by Google the world's first legal ransomware?</h1>
<p>A few days ago, an article at the XDA developers website raised lots of eyebrows by claiming that Google had created <a href="https://www.xda-developers.com/google-device-lock-controller-banks-payments/">an app that allowed remote locking and disabling of an Android phone if one of its installment payments failed to go through</a>. </p>
<p>Although quite shocking as a headline, this really should not come as much of a surprise to any of us. After all, financing has always carried its risks, and the repossession of other goods such as a house or a car has long been understood as an acceptable consequence of credit mismanagement. And even on the technology side, there's nothing <em>really</em> new being employed here: "remote management" has been in place for things like Windows PCs since the dawn of the modern office, and even corporate smartphones have some form of remote administration that can brick a device in certain cases.</p>
<p>And yet, that sort of article reads like a campfire horror story. It's terrifying. But why?</p>
<p>My answer is that it appeals to our fear of losing something that we see as so personal, so private, yet so far outside our control. We've seen this before, with yet another Windows problem, in 2016: <a href="https://en.wikipedia.org/wiki/Petya_(malware)"><em>ransomware</em></a>. You wake up one day and turn on your computer only to find that your entire hard drive has been encrypted and your data is now out of reach. Panic ensues as you realize that your data, while so close to you, is simultaneously so far from your grasp.</p>
<p>And likewise, this app by Google delivers a similarly chilling reminder to financed-phone users: "<em>go on, carry your phone with you, use it as much as you want - just remember it's not really yours until you finish paying. And if you don't, I'll remind you again about who really owns it.</em>" Following this definition, I could very confidently say that Google has created <strong>the world's first legal ransomware</strong>. Pay up, or lose your device.</p>
<p><img alt="Petya: class of ransomware that set the backdrop for this entire family of malware with its scary messages" src="https://upload.wikimedia.org/wikipedia/commons/5/58/PetyaA.jpg" /></p>
<p><em>Petya locks up your computer if you don't pay; Google locks your phone. Coincidence?</em></p>
<p>The creepiest part of this model, or "framework," is that it doesn't necessarily have to stop there: if financed phones can be locked up remotely, why not your financed car with "smart devices" built into it? Or your financed house, with a "smart door" that will prevent you from coming inside (or leaving) if you haven't paid this month's installment or rent? </p>
<p>In fact, just scuttle the whole "financed" bit altogether: let's make everything pay-per-use like a giant jukebox. You pay a monthly fee to use your computer, and your TV (even free-to-air) needs a subscription just to be able to turn on. Everything is now a paid service; no more products in the sense of ownership!</p>
<p>So this leaves the question: how can we protect ourselves better from this kind of threat? What does this all teach us?</p>
<p>First and foremost, the same answer as usual: <strong>using free software matters</strong>, and this is especially true depending on the platform you're using. This sort of threat coming from a PC manufacturer, for example, would have been pretty empty and even laughable, as on a desktop we can install whatever software we want quite easily, sometimes going all the way down to the <a href="https://libreboot.org/">bootloader</a>. </p>
<p>Thanks to the amazing <a href="https://fediverse.party">Free Software community</a>, we have a huge ecosystem for this, and we can thrive in complete freedom on our computers. We don't have to cower and run when some bully like Microsoft tries to shut down the PC competition with <a href="https://docs.microsoft.com/en-us/windows-hardware/design/device-experiences/oem-secure-boot">questionable security practices</a>. When it comes down to a smartphone, however, that's when things get shaky.</p>
<p>Attempts to "free Android" (even though it's <em>ahem</em> Open Source) have had mixed success, and some of the more popular ones, like <a href="https://en.wikipedia.org/wiki/CyanogenMod">CyanogenMod</a>, have been discontinued without ever trying to replace everything with a Free Software stack. Money-backed initiatives to bring free software to phones (like Ubuntu Touch) have not succeeded in the long run either. From this sad state of Free Software on mobile, I can only draw one conclusion: if you want computing freedom, <strong>do not use a phone</strong>. You simply <em>cannot expect any consistent software freedom</em> when using a mobile device.</p>
<p>I know this is easier said than done, but it's the truth even in 2020 as I write this. Freedom to choose your platform is important, and thankfully we still have ways to keep using laptops and desktops for almost everything online today, despite so many apocalyptic predictions that the desktop "would die off in a few years" due to smartphone popularity. Besides, smartphones are a privacy and surveillance nightmare, and you're better off without them regardless.</p>
<p>Clearly, the state of affairs in the mobile world is not looking good for us Free Software enthusiasts. However, I for one have no expectation of freedom on that platform, and I'm happy to avoid it as much as I can.</p>
<p>Do you think it is possible to achieve a good level of user-freedom and privacy on a mobile device nowadays? How would you do it? Let me know on <a href="https://fosstodon.org/@kzimmermann">Mastodon</a>!</p>
            </div>
        </content>
    </entry>

    <entry>
        <title>Running graphical applications as root with doas</title>
        <link href="https://tilde.town/~kzimmermann/articles/graphical_applications_doas.html" />
        <updated>2023-11-06T18:08:32.530365Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>Running graphical applications as root with doas</h1>
<p>For the longest time in my <a href="/~kzimmermann/articles/first_starting_linux.html">history using Free Software</a>, I had been using the <code>sudo</code> command to elevate my normal user's privileges. I had learned early on that using <code>root</code> for everything was a "dangerous" venture and that it should only be used sparingly, for specific admin tasks like updating the system and changing system-wide configuration files; <code>sudo</code> had been my go-to tool for that for a good ten years of my Linux time.</p>
<p>This all changed one day when I was reading the <a href="https://wiki.alpinelinux.org/wiki/Release_Notes_for_Alpine_3.15.0#Move_from_sudo_to_doas">Alpine Linux release notes</a> and found out that the project chose to move away from <code>sudo</code> for the more modern and simplified <code>doas</code> tool, originally developed for the OpenBSD operating system. At the time, <a href="/~kzimmermann/articles/alpine_linux_desktop.html">my love for Alpine had already bloomed</a> and I was very excited about this. I even <a href="/~kzimmermann/updates/20230223_2210.html">wrote a brief guide</a> about setting some of the <code>sudo</code> behavior in <code>doas</code> so the transition becomes smooth.</p>
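<p>For reference, the bulk of that transition boils down to a one-line rule in <code>/etc/doas.conf</code>. Here's a minimal sketch, assuming your user is in the <code>wheel</code> group (and note that <code>persist</code>, which caches your password like <code>sudo</code> does, may not be compiled into every port of <code>doas</code>):</p>
<pre><code># /etc/doas.conf: a minimal sudo-like setup
# "persist" remembers your password for a while, like sudo's timestamp
permit persist :wheel
</code></pre>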
<p>Recently, however, I was surprised by one peculiarity of <code>doas</code> that I had not seen before: you cannot, by default, run X11 (graphical) applications as root through it. That was quite a disappointment, and frankly I thought it was a bad limitation of the command (perhaps a sacrifice made for its simplicity?). I even thought about temporarily reinstalling <code>sudo</code>, but I found a workaround that was surprisingly little documented on the web. In this post, I'll share it with you.</p>
<h2>Why are you running GUI stuff as root? Isn't it dangerous?</h2>
<p>First of all, the big objection: sudo and doas were meant for administrators performing administrative tasks, which are almost completely reserved for the command line. Why am I running a <em>graphical</em> application as root, then? Aren't there other, better ways to do what I want through the shell?</p>
<p>The answer is probably yes - but the problem is: I don't know how. My specific case was running <a href="https://gparted.org/">GParted</a>, a graphical disk management utility, because I wanted to resize existing partitions on one of my backup disks. Expert sysadmins will probably scoff at my move and be able to type the commands blindfolded, but I just can't do it from the shell. It's so much easier and more intuitive to see visually how the partitions are laid out and how you can grow or shrink each of them. And GParted is the perfect tool for it.</p>
<p>My case might sound very specific, but there are a few other tools that require some sort of root access to function properly, and for which the GUI is just the right interface. I'm thinking of Wireshark, Synaptic, or even the file manager, if it helps you visualize something better than the shell does. In short: yes, you shouldn't be running random Xorg applications as root, but no, just because you shouldn't do something doesn't mean you shouldn't <em>be able to</em> do it if you needed to. </p>
<h2>The workaround for Xorg: xhost</h2>
<p>I narrowed the cause of the <code>doas</code> errors down to the user environment, which contains the X11 authentication variables, not being carried over to root. Thus root, almighty as it is, ironically didn't have permission to open a window on the Xorg server, despite authenticating correctly!</p>
<p>I found this snippet of <code>doas.conf</code> configuration in the <a href="https://wiki.archlinux.org/title/Doas">Arch Wiki</a> that should've bridged that gap:</p>
<pre><code>permit setenv { XAUTHORITY LANG LC_ALL } :wheel # or your username instead of :wheel
</code></pre>
<p>This would carry over the most important Xorg variable (<code>$XAUTHORITY</code>, which points to the <code>.Xauthority</code> file in your home directory) and thus let you authenticate à la <code>sudo</code>. Except that it didn't work for me. The errors just kept coming. So what was I to do?</p>
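<p>For the record, one variant that might be worth trying - though I can't say whether it would have helped in my case - is to also preserve <code>DISPLAY</code>, which some setups require on top of <code>XAUTHORITY</code>:</p>
<pre><code># untested variant: carry DISPLAY over to root as well
permit setenv { DISPLAY XAUTHORITY LANG LC_ALL } :wheel
</code></pre>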
<p>Some deep searching led me to a post in some Linux subreddit where the poster had the same problem and the exact same non-working <code>doas</code> configuration. The suggestion that worked? Use <a href="https://wiki.archlinux.org/title/Xhost">xhost</a>, the command that configures access to Xorg. Essentially, <code>xhost</code> maintains an access control list, much like SELinux and friends, but concerning only the X server's session. Thus, you can temporarily "lift" all access controls on it by issuing, as a normal user:</p>
<pre><code>$ xhost +
</code></pre>
<p>And then anyone can run applications in the X session, including root via <code>doas gparted</code>. The problem is that this opens a gaping hole in your computer, especially if you have multiple users or have X listening on the network. You definitely <em>don't</em> want that. So you can limit the attack surface by being a little more specific about the request:</p>
<pre><code>$ xhost +IS:localuser:your_username
</code></pre>
<p>This grants permission only to your user, and in a local scope (i.e. not via the network). You can then bring the system back to its original state by revoking that same entry (note that a bare <code>xhost -</code> only re-enables access control; it does not remove names from the list):</p>
<pre><code>$ xhost -SI:localuser:your_username
</code></pre>
<p>This closes the hole, but you might forget to do it in the middle of the tasks you're doing. After all, who knows how long your admin work will take? This is why I wrote a tiny "wrapper" script that automatically closes the gap at the end. It's simple, and looks like this:</p>
<pre><code>#!/bin/bash
#
# gksudo: Shorthand to run graphical applications as root via doas under X11.
#
# It doesn't really use gksudo-the-program for this, instead wraps `doas` with
# commands to temporarily lift the X11 permission issues as it gets executed.
# I'm not sure if this is insecure or not, but seems like an OK compromise.
#

if [ $# -eq 0 ]
then
    echo "USAGE: gksudo PROGRAM [ARGS]"
    echo "Runs graphical PROGRAM as root, using doas"
    exit 1
fi

# Allow root to run graphical applications
xhost +SI:localuser:root &gt; /dev/null

# Run the application (quoted so arguments pass through intact)
doas "$@"

# Revoke root's entry again; a bare "xhost -" would leave it in the list
xhost -SI:localuser:root &gt; /dev/null
</code></pre>
<p>I named it <code>gksudo</code> to honor a long-gone but useful wrapper that was used to elevate privileges for graphical applications on Ubuntu. It's no longer around, but the name stuck with me, and this is exactly the same use case.</p>
<p>Save this script somewhere in your <code>$PATH</code>, make it executable, and run it as <code>gksudo command</code>. You'll be prompted for your password via <code>doas</code> as the backend, and the defaults are restored at the end to keep you covered.</p>
<h2>Security and other concerns</h2>
<p>Is this perfect? Not sure. If anything, Xorg by design isn't the most secure thing in the end, but I still use it (and so do many others). Lifting another layer of security off it shouldn't be too bad as long as the lift is restricted and temporary.</p>
<p>By all means do read the two Arch Wiki pages I linked above, especially the one about <code>xhost</code>, and decide for yourself whether this hack is usable or not - and please let me know if you think it's an issue I should fix. Fewer permissions? Closing the xhost permissions as soon as the application is launched?</p>
<p>One last thing I must add that might not be too obvious: I run Xorg as a normal user, <em>not</em> as root. Translation: instead of using a display manager, <a href="/~kzimmermann/articles/graphical_desktop_alpine_3.14.html">I just run <code>startx</code> from the console</a> straight into my window manager. I'm not sure how this affects the usability of this hack for those who use a display manager, but it works for my specific case. And besides, I've heard that running Xorg through a display manager is also not very secure.</p>
<hr />
<p>Do you ever run graphical applications as root via <code>doas</code> in your system? How do you do it? Let me know on <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
<hr />
<p>This post is number #46 of my <a href="https://100daystooffload.com/">#100DaysToOffload</a> project.</p>
            </div>
        </content>
    </entry>

    <entry>
        <title>Fixing the Desktop on Alpine Linux 3.14</title>
        <link href="https://tilde.town/~kzimmermann/articles/graphical_desktop_alpine_3.14.html" />
        <updated>2021-06-23T13:25:54.159810Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>Fixing the Desktop on Alpine Linux 3.14</h1>
<p>A little late to the party, perhaps, but Alpine Linux - my new favorite distro that <a href="https://tilde.town/~kzimmermann/articles/alpine_linux_desktop.html">I wrote about</a> a few weeks ago - has just <a href="https://wiki.alpinelinux.org/wiki/Release_Notes_for_Alpine_3.14.0">released a new version (3.14) on June 15</a>.</p>
<p>Congratulations to the whole team on this achievement! Truth is that after using Arch for such a long time, I had sort of forgotten what point releases were, and was delighted to see the announcement on the website as I went to fetch the latest ISO to try on my other machine. <em>Side note</em>: who knows if this trend will see me <a href="https://tilde.town/~kzimmermann/articles/first_starting_linux.html">distro-hopping</a> once again, towards Alpine?</p>
<p>The installation went smoothly and I was up and running barebones Alpine on the other machine in a matter of 10 minutes total, <a href="https://tilde.town/~kzimmermann/articles/alpine_linux_desktop.html">just as I wrote in my previous guide</a>, but from then on a few problems unseen before started arising. Chiefly, the graphical environment was <em>completely frozen</em>, and after starting Xorg I was unable to do anything except press the power button and reboot the machine. I wasn't even able to <code>Ctrl+Alt+F2</code> and salvage the session from another TTY, or try the funny-sounding but golden trick of <a href="https://en.wikipedia.org/wiki/Raising_Skinny_Elephants_Is_Utterly_Boring">Raising Skinny Elephants</a>. Both the mouse and keyboard of the laptop - peripheral or built-in - were dead, completely unresponsive.</p>
<p>Bummer, I thought. Reading a little more into the detailed <a href="https://wiki.alpinelinux.org/wiki/Release_Notes_for_Alpine_3.14.0">Release Notes in the Wiki</a>, though, I realized the apparent source of this strange situation: <a href="https://wiki.alpinelinux.org/wiki/Release_Notes_for_Alpine_3.14.0#Xorg">updates to Xorg</a> made by the maintainers.</p>
<blockquote>
<p>xf86-input-{mouse,keyboard} have been removed, as upstream maintainers have explicitly declared that they are for non-Linux platforms only. Users should have already switched to xf86-input-evdev or xf86-input-libinput.</p>
<p>/usr/libexec/Xorg.wrap and the suid bit on /usr/bin/Xorg have been removed. X now requires udev or mdev, and either elogind must be enabled or X users must be in the video and input groups.</p>
</blockquote>
<p>Ok, there you have it. Apparently, the quickstart script <code>setup-xorg-base</code>, which previously set everything up pretty much perfectly, no longer fulfills all these requirements automatically. After spending a few minutes bumping my head against the wall to get this sorted, here's what eventually fixed it for me and allowed me to proceed to a nice desktop session.</p>
<h2>The fix</h2>
<p>As hinted by the release notes, additional packages concerning the graphical environment must be used. Install <code>evdev</code> to make everything work again:</p>
<pre><code>apk add xf86-input-evdev
</code></pre>
<p>This should be the bare minimum in terms of packages, but by itself it still won't make the desktop work: your user must also be manually added to the system groups that handle input and video in Xorg. Hence:</p>
<pre><code>adduser youruser input
adduser youruser video
adduser youruser audio # for good measure
</code></pre>
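<p>To double-check that the group memberships took effect (a quick sketch; <code>youruser</code> is a placeholder, and you may need to log out and back in before new groups apply), you can compare against the output of <code>id</code>:</p>
<pre><code># in_group USER GROUP: succeeds if USER belongs to GROUP
in_group() {
    id -nG "$1" | tr ' ' '\n' | grep -qx "$2"
}

for g in input video audio; do
    if in_group youruser "$g"; then
        echo "youruser is in $g"
    fi
done
</code></pre>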
<p>Upon having this done, <code>startx</code> should now work pretty seamlessly and you should be able to launch and use your window manager without hassle. Mission complete!</p>
<p>Or almost. One more thing: now that you're back on your way to finish ricing your Alpine install, here's something I missed in the original essay that also makes for a better desktop session: install the package <code>desktop-file-utils</code> to be able to automagically open files in your file manager by double-clicking them!</p>
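<p>For completeness, the corresponding command, in the same style as above:</p>
<pre><code>apk add desktop-file-utils
</code></pre>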
<p>I discovered this accidentally while installing xarchiver to unzip some files directly in the file manager: as it pulled in that dependency, other installed graphical applications were also included in the right-click menu. Awesome, I guess. Hadn't noticed that before.</p>
<hr />
<p>Alpine was already one of my favorite distros, and now we've got a new version out. Awesome. What are your thoughts on using Alpine on the desktop? Do you think it's silly, a waste of resources on a distro meant for server-level stuff, or something we need to explore further to see how far we can take it? Let me know on <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
<hr />
<p>This post is number #20 of my <a href="https://100daystooffload.com/">#100DaysToOffload</a> project. Oh yes, baby: 1/5th done! Follow my progress through <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
            </div>
        </content>
    </entry>

    <entry>
        <title>"If you're not paying, you're the product" goes literal</title>
        <link href="https://tilde.town/~kzimmermann/articles/if_youre_not_paying_youre_the_product.html" />
        <updated>2021-03-19T10:03:56.203743Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>"If you're not paying, you're the product" goes literal</h1>
<p><em>"If you're not paying, you're the product!"</em></p>
<p>A truthful but often tired saying, well-known among privacy advocates and mindlessly dismissed by the consumer class with a "yeah, yeah, yeah, whatever..." </p>
<p>The saying doesn't work <em>literally</em>, though, they say. You're not <em>literally</em> the product being sold or traded for money. And while there might be some truth to that argument, it looks like India and a few other countries are taking the first steps towards making sure the quote safely jumps from metaphor to reality:</p>
<p><a href="https://restofworld.org/2021/loans-that-hijack-your-phone-are-coming-to-india/">Loans that hijack your phone are coming to India</a></p>
<p>Granted, this is not the first of its kind either - since last year, at least <a href="https://tilde.town/~kzimmermann/articles/google_app_ransomware.html">Google also deployed this sort of legal ransomware</a> to the market. But as the article details, we can see that their tactics are getting more and more personal every day.</p>
<p>Extra creepy credit for this quote by the guy behind it, which essentially explains why we still allow this sort of thing to continue regardless:</p>
<blockquote>
<p>“We did some research and figured out, You know what? <strong>People really still want to buy a phone</strong>,” Juriasingani told Rest of World, explaining the genesis of his company.</p>
</blockquote>
<p>So there you have it. In the eyes of the people: "Cash or credit?" "Privacy." Don't fall for this sort of trap when it comes to your corner of the world. <a href="https://tilde.town/~kzimmermann/articles/im_a_free_man.html">Fight the dependence on technology that doesn't respect you</a>, find alternatives and remain <strong>free</strong>.</p>
<hr />
<p>This post is number #7 of my <a href="https://100daystooffload.com/">#100DaysToOffload</a> project. Follow my progress through <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
            </div>
        </content>
    </entry>

    <entry>
        <title>Good news: I'm a (temporarily) Free Man!</title>
        <link href="https://tilde.town/~kzimmermann/articles/im_a_free_man.html" />
        <updated>2020-12-02T10:24:16.714134Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>Good news: I'm a (temporarily) Free Man!</h1>
<p>Yesterday, a simultaneously very good and very bad thing happened: <em>my android phone broke</em>.</p>
<p>What happened is that I locked it in airplane mode before sleeping, and when I woke up, the password prompt to unlock the device would not show the keyboard to type the password. I tried many times, but the soft keyboard would simply not come up, and even an external keyboard via USB OTG would not type my password. </p>
<p>The device was in airplane mode before locking, so there's no way I can connect a Bluetooth keyboard to test. And since I can't even reboot the phone or turn on Bluetooth without entering my password, there is literally <em>nothing</em> I can use it for right now - <strong>it's functionally bricked</strong>.</p>
<p>I spent the better part of this morning metaphorically banging my head on the wall over this, searching frantically for a solution, and am this close to giving up altogether on the otherwise perfectly functioning device. However, where some people would rage loudly and throw the phone out the window, I came to realize something: you know what? I'm not screwed: <em>I'm free!</em></p>
<p>I was freed (the hard way!) from pernicious tracking by a device that I cannot ultimately trust to protect my data (thanks, airplane mode!), and that <a href="google_app_ransomware.html">cannot be made fully free</a> even with so many global efforts trying. I am free from being tracked every minute that I send or receive data, even in my sleep. And I'm free from having to live a life where I must be connected 100% of the time in order to be socially accepted - my laptop can be put to sleep whenever I want.</p>
<p>"But how are you going to communicate with the other people?" you may ask. Why, the same way that we've been doing years before cell phones. At home, I can talk. I can use email. I have a landline at work that I can use to call people if needed. <a href="messaging.html">IRC and XMPP</a> provide a good way to do real time communication and have even browser-based clients. I can use someone else's phone when I'm out and need to call someone. The possibilities are endless.</p>
<p>Statistically, I'm most of the time either at work or home, and internet is still available at any of these places. I don't need something that wants to know everything about me sitting within 5m of me on a 24/7 basis to live my life.</p>
<p>There are evils in life that come for good, and I just realized this is one of those cases. Rejoice, my friends, for I'm now a truly free man once again - or at least until I get this issue with my phone repaired.</p>
            </div>
        </content>
    </entry>

    <entry>
        <title>Installing FreeBSD 14.0 on a USB drive</title>
        <link href="https://tilde.town/~kzimmermann/articles/installing_freebsd_usb_drive.html" />
        <updated>2023-12-18T15:31:26.222632Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>Installing FreeBSD 14.0 on a USB drive</h1>
<p><img alt="A FreeBSD USB stick" src="/~kzimmermann/images/freebsd_disk.jpg" /></p>
<p>Having re-discovered my <a href="/~kzimmermann/articles/freebsd_desktop_part_2.html">love for FreeBSD on the desktop</a> over the past month or so, I embarked on yet another adventure with it: creating a portable installation on a USB drive so I could carry it with me on the go. This would be a great addition to my everyday carry, and would also put the OS to the test in many situations I had not yet faced with it.</p>
<p>I have done portable installs of many a Linux distribution in the past, ranging from classic run-from-RAM distros designed for portability, like <a href="https://diode.zone/w/avwsnufzkkYBXtiKA85NPZ">Puppy Linux</a>, to the more recent "frugal installs" of <a href="/~kzimmermann/articles/alpine_linux_desktop.html">Alpine Linux</a> directly on the drive. None of them, however, ever involved a BSD, so this was going to be a new experience, even though I thought it shouldn't be too difficult. After all, I've already mastered <a href="https://diode.zone/w/8BbXkQsn5XPsr92BPfwszF">the art of FreeBSD installation</a>, and this is just another install medium, right?</p>
<p>While that was certainly true (and the installation was indeed smooth as butter), the problem happened <em>after</em> the install. Once the installer gave me the green light to wrap it all up and reboot, I happily did, yanking the installer drive out of the port and leaving the one with the frugal install plugged in. </p>
<p>Sure enough, I immediately faced an error the likes of which I had never heard of, and at a very early stage of boot no less (a scary, scary prospect!). The bootloader splash appeared alright, but shortly afterwards it would fail. The message in question was something like this:</p>
<pre><code>...
Trying to mount root from ufs: /dev/da1p2 [rw]
mountroot: waiting for device /dev/da1p2
Mounting from ufs: /dev/da1p2 failed with error 19.

Loader variables:
vfs.root.mountfrom=ufs:/dev/da1p2
vfs.root.mountfrom.options=rw
...
</code></pre>
<p>This was intriguing. I've never had a failure happen so early in the boot process on a fresh system, and yet, this was not a catastrophic kernel crash or something like that. One interesting thing was that by pressing <code>?</code>, I was able to see a few devices listed that looked familiar:</p>
<pre><code>&lt;snip&gt; da0p1 da0p2 da0p3 ... &lt;snip&gt;
</code></pre>
<p>Wait a second, I thought. Aren't these devices mapped to my USB drive's partitions? I decided to try mounting the root partition:</p>
<pre><code>&gt; ufs:/dev/da0p2 rw
</code></pre>
<p>And it seemed to have worked for a while, with the process continuing for a second or two, until a similar wall was hit, and I would be dropped again to a recovery shell.</p>
<p>Unable to proceed after this, I went ahead and searched for the problem online. The closest I could find for this message was <a href="https://forums.freebsd.org/threads/mounting-from-ufs-dev-ad0s1a-failed-with-error-19.57135/">this forum post</a>, but the people trying to troubleshoot the solution seemed to be firing in all directions without a clear strategy.</p>
<p>I was about to give up when I read <a href="https://forums.freebsd.org/threads/mounting-from-ufs-dev-ad0s1a-failed-with-error-19.57135/post-439424">this second to last reply</a>, which was solid gold:</p>
<blockquote>
<p>You had that problem for reason probably you removed the installation USB drive after installation and cause after reboot your <strong>main USB drive shifted down</strong> from /dev/da1... to /dev/da0...</p>
</blockquote>
<p>Bingo: the reason this happened was that my USB device count was one unit higher during the install, since I had USB drive 0 as the install medium and USB drive 1 as the "disk." <em>This</em> is what got written to <code>fstab</code>. Then, when I boot from the only USB drive around, the "disk" becomes USB 0, and <code>fstab</code> is completely lost.</p>
<p><strong>And so, the solution:</strong> boot from a live medium, or mount the USB drive you've installed on another computer, then edit that disk's <code>/etc/fstab</code> file to change:</p>
<pre><code>... /dev/ada1p2 ...
</code></pre>
<p>To</p>
<pre><code>... /dev/ada0p2 ...
</code></pre>
<p>And so on through any other references you might have in that file. Save it, close it, and reboot normally with the frugal media. FreeBSD should start again, normally this time, and voilà: you're ready to rock with the Daemon to go <code>\,,/</code></p>
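<p>As a more permanent safeguard - a sketch using standard FreeBSD tools that I haven't battle-tested myself - you can label the filesystem and point <code>fstab</code> at the label, so the entry survives any device renumbering entirely:</p>
<pre><code># from a live environment, with the target root partition unmounted:
tunefs -L rootfs /dev/da0p2

# then, in that disk's /etc/fstab, reference the label instead:
/dev/ufs/rootfs   /   ufs   rw   1   1
</code></pre>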
<h2>Conclusion</h2>
<p>After a certain shenanigan of counting USB disk indices, FreeBSD 14.0-RELEASE works very well as a to-go OS for me - on par with my already existing bug-out USB sticks containing a portable <a href="/~kzimmermann/articles/alpine_linux_desktop.html">Alpine Linux</a> install.</p>
<p>In hindsight, however, it was quite weird that this problem even happened in the first place. I've been installing Linux distributions onto removable media for a few years now, and I have never had such a problem. I wonder if this happens because the FreeBSD folks don't think of their OS as a candidate for portability (similar to a <a href="/~kzimmermann/articles/fixing_gajim_freebsd.html">shenanigan with Gajim</a> earlier this year)? At any rate, I'm glad it's easy to fix, and in fact you can do it straight from the installer: just choose LiveCD at the end of the installation and perform the aforementioned substitution in the <code>fstab</code> file of the chrooted environment.</p>
<p>Since learning this trick, I've prepared two removable drives with FreeBSD to test-drive: a plain flash drive and an external SSD. I'm already enamored of the idea of switching my daily driver machine from Debian to FreeBSD, and will perhaps do a little more testing with this drive before taking the plunge.</p>
<hr />
<p>Have you ever had FreeBSD installed in an external medium for "everyday carry" or rescue purposes? What do you think of it? Let me know on <a href="https://fosstodon.org/@kzimmermann">Mastodon</a>!</p>
<hr />
<p>This post is number #48 of my <a href="https://100daystooffload.com/">#100DaysToOffload</a> project.</p>
            </div>
        </content>
    </entry>

    <entry>
        <title>Things to check when buying a laptop</title>
        <link href="https://tilde.town/~kzimmermann/articles/laptop_buying_tips.html" />
        <updated>2021-03-22T07:17:19.557286Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>Things to check when buying a laptop</h1>
<p>As of 2021, I've been using laptops exclusively as my personal computers for about 20 years, and have never personally owned a desktop since getting my first laptop.</p>
<p>Ok, this is not exactly a "braggable" accomplishment in a world where laptop popularity remains high despite the decade-long, yet-unfulfilled promise of "tablets and smartphones <a href="https://readwrite.com/2011/09/12/tablets_smartphones_killing_pcs_2015/">killing the PC</a>." In these laptop years of mine, however, I've owned or worked with more than a dozen machines from different makers and generations, which has given me a good sense of which designs work better, which traits are more desirable, and what to look for in my next laptop.</p>
<p>In this post, I'll share the things I think you should be looking for when purchasing (or <a href="/~kzimmermann/articles/dumpster_diving_hacker.html">otherwise acquiring</a>) your next laptop, if that's your preferred way of computing.</p>
<h2>How I use my computer</h2>
<p>First, let's settle some things: I have my own style of working with computers, and it might be <em>radically</em> different than yours. Therefore, I'll explain what's mine first and you can account for the differences when reading the rest of the post. Alright? </p>
<p>Cool. Let's start with perhaps the most defining point:</p>
<h3>I use my Laptop as a Desktop</h3>
<p><em>Cue in the irony.</em></p>
<p>Yes, folks, the truth is that even after ditching heavy, immovable desktops more than 20 years ago, I still almost always use my laptop pretty much like a desktop computer. This is probably because I used desktops for most of my childhood, and my parents probably bought laptops because they made moving between places easier. Hence, when I use my laptop, I:</p>
<ul>
<li>Always use a laptop stand, external keyboard and mouse (unless I'm out of the house).</li>
<li>Also use an external monitor if available.</li>
<li>Remove the battery and use it with AC only.</li>
<li>Use an ethernet cable for network connections, if available.</li>
<li>Almost never use the touchpad or the laptop's keyboard.</li>
<li>Keep it on for very long periods of time (but suspend it when going to bed).</li>
</ul>
<p>This might be very different from folks who primarily enjoy the mobility that a laptop affords, and use it while lying in bed, catching the wifi of a coffee shop, in transit or travelling, etc. So when doing your assessment against these guidelines, please keep in mind my desktop-oriented biases.</p>
<h3>I use Linux on my Laptop</h3>
<p>Despite my <a href="https://tilde.town/~kzimmermann/articles/learning_freebsd_as_linux_user.html">recent adventures in trying out FreeBSD</a> and having used only Windows for work, on my personal machines GNU/Linux is still king. All of my computing needs are tailored specifically for the requirements of lightweight Linux distributions.</p>
<p>This is a good thing overall as it keeps resource requirements quite low, but depending on your operating system of choice, those might be much higher in comparison (even among other Linux distributions).</p>
<h3>I don't game or do multimedia work on my Laptop</h3>
<p>As a casual (read: old) gamer with no demand for video editing software, graphical needs are moot for me. In fact, I'd say I'm a GUI-averse user who prefers to <a href="https://tilde.town/~kzimmermann/articles/living_in_linux_terminal.html">do most of the work in the terminals</a>. If you do graphically-intense work or gaming, you may need to do a double take on my points.</p>
<p>Now that my style of computing is explained, let's go on to the tips.</p>
<h2>Build considerations</h2>
<p>There are some aspects of the actual physical build of the computer (finishing touches, physical arrangement of keyboard, lid, ports etc) that impact the usability of the machine far more than it might seem. I'll point these out here.</p>
<h3>Display and lid</h3>
<p>Standard matte LCD displays are preferable to "glassy," shiny ones (e.g. the glossy finish of Apple's Macbooks): their anti-glare surface prevents reflections from interfering with your work outdoors or in rooms with strong lights. Glass is prettier, yes, but functional is better, especially in dark interfaces like the terminal.</p>
<p>The range of the laptop lid itself is also important: can the lid open up to almost 180 degrees? Are the hinges too big, limiting how wide the lid can open? </p>
<figure>
    <img src="/~kzimmermann/images/laptop_angle.jpg" />
    <figcaption>This other laptop can open its lid to almost 180 degrees. You could angle it very steeply (as some stands do) and it would still be perfectly readable.</figcaption>
</figure>

<p>Most simple laptop stands raise the laptop display by simply angling the body up, which means that in order to work with it properly, the lid must also open up further to compensate the angle. I always value a greater lid range in a laptop, even if this might make the hinges a little less sturdy.</p>
<p>I prefer lids that can be closed and opened without a locking mechanism, and that can be opened with a single hand.</p>
<h3>Location of ports and interfaces</h3>
<p>As a direct consequence of using a stand, the location of the laptop's numerous ports and interfaces also matters. Most stands rest the laptop's weight on the front, thus blocking access to any ports on that side of the machine. If you have any 3.5mm audio jacks or SD card readers along that line, you can't use them while the laptop is mounted.</p>
<figure>
    <img src="/~kzimmermann/images/blocking_stand.jpg" />
    <figcaption>This type of stand blocks the front side of the laptop completely</figcaption>
</figure>

<p>Even if the design doesn't block them directly, having the jack angled means that connectors or cables will be bent to reach it, which will eventually break them.</p>
<figure>
    <img src="/~kzimmermann/images/laptop_front.jpg" />
    <figcaption>This laptop has the audio and mic jacks as well as the SD card reader facing the front. When you angle up the laptop, they either get blocked, or bend the connectors.</figcaption>
</figure>

<p>Placing things in the back like some Thinkpads do isn't a very good idea either. When you raise the laptop, the cables also curve due to gravity, adding strain to the connectors. In the long run, this can damage them.</p>
<figure>
    <img src="/~kzimmermann/images/laptop_back.jpg" />
    <figcaption>This T430 has most of the ports and utilities coming from the back. When you put it on the stand, the cables and connectors bend due to gravity, and are at risk of long-term damage</figcaption>
</figure>

<p>Thankfully, the great majority of laptops have their ports and jacks on the sides nowadays, so this is almost never a showstopper. Yet it's still worth checking before buying, so that you know what to expect.</p>
<p>The number of ports is significant, too. "Ultraportable" laptops with only one USB port are a no-go (you can't connect any peripherals while booting from USB). Ideally, there should be four USB ports. I prefer a (full-sized) HDMI port over VGA or DVI, though none of these are showstoppers.</p>
<h3>Power adapter and battery</h3>
<p>I value a laptop with a <em>removable battery</em> quite a lot, perhaps enough to acquire an older model over a newer one from this point alone.</p>
<p>Besides the security considerations of having a laptop that <a href="https://security.stackexchange.com/questions/12740/can-a-powered-down-cell-phone-be-turned-on-remotely">can never truly be turned off</a>, an embedded battery is bad for my use case where the computer sits on for long uninterrupted periods of time like a desktop. </p>
<p>The internet itself seems divided on the simple question of whether it's better to <a href="https://answers.yahoo.com/question/index?qid=20090223173034AArpkBL">always leave the battery in even if charged</a> or to <a href="https://www.makeuseof.com/tag/leave-laptop-plugged-time/">unplug and plug the laptop as the charge varies</a>. To avoid that uncomfortable question completely, it's better to be able to pull the battery out and just live on AC.</p>
<p>A power adapter with an L-shaped DC connector is much sturdier than a straight one. A removable AC cable means that you can swap your figure-8 cable for another country's standard without having to use plug adapters, unlike chargers where the transformer is embedded with the plug (like a smartphone charger). </p>
<p>Laptops whose power adapters are very thin are undesirable in my book. They break more easily and usually deliver less power, a sign that the computer in question has limited capacity. Fat, 20V DC adapters are a safe bet.</p>
<p>Sadly, the trend points to more unremovable batteries in the future in the name of a smaller size, but some business computer lines (hello, Thinkpad!) still seem to stick to the "old" ways. And speaking of the Thinkpad...</p>
<h3>Access to memory and storage</h3>
<p>A modular and easily upgradeable laptop is the best way to future-proof it against <a href="https://en.wikipedia.org/wiki/Planned_obsolescence">planned obsolescence</a>. Unlike desktops, most laptops have onboard components and other embedded items that cannot easily be replaced or upgraded. Those that can, though, earn big points in my book - <em>wink</em> Thinkpad. And to me, the two most important components are RAM and storage.</p>
<p>Avoid eMMC embedded storage - choose real drives only. eMMC might look cheap, fast and mobility-friendly, but with a detachable hard drive (or SSD), when it fails you can simply replace the drive. With embedded storage, you have to replace the computer, like an iPhone. Don't treat your laptop as a smartphone.</p>
<p>From my experience, you can easily future-proof a current laptop or even bring an old one back to life by simply giving it a RAM upgrade. Most "bloat" found in software has to do with RAM usage (and to a lesser degree, <a href="https://tilde.town/~kzimmermann/articles/digital_minimalism.html">files</a> as well), so I find that older CPUs can survive a surprisingly long time. </p>
<p>As a matter of fact, my fifteen-year-old 2006 Dell laptop remains perfectly usable today with Linux, though its 32-bit memory limits might mean its end is coming this decade. Keep in mind that I don't play games or do anything graphically intensive.</p>
<h2>Resources and hardware</h2>
<p>Thanks to Linux and the aforementioned upgradability, the amount of memory, CPU capacity or storage does not have to be very large to begin with. </p>
<p>In 2021, if I were to buy a laptop, my minimum specs would be 4GB RAM and a Core i5 CPU. This can go much lower - a Raspberry Pi or a netbook, for example - but only in cases where I'd <a href="https://tilde.town/~kzimmermann/articles/dumpster_diving_hacker.html">receive a computer through other non-monetary means</a>.</p>
<p>Ideally, the hardware components should be as compatible with free Linux kernel drivers as possible, so Broadcom wifi is cringe. There are compatibility lists like <a href="https://h-node.org/">H-Node</a> that can help if you take the time to search beforehand. I've found that laptops with Intel chipsets tend to perform very well in terms of Linux compatibility. Not to start a flame war, but my experience with AMD chipsets was exactly the opposite (though I hear this is very different in the desktop world).</p>
<h2>Other considerations</h2>
<p>Choose <em>business</em> laptops over home-use / personal laptops. Not only is the build quality much better, but parts and spares are usually available on the market if a component breaks, either from the manufacturer or on sites like eBay. Thinkpads are probably the kings of this category, but I've also had a lot of success with Dell (yeah, go figure). Plus, more and more home-use laptops nowadays come with silly shit like re-mapped F-keys that change the default behavior to things like backlight control rather than using the Fn key combination.</p>
<p>Macbooks are great hardware with <em>meh</em> software. Linux will run great on these, but usually they're a hard bargain even when getting used (on the other hand, "resale value dude..."). In my opinion they're not worth the hassle unless you can find a real gem of a low-priced one. </p>
<p>If you really want one, here's my suggestion: find a family member or close friend whose Macbook "broke" because the hard drive failed or is inexplicably "slow." Depending on his/her computer literacy and closeness to you, you might even get it for <em>free</em>.</p>
<h2>Conclusion</h2>
<p>These are my criteria for selecting the best possible laptop for my specific use case. I haven't found the perfect laptop yet (perhaps <em>because</em> of these strict criteria!), but this list sheds some good light for when I choose my next one.</p>
<p>What are the criteria that you use when choosing a laptop? How do you use your laptop - like me or in a more mobile way? Let me know your comments in my <a href="https://fosstodon.org/@kzimmermann">Mastodon</a> account!</p>
<hr />
<p>This post is number #8 of my <a href="https://100daystooffload.com/">#100DaysToOffload</a> project. Follow my progress through <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
            </div>
        </content>
    </entry>

    <entry>
        <title>A Linux veteran tries out FreeBSD for the first time</title>
        <link href="https://tilde.town/~kzimmermann/articles/learning_freebsd_as_linux_user.html" />
        <updated>2021-02-26T11:44:48.537217Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>A Linux veteran tries out FreeBSD for the first time</h1>
<p>You might have recently seen a string of toots from <a href="https://fosstodon.org/@kzimmermann">my Mastodon account</a> outlining the beginning of my adventures with <a href="https://freebsd.org">FreeBSD</a>.</p>
<p>I had fiddled with the idea of trying something other than Linux for a while, starting at least back in 2012-ish, when I first burned a live medium of <a href="https://www.ghostbsd.org">GhostBSD</a> and tried it out for a few hours on my laptop, before deciding I liked Linux better. With the growing experience I got from using Linux every day, however, my interest was renewed and I finally decided to try it again in 2021.</p>
<p>I've been using GNU/Linux as my only OS for more than ten years now, and I have changed distributions a couple of times during this adventure, starting out with Ubuntu, trying out many other Debian-based mini-distros, switching seriously to Debian stable and, more recently, <a href="https://tilde.town/~kzimmermann/articles/aur_made_easy.html">Artix Linux</a>. From this standpoint, I thought I'd be prepared to try my hand at FreeBSD.</p>
<p>In the realm of learning an operating system, though, the number of years using something turns out not to be so much of a credential. For example, most people have been using Windows for 20 or even 30 years, and still don't know how the innards of Windows work, or how to salvage it if something bad breaks. That's because all they did on it was run Microsoft Office, browse the internet or play games.</p>
<p>I'm still not sure if my knowledge qualifies as "advanced" in terms of Linux, but after doing all my computer-related activities <a href="https://tilde.town/~kzimmermann/articles/living_in_linux_terminal.html">inside of the text-only Linux console for a week</a>, I felt a little more prepared to dive headfirst into a completely new OS. This post is my summary of what I have learned so far, and my thoughts on how FreeBSD feels coming from a Linux background.</p>
<h2>My goals in this exercise</h2>
<p>To clarify any questions that may be hanging in your head, <em>I'm not planning to replace Linux with FreeBSD anytime soon</em>. That's because, to me, there is neither competition nor mutual exclusion between GNU/Linux and FreeBSD - each can fulfill its own specific duties with its own set of strengths and weaknesses. </p>
<p>I primarily wanted to expand my horizons of knowledge of how Unix-like OSes work, and to "get my hands dirty" building a tailored system from the bottom up in my own way. I could have chosen Gentoo or other minimalist Linux distributions (<a href="http://www.linuxfromscratch.org/">LFS</a>, perhaps, someday?) for the same exercise, but BSD seemed the best fit for the challenge - not too big, not too small.</p>
<p>I did not have many expectations about this venture to begin with, since I truly had no idea what to expect from a barebones FreeBSD system. In the long run, these low expectations benefitted me, cushioning some pretty bad falls along the way with an operating system you sometimes read about as being "better" or "much saner" than Linux. This, combined with having no intention of replacing my current OS, gave me a free mindset to explore ahead.</p>
<p>Also, I must add that this project would not have gone so far had two things not happened recently. First, I now have a spare machine that was <a href="https://tilde.town/~kzimmermann/articles/dumpster_diving_hacker.html">procured from the trash</a> earlier on. This gave me the freedom to really tinker around, try, break things and restart afresh - which would not have been possible if that machine also happened to <a href="https://tilde.town/~kzimmermann/articles/project_128.html">house all my data</a>. And second, I just recently recovered a real gem, a Raspberry Pi B (released in 2012), again from the depths of the trash. I decided to try installing FreeBSD there as well, since they advertise a specific image for it, which I supposed was lightweight.</p>
<p><img alt="My trash-recovered Raspberry Pi B running FreeBSD 12.2" src="https://fosstodon.b-cdn.net/media_attachments/files/105/769/092/637/443/718/original/dfb46ac4735d5bb3.png" /></p>
<p>Couldn't I have done it in a Virtual Machine instead? Probably yes, but getting VirtualBox to work under Artix has proved to be sort of a challenge, and apparently there are some known issues when trying to run a FreeBSD guest inside VirtualBox <a href="https://invidious.snopyta.org/watch?v=ZzjJwgq8mjo">according to this video</a>. </p>
<h2>Installing FreeBSD: who's afraid of a text installer?</h2>
<p>Ok, I've made up my mind; let's install the beast(ie)! Where do I start?</p>
<p>The FreeBSD project makes the <a href="https://download.freebsd.org/ftp/releases/">release snapshots</a> available to the public on its own webpage, but I couldn't find any torrents there. There are quite a lot of options to choose from, since they support a wide range of processor architectures, just like when you choose a Debian image to install. No problem for me, as I already knew what I was looking for, but a novice might find it confusing.</p>
<p>There was one point that did confuse me, though: there are different <code>.iso</code> and <code>.img</code> files intended for different installation media. I had grown so used to Linux distributions shipping hybrid ISOs that work on both USB sticks and optical media that I just grabbed the first ISO for my machine, burned it to my stick, and promptly ended up with an unbootable environment. Lesson quickly learned: <code>.img</code> is for USB sticks in FreeBSD. That's no biggie.</p>
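<p>For reference, writing the <code>.img</code> to a stick is the usual <code>dd</code> affair. The image filename and target device below are illustrative assumptions - double-check the device node (<code>da0</code> on FreeBSD, <code>sdX</code> on Linux) before running, since <code>dd</code> will happily overwrite the wrong disk without asking:</p>

```shell
# DESTRUCTIVE when pointed at a real disk -- run as root; names are assumptions:
#   dd if=FreeBSD-12.2-RELEASE-amd64-memstick.img of=/dev/da0 bs=1M conv=sync
# Harmless demonstration of the same invocation against a 1 MiB scratch file:
dd if=/dev/zero of=/tmp/memstick-demo.img bs=1M count=1 conv=sync
```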
<p>A bigger problem, however, was that as soon as the installer booted, I was faced with a garbled screen that could not be read in any way:</p>
<p><img alt="My problem using the FreeBSD installer" src="https://i.imgur.com/WUF98pb.png" /></p>
<p>This turned out to be a problem of mismatching resolution between the framebuffer and the screen itself, and after a chat in the FreeBSD IRC I had the solution. To fix it I had to go to the boot options (pressing Esc) and run <code>gop set 0</code> to make use of the largest screen resolution available. The installer then proceeded smoothly.</p>
<p>The FreeBSD installer is completely text-based, using TUI "dialog boxes" to guide you throughout the installation in a manner similar to how Debian does it, and it was virtually no different than using a graphical installer. It was so complete that it even allowed me to set up full disk encryption right on the spot, something that is not so common along the lesser-known Linux distributions.</p>
<p>In parallel, partly due to my frustration with the aforementioned screen problem, I installed FreeBSD on my recently-found Raspberry Pi B as well. This was a much quicker and easier route (the website kindly names exactly which file you need to download and burn), although you end up with a preconfigured system rather than a fully "from zero" one. But that was enough to get me started with FreeBSD while I couldn't solve the problem on my laptop.</p>
<h2>Using FreeBSD: same same, but different</h2>
<p>Upon installation, FreeBSD feels familiar to my Linux console-only environment, but there are some differences that still feel a little weird to me.</p>
<p><img alt="A meme that goes ~$ _ -&gt; :D | ~% _ -&gt; :O" src="https://fosstodon.b-cdn.net/media_attachments/files/105/786/442/877/673/977/original/f471389e89f3fd7b.jpg" /></p>
<h3>Software management</h3>
<p>The base system is pretty raw and basic, similar to a fresh installation of Arch Linux, so my first task was to install the stuff I needed to get the ball rolling and do some work on my FreeBSD machine. The lack of <code>apt</code> or <code>pacman</code> made me a little lost at first, but I quickly realized that FreeBSD provides a relatively recent binary package manager called <code>pkg</code>, which brought me back to familiar grounds. <code>pkg search &lt;something&gt;</code> and <code>pkg install program</code> were all I needed to make FreeBSD feel more at home to me.</p>
<p>I tried my hand at the ports system that FreeBSD uses to install software from the source code, but only because I wanted to learn how it worked. Turns out that the process is surprisingly similar to how you <a href="https://tilde.town/~kzimmermann/articles/aur_made_easy.html">install software via the Arch User Repository</a>. </p>
<p>You have a "recipe" (<code>Makefile</code> in FreeBSD, <code>PKGBUILD</code> on Arch), some available files, some metadata and (FreeBSD only) a patching script that will take the generic source code and patch it to make it fully compatible with FreeBSD before the actual compilation begins. To install something via ports, then, it's as simple as:</p>
<pre><code>cd /usr/ports/your/applications/directory; make install &amp;&amp; make clean
</code></pre>
<p>An advantage of ports is that whereas in the AUR you have to browse the online repository and clone the repository containing the <code>PKGBUILD</code> manually, FreeBSD <em>already</em> contains the <em>whole</em> ports collection on disk. This makes it easier to refresh the entire tree with a single command (<code>portsnap fetch update</code>) instead of manually checking or depending on AUR helpers, but it takes considerably more time to conclude (the <code>/usr/ports</code> directory is about 1GB here).</p>
<p>A drawback is that searching for software within the <code>/usr/ports</code> tree can be hard or confusing. Running <code>find /usr/ports -name "*package*"</code> can take quite a long time, or return no results at all, whereas on Arch this is a quick web search.</p>
<h3>Configuration and management</h3>
<p>A quite memorable mishap is that my laptop ran into an issue with the infamous PC speaker beep (why do computers still even have them, anyway?). Every time I pressed Tab for completion, a striking beep would cut the silence, and I almost gave up on the whole project due to this. Fortunately, good sense triumphed and it turns out that you can solve this by adding:</p>
<pre><code>allscreens_kbdflags="-b quiet.off"
</code></pre>
<p>To FreeBSD's <code>/etc/rc.conf</code> file.</p>
<p>Speaking of which, this <code>rc.conf</code> file is quite useful, and pretty much one of the only things you need to set up a good working environment in the console (called <code>vt</code> in FreeBSD). This is where you set up your fonts and any other console display settings, applying to all vts if you need. For example, this is how I set my font of choice:</p>
<pre><code>allscreens_flags="-f /usr/share/vt/fonts/ter-u20n.fnt"
</code></pre>
<p>This is also an example of one of FreeBSD's "selling points" in comparison to Linux: the configuration seems much more standardized and organized than in Linux so far. That <code>rc.conf</code> is where I configure everything that I need to do in the console. Programs and their associated files are neatly organized according to their prefixes and priorities (<code>/bin/</code> and <code>/etc/</code> are concerned only with boot-critical programs), whereas in Linux you may have to do some hunting to find out where some config file is.</p>
<p>It sounded strange to me at first that to configure something like bash I have to browse to <code>/usr/local/etc</code>, but it makes sense: anything that isn't part of the base system lives under <code>/usr/local</code>. I'm not saying this immediately makes FreeBSD better, but it is more logical.</p>
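<p>To give an idea of how much lives in that single file, here is a sketch of a small <code>/etc/rc.conf</code> combining the console tweaks above with a couple of common service knobs. The hostname, network interface name and choice of enabled services are illustrative assumptions on my part, not a recommendation:</p>

```shell
# /etc/rc.conf -- one place for console, network and service configuration
hostname="beastie"                                      # machine's hostname (assumption)
ifconfig_em0="DHCP"                                     # DHCP on the em0 NIC (name varies)
sshd_enable="YES"                                       # start sshd at boot
allscreens_kbdflags="-b quiet.off"                      # silence the PC speaker beep
allscreens_flags="-f /usr/share/vt/fonts/ter-u20n.fnt"  # console font for every vt
```

<p>Since <code>rc.conf</code> is just a list of <code>sh</code> variable assignments, the startup scripts simply read these values at boot.</p>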
<h3>General usage</h3>
<p>Down in the details, the same programs I use on Linux sometimes behave differently on FreeBSD. I have no idea why. Most notably, <code>tmux</code>'s clipboard behaves differently when it comes to highlighting text to copy: highlighting is triggered by pressing just Space, whereas on Linux it requires Ctrl+Space. Copying on FreeBSD is the Enter key; on Linux, it's Alt+W. Go figure.</p>
<p>Also, almost everything in FreeBSD so far seems to be oriented towards servers - all the way from hard drive partitioning and RAID arrangement, to having sshd enabled by default, to setting up the famous jails (which I have not tried yet). </p>
<p>This is not necessarily bad, but is a little different from my experiences in Linux, where things from a desktop's point of view are given a little more priority (suspending and hibernating, for example). Perhaps this is the same feeling that Windows or Mac users get when switching to Linux, though. I also have not tried a graphical environment in FreeBSD yet, so I might be a little biased.</p>
<p>I still have not gotten used to the C shell that is the default in FreeBSD, but thankfully bash was available as well, so switching over was not an issue. Also, this might be an opportunity to try out a new shell as well (even zsh, maybe?) </p>
<h2>Beautiful documentation that inspires</h2>
<p>Finally, I cannot end this post without talking about how wonderful the FreeBSD documentation is. I'd say it's so wonderful that it actually <em>inspires</em> you to keep using it and learning more about the system.</p>
<p>First and foremost comes the <a href="https://docs.freebsd.org/en/books/handbook">FreeBSD Handbook</a>, an extensive but very detailed document about every aspect of installing, configuring and using the OS. To me, the best thing about it is that it doesn't read like a traditional manual; instead, it's part tutorial and part explanation, inspiring you to apply those concepts yourself. When reading it, you learn as you do, which is exactly what any learning process should be.</p>
<p>The manual pages are also well structured and organized, with <code>man</code> sections clearly defined and concise explanations, sometimes even including example usage. Most if not all commands referenced elsewhere in the FreeBSD documentation refer to the manual all the way down to the section, like <code>vt(4)</code> to indicate that one should look up <code>man 4 vt</code> for more information.</p>
<p>The closest thing to this level of documentation I had found before was the Arch Wiki, which is, granted, much more didactic and much more complete (after all, it <em>is</em> a wiki). However, for a plain offline manual, the FreeBSD Handbook does an amazing job, reducing your need to resort to IRC.</p>
<h2>What now?</h2>
<p>I still have way too much homework to do regarding FreeBSD, chiefly going graphical with it, and perhaps it will be alright. At any rate, I don't think I'm ready quite yet to leave my familiar Linux environment right now, but keeping both sounds like a great idea. Perhaps I'll leave my Raspberry Pi running FreeBSD to test its sturdiness.</p>
<p>And if anything, this study has taught me that setting up and using modern operating systems is not <em>hard</em>, but rather <em>time-consuming</em>. There's nothing harder about following prompts to install and configure software on the command line than doing it graphically, and "advanced" OSes are nothing more than a matter of getting down and doing some preparation before use. Keep learning, keep expanding and enjoy the ride!</p>
<p>(And yes, this post was written entirely with FreeBSD!)</p>
<hr />
<p>This post is number #6 of my <a href="https://100daystooffload.com/">#100DaysToOffload</a> project. Follow my progress through <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
            </div>
        </content>
    </entry>

    <entry>
        <title>The lack of Free Software OSes on mobile isn't a software issue...</title>
        <link href="https://tilde.town/~kzimmermann/articles/linux_mobile_not_software_problem.html" />
        <updated>2021-04-07T14:19:33.969168Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>The lack of Free Software OSes on mobile isn't a software issue...</h1>
<p>This week Distrowatch announced the release of a new version of <a href="https://en.jingos.com/">JingOS</a>, a Chinese Ubuntu-based Linux distro aimed at tablets and mobile devices that seeks to integrate touch, pen and keyboard/mouse inputs while maintaining full compatibility with GNU/Linux and Android alike. Truly an ambitious goal, but judging by their showcase, at least, it feels like they aren't far from it.</p>
<figure>
    <img src="https://en-cdn.jingos.com/wp-content/uploads/2021/04/desktoplock.gif" alt="JingOS showcase" />
    <figcaption>
        JingOS showcasing some of its pointing device integration.
    </figcaption>
</figure>

<figure>
    <img src="https://en-cdn.jingos.com/wp-content/uploads/2021/04/wps.gif" alt="JingOS showcase" />
    <figcaption>
        JingOS supposedly bridges the gap between tablet and computer.
    </figcaption>
</figure>

<p>JingOS's story shows us that GNU/Linux on mobile doesn't have to remain the dream that so many had back in 2010, when Android was considered "Linux-ish" and Cyanogenmod was still cool - at least on the technical side: the software is advanced and well developed. But if that's the case, how come we don't <em>see</em> Linux on more mobile devices? </p>
<p>The answer is: because lack of Linux on mobile is <strong>not</strong> a software problem - rather, a hardware one. Actually, not even a hardware problem - it's a vendor one. Ok, scratch that, it's not a vendor problem either - it's a <strong>policy</strong> problem. Companies simply don't want to make their devices more open and compatible with free operating systems and the people who tinker with them - starting with <a href="https://tilde.town/~kzimmermann/articles/google_app_ransomware.html">Google</a>.</p>
<p>AOSP? Please... It's as "open source" as the <a href="https://www.zdnet.com/article/google-should-really-open-source-chromium/">Chromium Browser</a>.</p>
<p>The consequence is that great software projects like this have to resort to building their own hardware in order to be shipped, instead of being installable on any device like it's done in the PC world. JingOS sells <a href="https://en.jingos.com/jingpad-a1/">its own tablet</a> with the OS preinstalled, just like PINE64 does with tablets, phones and even laptops, as does pretty much every other project that develops for mobile. But don't be fooled: this is <em>not</em> a limitation of the hardware or software being developed. </p>
<p>It's an <strong>artificial limitation</strong> postulated by those who'd rather not lose the exclusivity on their products.</p>
<p>I really would like to see more "Linux on mobile" projects, but if I could choose, I'd rather have them take the Puppy Linux approach and become installable on older phones, instead of requiring new ones tailored only for them. PostmarketOS is a step in the right direction, but unfortunately the <a href="https://wiki.postmarketos.org/wiki/Devices">list of confirmed and supported devices</a> isn't very large (again due to that artificial limitation). </p>
<p>Until this issue gets fixed (by forcing vendors to open up, perhaps), I guess we'll still be piling up premature garbage, technological victims of planned obsolescence, in a manner similar to this <a href="http://peppertop.com/elvie/comic/elvie-041/">Elvie comic</a>.</p>
<p><img alt="Elvie #41" src="http://peppertop.com/elvie/wp-content/uploads/2018/07/Elvie_041_en-GB.jpg" /></p>
<hr />
<p>What's your feeling about Linux on mobile devices? Do you feel that with the intense "lobbying" of Google and manufacturers the situation is likely to get any better in the future? Let me know on Mastodon!</p>
<hr />
<p>This post is number #10 of my <a href="https://100daystooffload.com/">#100DaysToOffload</a> project. Follow my progress through <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
            </div>
        </content>
    </entry>

    <entry>
        <title>Living in the Linux terminal - is it possible in 2021?</title>
        <link href="https://tilde.town/~kzimmermann/articles/living_in_linux_terminal.html" />
        <updated>2021-02-15T07:47:02.773483Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>Living in the Linux terminal - is it possible in 2021?</h1>
<p>Perhaps one of the starkest differences you feel when switching from Windows to Linux for the first time is that sooner or later, you invariably have to do some form of work in the terminal. That is a scary prospect for the sheltered beginner indeed, since typing esoteric commands with mysterious <code>--flags</code> and values into a black screen with nothing but a prompt is the absolute opposite of what someone born and raised in a graphical environment is used to doing. </p>
<p>It's not surprising that Linux earned its fame for "not being as user-friendly," and that beginner distributions are labelled as such because they abstract away as much of the terminal work as possible. However, like any other tool, once you learn how to use it, you realize how powerful and convenient the terminal and the shell are, and eventually it becomes so integrated into your workflow that you can't live without it anymore. When this happens, you may need to shave off some of that <a href="https://www.drbill.tv/images/dilbert.png">Unix beard</a> that may have grown on you.</p>
<p>Jokes aside, the terminal is an integral part of Linux, and using Linux without ever touching the terminal is simply an incomplete experience. And the best part is that, despite its 80s appearance, the terminal is by no means a piece of old technology. It evolved not merely alongside the GNU/Linux system but as an integral part of it, and to this day remains the only practical way to do systems administration. The modern-day Linux command-line environment is a robust and practical tool, especially if you integrate it into a modern graphical environment through a terminal emulator.</p>
<p>However, what if you were to take a step further into the text-only environment and ditched the graphical desktop completely? Yes, I mean pressing Ctrl+Alt+F1 and going back to the <a href="https://en.wikipedia.org/wiki/Linux_console">Linux Console</a> (or TTY) completely - no use for a mouse. Would it still be possible to "survive" for long enough using only the console in 2021?</p>
<p>To try this proposal out, I set myself a challenge: <strong>use nothing more than the console for one week</strong>. Every application that I used had to be usable from the console alone, without the X server. I was still allowed to use my phone throughout the day, but anything involving the computer in my time off work had to be done in the terminal.</p>
<p>Doing this has forced me to learn more about how to solve problems without resorting to full-blown graphical programs, and how eclectic the command-line can be. It also, unsurprisingly, didn't prove to be a more efficient way to work - after all, scrolling, selecting, copying and pasting things with a mouse is pretty fast and convenient. Still, I had other interesting insights, which I'm sharing in this post. Read on!</p>
<h2>The Linux Console is not your Grandpa's Command Line Interface</h2>
<p>For starters, if you think that the Console is an old-fashioned piece of software just because there isn't any graphical interface or fancy menus to click, you should definitely think again. As stated before, the command-line remains the preferred way to perform administration work in Linux, and for many of the more "complex" tasks, it may well be the fastest - or only - way.</p>
<p>The console has also greatly benefitted from the fact that so many other applications were developed specifically for the text-only environment, presenting data in ways that spare you from opening a graphical program to interact with it. </p>
<p>Perhaps unlike some decades ago, today there are browsers, file managers, IM programs, <a href="https://github.com/andmarti1424/sc-im">spreadsheet editors</a>, music players, and many, many other text-based programs that work just as well as their graphical counterparts. Not to mention that you can usually "mimic" individual functionalities of programs by <a href="https://tilde.town/~kzimmermann/articles/dontlikeitcreateit.html">chaining around a few commands in a script</a>.</p>
<p>Furthermore, the console nowadays <em>is</em> by itself a modern environment: it fully supports localization, setting and using different keyboard layouts, you can set up the screen brightness, lock and suspend the session, and with a bit of training it feels just like using a graphical desktop environment. </p>
<p>The drawback is that, upon first use, the Console looks and feels rather harsh and unfriendly. Here's a few tips to make it more usable to work on it.</p>
<h2>Give the console a fresh look</h2>
<p>The standard console that greets you upon boot looks rather dull in comparison even with the terminal emulators you can use in a graphical environment, which is not very encouraging. Thankfully, like everything in Linux, you can also customize how it looks.</p>
<p>My first recommendation is to set a new display font. The default font is sort of ugly and usually very small for screens with modern resolutions, almost unreadable depending on your screen size. There are better selections of fonts available in your system already, usually under <code>/usr/share/consolefonts/</code> or <code>/usr/share/kbd/consolefonts/</code> depending on the distro, with the <code>.psf.gz</code> or <code>.psfu.gz</code> extensions (<code>psfu</code> for unicode fonts). You can set a new console font with the following command:</p>
<pre><code>sudo setfont /path/to/font/file.psf.gz
</code></pre>
<p><code>sudo</code> is necessary because unlike graphical terminal emulators, the console is owned by the kernel. To reset to the default font, run <code>setfont</code> without arguments:</p>
<pre><code>sudo setfont # set the default font back
</code></pre>
<p>You can usually read off the size in pixels of these fonts directly from the file name: they are available in a HxW format. I set a font size of at least 20px height, and 22px is the optimal size for me. Very large fonts look ugly to me, and also limit how much stuff can be displayed at a time in the screen. Try a few sizes and find out what works best for you.</p>
<p>There are also many different character sets. The <code>Uni</code> or <code>Lat</code> charsets fit my needs, but you might have to try out a few to find the best. <a href="/~kzimmermann/shared/Uni3-Terminus22x11.psf.gz">This is the font I use</a> in case you're wondering, which I in turn got from Debian.</p>
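<p>Since the dimensions are embedded right in the filename, here's a tiny sketch that pulls them out, using the Terminus font linked above as the example (the extraction pattern is an assumption that fits the usual naming scheme):</p>
<pre><code># Extract the HxW pixel size from a console font's filename.
font="Uni3-Terminus22x11.psf.gz"
echo "$font" | grep -oE '[0-9]+x[0-9]+'
# prints: 22x11 (22px tall, 11px wide)
</code></pre>
<p>Handy when comparing sizes before actually setting any of them.</p>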
<p>The next peeve with a default console is that the <strong>colors</strong> may not be as attractive as in some graphical terminals. Once again this is customizable, though you need an extra piece of software to do so.</p>
<p><a href="https://github.com/EvanPurkhiser/linux-vt-setcolors">This tiny C program</a> is able to change the colors that are used in the console according to a color scheme file, very similar to how the <code>.Xresources</code> file works. Once you build and install it (just run <code>make</code>), you can use it to change the colors like this:</p>
<pre><code>sudo setcolors colorscheme_file
</code></pre>
<p>Where <code>colorscheme_file</code> is a file in the right format for <code>setcolors</code>. The Github repo contains a few examples and premade colorschemes that you can use right away, or you can make your own through tools like <a href="https://terminal.sexy">terminal.sexy</a>.</p>
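<p>As an aside, if you'd rather not build anything, the kernel console itself understands a palette-setting escape sequence (documented in <code>console_codes(4)</code>): ESC ] P followed by one hex digit for the color index and six hex digits for the RGB value. A minimal sketch - the color value here is just an arbitrary dark blue:</p>
<pre><code># Redefine palette entry 0 (the default background) on the Linux
# console. Most graphical terminal emulators ignore this sequence,
# so it's only meaningful on a real virtual console.
printf '\033]P0002b36'
</code></pre>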
<h2>Multitasking just like in a GUI</h2>
<p>The next step to making the console fully usable on par with graphical environments is to add to it multitasking capabilities. These are best done via software known as <a href="https://en.wikipedia.org/wiki/Terminal_multiplexer">terminal multiplexers</a>.</p>
<p>A multiplexer splits one large terminal screen into multiple subpanes that are independent of each other, resulting in an environment that feels like a tiling window manager. When you don't have a graphical environment available to spawn more terminals or tabs, the multiplexer is the only way to achieve multitasking.</p>
<p>Two of the most popular text-only multiplexers are <code>tmux</code> and GNU <code>screen</code>. I prefer tmux because it behaves truly like a client-server application, with the ability to detach from sessions and reattach later without terminating them, even from remote machines. Although it might take some time to learn the keys involved, tmux is fast and efficient, and allows you to set up a fully working environment very quickly.</p>
<p><code>tmux</code> also adds a very important feature to working across multiple programs: a clipboard. This not only allows you to select, copy and paste text from the terminal panes manually just like in a GUI, but also allows you to copy the output of commands directly to the paste buffer, much like it's done with xsel or xclip, like this:</p>
<pre><code>command | tmux loadb - # hyphen required.
</code></pre>
<p>And then pasting it with <code>&lt;prefix&gt;+]</code>. It's a useful way to copy passwords from encrypted files without risking printing them to the terminal output.</p>
<p>Another rather new project, the <a href="https://sw.kovidgoyal.net/kitty/">Kitty terminal</a>, adds other modern features to this space, like highlighting URLs for easy opening in a browser - though note that Kitty is a GPU-based terminal emulator rather than a console multiplexer, so it won't run on the bare console. I haven't tried it myself, but I hear lots of good things about it. There's even a multiplexer that includes visual effects and a built-in screensaver, called <a href="http://caca.zoy.org/wiki/neercs">neercs</a> (<code>screen</code> backwards). </p>
<p>Regardless of which one you choose, a multiplexer will help make your console work sessions persist consistently, even in the event of suspending the machine. To simulate a "lock on suspend" feature in the command-line only, for example, you could run:</p>
<pre><code>tmux detach # from within the tmux session, return to bare terminal
</code></pre>
<p>And then:</p>
<pre><code>loginctl suspend; logout # order a suspend and immediately logout
</code></pre>
<p>You'll find the login prompt upon waking, and to resume work, just reconnect to your session via:</p>
<pre><code>tmux attach
</code></pre>
<h2>Weapons of choice</h2>
<p>Here's a short list of the programs I use in my console-only sessions, that sometimes even fare better than their GUI counterparts:</p>
<ul>
<li><strong>Web Browser:</strong> <code>elinks</code> - easily the best and most fully-featured text-based web browser.</li>
<li><strong>Instant Messaging:</strong> <a href="https://poez.io">poezio</a> - terminal-based XMPP client that supports OMEMO encryption and even anonymous (accountless) chats.</li>
<li><strong>Email:</strong> <code>mutt</code> - surprisingly powerful mail client, even if a little hard to configure.</li>
<li><strong>IRC:</strong> <code>irssi</code> - synonymous with IRC, in my opinion.</li>
<li><strong>File Management:</strong> <code>mc</code> (Midnight Commander) - fully-featured file manager, with built-in support for sftp and many other protocols.</li>
<li><strong>Music:</strong>  <a href="https://en.wikipedia.org/wiki/Music_on_Console">Music on Console (moc)</a> - full-featured music player on the terminal. When that's not available (driver, etc), just use <code>mpv --shuffle MusicDirectory/</code></li>
<li><strong>Word processor:</strong> I edit everything "rich" as a markdown file with <code>vim</code>, then convert the output to HTML (like this very article!)</li>
<li><strong>Connectivity management:</strong> <code>connman</code> is a modern connection manager for Linux (like NetworkManager) with a command-line interface (<code>connmanctl</code>) that you can use to manage wifi networks or VPNs without the need for a GUI.</li>
</ul>
<h2>Media without a GUI?</h2>
<p>Finally, to address a growing and pressing question about living only on the console: <em>is it possible, after all, to view images or video without the X server?</em></p>
<p>The answer, surprisingly, is <strong>yes!</strong> It's possible to view images and video without starting X by using the computer's <a href="https://en.wikipedia.org/wiki/Framebuffer">framebuffer</a> device. It's a very primitive way to render images and graphical media, but works very well, and it was how early DOS games, for example, were able to run in graphical mode.</p>
<p>For images, the <code>fim</code> (Framebuffer IMproved) program can display images straight from the console via <code>fim image.jpg</code>. For video, the swiss-army knife of <code>mpv</code> can also handle it surprisingly well even on the console by specifying "drm" as the video output mode:</p>
<pre><code>mpv --vo=drm video.mp4
</code></pre>
<p>You can even watch YouTube <em>straight from the console</em> if you also happen to have <code>youtube-dl</code> installed, through the following command - which is nothing short of mind-blowing when you think of it:</p>
<pre><code>mpv --vo=drm https://youtube.com/watch?v=YOUR_VID_ID
</code></pre>
<p>Despite how it sounds, "drm" here refers to the kernel's Direct Rendering Manager subsystem - not <a href="https://tilde.town/~kzimmermann/articles/drm_or_piracy.html">Digital Restrictions Management</a>.</p>
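<p>To save typing, the command above can be wrapped in a tiny shell function. The name <code>ytc</code> is made up here for illustration, and format 18 (a small MP4) is an assumption that suits slow machines:</p>
<pre><code># Hypothetical wrapper around the mpv invocation shown above;
# requires mpv and an up-to-date youtube-dl.
ytc() {
    mpv --vo=drm --ytdl-format=18 "$1"
}
</code></pre>
<p>Then watching a video is just <code>ytc some_url</code> from any console.</p>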
<h2>Conclusion: still possible, but not without caveats</h2>
<p>It's possible to live in the Linux terminal in 2021 for your daily tasks, as long as your routine doesn't involve very graphics-heavy work. Even as powerful as the console is, there are still some things - javascript-intensive webapps, image-heavy work, quick copying and pasting - where you simply cannot beat having a GUI and a mouse. Plus, if you have a GUI, you can always open a fullscreen terminal and mimic the workflow, whereas the opposite is not possible.</p>
<p>The console is still king, however, when it comes to productivity (stripping out flashy interfaces and colorful images and buttons everywhere reduces distraction enormously) and resource efficiency (512 MB of RAM is more than enough for a full-blown work session). It also breeds a learning mentality, as you have to hack your way through commands and options and discover how your computer actually works.</p>
<p>If this describes how you like to use your computer, then I definitely recommend that you try out going console-only for just a bit, and see what you can learn.</p>
<h2>Addendum: additional resources</h2>
<p>There are many, many other programs and resources out there to enhance your experience with the terminal. Most source and present information from the web so you don't have to open a web browser, others add a few more features and conveniences, and some work on data in a manner equivalent to GUI applications, so you don't have to leave the terminal. At any rate, checking them out is strongly recommended.</p>
<p>The <a href="https://github.com/agarrharr/awesome-cli-apps">awesome-cli-apps</a> Github repo contains a huge list of terminal-based applications for several categories.</p>
<p><a href="https://teddit.net/r/archlinux/comments/ho24p8/using_only_cli_for_arch_linux/">This thread</a> on the Arch Linux Subreddit also has some neat tips and tricks, including the aforementioned mpv trick to watch video. Definitely worth checking out.</p>
<hr />
<p>Have you ever used <em>only</em> the console or the command-line in your computer before? How was the experience? What did you learn? Share with me your thoughts at <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
<hr />
<p>This post is number #5 of my <a href="https://100daystooffload.com/">#100DaysToOffload</a> project. Follow my progress through <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
            </div>
        </content>
    </entry>

    <entry>
        <title>Internet Messaging done right</title>
        <link href="https://tilde.town/~kzimmermann/articles/messaging.html" />
        <updated>2020-09-18T06:14:41.976074Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>Internet Messaging done right</h1>
<p>Technology did a fantastic job of facilitating communication between distant people over the internet. Unfortunately, this also came at a price: privacy concerns, hardware and software requirements, walled gardens and censorship, to name a few.</p>
<p>These are some of the things I use (or used) to communicate with my close friends and family in the age of surveillance:</p>
<h2>Email</h2>
<p>It isn't really that bad when you add two things to it:</p>
<ul>
<li>Encryption via <a href="https://gnupg.org/">GnuPG</a>.</li>
<li>Decent software that manages email in a sane way.</li>
</ul>
<p>If you can convince your concerned parties to communicate via encrypted email (and they understand how it works) awesome. Next, make sure you use a proper lightweight email <em>client</em> and that it supports PGP. <code>mutt</code> is a must on the command-line, otherwise <code>claws-mail</code> is pretty nice.</p>
<p>There's even an app that uses email as an instant-messaging transport, enabling opportunistic PGP-like encryption when possible. Check out <a href="https://delta.chat/en/">Delta Chat</a>.</p>
<h2>Instant Messaging</h2>
<p>Say what you want, I still vote for XMPP.</p>
<p>Yes, classically it was naked without encryption, kind of verbose and standards are slow to move. But as of 2020, this reality has changed dramatically.</p>
<p><a href="https://en.wikipedia.org/wiki/OMEMO">OMEMO</a>, a modern multi-device encryption standard has matured and has been adopted into clients of many different platforms (no longer Conversations-only!), and for single sessions like a desktop, there's the classic option of <a href="https://otr.cypherpunks.ca/">OTR</a>.</p>
<p>Plus, there's the advantage of federation, and unlike IRC, the implementations are basically universal. There are even <a href="https://conversejs.org/">Javascript-based clients</a>.</p>
<h2>Video and audio</h2>
<p>Hard to beat <a href="https://meet.jit.si">Jitsi Meet</a> for this one, although I wish more implementations of the service were available (are there?)</p>
<p>Some XMPP clients may do audio-only calls in a P2P fashion through the <a href="https://en.wikipedia.org/wiki/Jingle_(protocol)">Jingle Protocol</a>.</p>
<h2>P2P communications</h2>
<p>It's worth looking into <a href="https://tox.chat/">Tox</a> if you have a single device and would like to add a one-stop-shop solution for IM, Video and Audio chats. </p>
<p>Tox has many implementations with varying amounts of features, ranging all the way from the barebones command-line client (<code>toxic</code>) all the way to full-fledged all-inclusive graphical clients for desktop (<code>qtox</code>, <code>utox</code>).</p>
<p>There's also an app (Antox) for Android, but be aware that it consumes quite a lot of bandwidth and battery life.</p>
<h2>Things to avoid</h2>
<ul>
<li>Anything Facebook-owned including WhatsApp.</li>
<li>Walled gardens, including DMs in social media like Twitter, Instagram.</li>
<li>Apps that are not open source.</li>
<li>Signal or Telegram (seriously, just use XMPP on your phone)</li>
<li>Slack, IRC (again, why not just use XMPP...)</li>
</ul>
            </div>
        </content>
    </entry>

    <entry>
        <title>kzimmermann's take on the Old Computer Challenge!</title>
        <link href="https://tilde.town/~kzimmermann/articles/my_old_computer_challenge.html" />
        <updated>2021-08-02T06:40:13.661162Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>kzimmermann's take on the Old Computer Challenge!</h1>
<p>Somewhat late to the party, but it looks like a few weeks ago <a href="https://dataswamp.org/~solene/2021-07-07-old-computer-challenge.html">Solène Rapenne</a> started what's been called the <em>Old Computer Challenge</em>, in which you were to survive for one week using the lowest possible computing resources in your reach. </p>
<p>In my understanding, the goal was less of "who can revive the oldest computer ever," but more of a "how much can you do with a limited set of hardware" in sort of a sprint-vs-marathon thing. Originally, it was set within a specific time period (July 10th and 17th) and included a set of rules that composed the challenge, quoted from her website:</p>
<blockquote>
<ul>
<li>1 CPU maximum, whatever the model. This mean only 1 CPU / Core / Thread. Some bios allow to disable multi core.</li>
<li>512 MB of memory (if you have more it's not a big deal, if you want to reduce your ram create a tmpfs and put a big file in it)</li>
<li>using USB dongles is allowed (storage, wifi, Bluetooth whatever)</li>
<li>only for your personal computer, during work time use your usual stuff</li>
<li>relying on services hosted remotely is allowed (VNC, file sharing, whatever help you)</li>
<li>using a smartphone to replace your computer may work, please share if you move habits to your smartphone during the challenge</li>
</ul>
</blockquote>
<p>And perhaps the more important point (emphasis mine):</p>
<blockquote>
<ul>
<li>if you absolutely need your regular computer for something really important please use it. <strong>The goal is to have fun but not make your week a nightmare</strong></li>
</ul>
</blockquote>
<p>I actually had caught wind of this challenge back when it was announced, but sort of forgot it as the days went by, and did not realize it had happened until I found the hashtag floating around in Mastodon a few days ago. Bummer, I'm too late! Or so I thought?</p>
<h2>Do I qualify?</h2>
<p>Coincidentally, I may have indirectly participated in the challenge without noticing, since just around the same time <a href="https://tilde.town/~kzimmermann/articles/old_pc_new_tricks.html">I found a ten year-old computer in the trash and put it to good use</a>. Because I wasn't deliberately doing the challenge, I may have "cheated" a few times: the one factor I wasn't paying attention to was RAM usage. This laptop has 4GB of RAM (probably added beyond the original specs), and at times I did try to limit usage to keep it under 1 GB. This time, I'm aware of the limitations and will be stricter about it: 512 MB of RAM usage or some program gets killed (manually, but still). </p>
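<p>To enforce that ceiling, I need a quick way to read current memory usage. A minimal sketch reading <code>/proc/meminfo</code> (Linux-specific; the <code>MemAvailable</code> field needs a reasonably recent kernel):</p>
<pre><code># Print used memory in MB: total minus available, converted from kB.
awk '/^MemTotal/ {t=$2} /^MemAvailable/ {a=$2}
     END {printf "used: %d MB\n", (t-a)/1024}' /proc/meminfo
</code></pre>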
<p>I may have some experience regarding the "suffering" of this challenge since previously my other personal challenge got me <a href="https://tilde.town/~kzimmermann/articles/living_in_linux_terminal.html">using only the command-line for a full week</a>. However, I also feel that, even though the tools used might be similar, this is a challenge with a different goal than sticking to the command-line. I believe the more relevant questions that can be answered are:</p>
<ul>
<li>Are there any modern graphical applications that still run lightweight enough in 2021?</li>
<li>Can you still browse the web in a functional and modern way without consuming a Gigabyte of RAM?</li>
<li>Are there any window managers that can truly be efficient and not take a chunk of the RAM, but still look pleasing?</li>
<li>If you had nothing but an old machine you found in the trash, could you still be technologically in line with the current environment?</li>
</ul>
<p>To answer that, let's see the software side.</p>
<h2>The software starter line</h2>
<p>Having interesting hardware to begin with is just half of the equation. Here's the software I'm using:</p>
<ul>
<li>Debian Buster as the OS.</li>
<li>Fluxbox as the window manager</li>
<li><code>netsurf-gtk</code> as the browser, with a fallback to <code>dillo</code> if it becomes "too heavy"</li>
<li>An assorted collection of minimalist applications, with the CLI gluing them together.</li>
</ul>
<p>Yes, my choice of OS is quite boring given the wild variety of distros being chosen by other people for the task (Slackware, Void, OpenBSD), and I know that I could've used <a href="https://tilde.town/~kzimmermann/articles/alpine_linux_desktop.html">Alpine Linux</a> for an even more lightweight environment. </p>
<p>However, it was the easiest platform to compile and install the drivers for my modern 802.11ac wifi dongle. Using Debian here also presents another interesting spin: can we fit everything within the RAM requirements despite systemd?</p>
<p>Let's see how some of my common tasks have been faring.</p>
<h2>The tasks</h2>
<p>Previously, I had been using this machine almost the same way as my work machine, really pushing the limits of what it could do with everything I had at hand. With the artificial limits imposed, however, we have to be more conscious. How much am I going to accomplish?</p>
<h3>Communication</h3>
<p>I am lucky enough that my closest people have adopted XMPP+OMEMO as their standard method of communication with me. This is one of the approaches that I consider <a href="https://tilde.town/~kzimmermann/articles/messaging.html">right for internet communications</a>, and it can easily be covered on a Linux desktop in a variety of ways.</p>
<p>For pure text messaging, I love to use the <a href="https://profanity-im.github.io/">profanity</a> terminal XMPP client, especially as it has started to <a href="https://profanity-im.github.io/guide/latest/omemo.html">support OMEMO encryption</a> in more recent versions. Alas, the version packaged for Buster (0.6.0) does not support it, and trying to build the newest from source didn't work either, due to the libc version it links against. There is alternatively a similar-looking messenger called <strong>poezio</strong>, written in Python, that also supports OMEMO, but installation fails here as well, probably for similar reasons.</p>
<p>I end up settling on gajim, which is a graphical client. Heavier than the former ones, but it works, and I can view pictures with it. I can still compensate by choosing other lighter apps to fit under 512MB.</p>
<p>I'm also using irssi for IRC, though I don't chat there very often.</p>
<h3>Browsing and internet</h3>
<p>Browsers are the first and foremost challenge when it comes to lightweightness.</p>
<p>On one hand you want them to be your "second operating system" and support, at any cost, every new thing that the W3C tosses out; on the other, you just want to read an interesting document with a few images here and there. Finding the balance between these is the key.</p>
<p>My first choice had always been dillo when it came to featherweight web browsers, but more recently, I discovered <a href="https://www.netsurf-browser.org/">netsurf</a>, which greatly impressed me. </p>
<p>Whereas dillo faltered and messed up some web pages with its CSS support, netsurf surprisingly rendered them quite well, and not at a large resource penalty: it takes about 150MB for a full browsing session. Plus, I could always shut it down and restart it later when needed since it starts up quite quickly.</p>
<p>The problem? Again, the netsurf package <a href="https://packages.debian.org/stretch/netsurf">is no longer in the Debian repositories</a>, having been removed around the Stretch era for some reason. Bummer, I thought, but thankfully, the build process from source was quite easy. A <a href="https://www.netsurf-browser.org/downloads/source/#BuildInstructions">detailed guide containing build instructions</a> accompanies the source code, and in fact, a quick setup script named <code>env.sh</code> makes installing dependencies very easy. </p>
<p>During the build process, however, I had no choice but to go over the challenge's RAM limits. Usage climbed to 2GB during compilation; I'm not sure how that would have gone with only 512MB.</p>
<p>If I have to run the browser in parallel with another resource-hungry program, however, I'm quite happy to fall back to dillo, or even go full CLI with elinks. As long as the webpage is properly written, both browsers can display the content.</p>
<p>For email, my usual choice of clients even on resource-abundant systems is already pretty lightweight (mutt and sylpheed), so there was no difference there. I could not imagine using something like thunderbird in the 512MB world, though.</p>
<p>The challenge here is Mastodon: how can I use it without firing up a large browser? It turns out there's a command-line client for it called <code>toot</code>, written in Python, which you can install via pip. It's great for reading and posting toots, but it doesn't have a good way to catch up on notifications within the client itself (they're only reachable via <code>toot notifications</code>). That's something still missing, but it at least lets me keep up with things. RAM usage: about 37MB.</p>
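<p>The RAM figures quoted in this post come from eyeballing per-process memory use; a minimal sketch of that check with <code>ps</code> and <code>awk</code> (assuming the procps-style <code>ps</code> found on typical Linux distros):</p>
<pre><code># Show the five largest processes by resident set size, in MB.
ps -eo rss,comm --sort=-rss \
    | awk 'NR&gt;1 {printf "%6.1f MB  %s\n", $1/1024, $2}' \
    | head -n 5
</code></pre>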
<h3>Media</h3>
<p>Browsing YouTube or another large source of videos directly is clearly out of the question. The alternative? MPV with good ol' youtube-dl.</p>
<p>Debian's own version of youtube-dl (auto-installed with MPV) certainly sucks, which is why I always download it straight from the yt-dl website, and simply keep it up to date via <code>youtube-dl -U</code> every now and then. The results are quite impressive given the browserless stack, and I can even write a script for it:</p>
<pre><code>#!/bin/bash
# Watch YouTube without a browser, lightweight mode

mpv --keep-open --ytdl-format=18 "$1"
</code></pre>
<p>Then call this via <code>yt some_url</code> or something from the Command Prompt (Alt+F2).</p>
<p>For my local music collection, I could've gone the uber minimalist way and done something like:</p>
<pre><code>mpv --shuffle --no-video *
</code></pre>
<p>Which would allow me to control the thing in the terminal with the <code>&lt;</code> and <code>&gt;</code> keys, but I thought it would be better to just avoid the thing altogether and go with a more sophisticated solution: <a href="https://moc.daper.net/">MOC</a>. Command-line, detachable, client-server, it's simply the best solution that I got for a lightweight music player. RAM usage sits at about 15MB with it playing.</p>
<h3>Managing documents</h3>
<p>This is the other biggie, right after web browsing. Everyone talks about the browser wars, yet an equally big one passes almost unnoticed: the office suite. Granted, outside of work I almost never deal with word documents or spreadsheets, so I don't pay nearly as much attention to it as I do to browsing. However, now that we have a hard ceiling on RAM consumption, can we stay under it?</p>
<p>Gnumeric and Abiword are the lightweight players compared to LibreOffice. They have styling and usability issues of their own, but I think that in most cases they are functional enough to read and edit MS Office documents. However, like everything in a 512MB system, their lightness is relative: Gnumeric consumed 72MB of RAM opening a basic spreadsheet - and I don't know of any graphical spreadsheet application more lightweight than that. Killing other tasks to make way for the document works too, though, as it starts up quite quickly.</p>
<p>And when it comes to PDF viewing, I think nothing really beats the simplicity of MuPDF. Quick to start, minimal on resources and always usable, it feels almost like a CLI application due to the extensive keybindings that can control almost everything in it.</p>
<h2>Findings and conclusion</h2>
<p>And finally, here's the part that everybody wants to read: did I get some sort of big epiphany, some enlightenment that I don't actually need so many computing resources? Did it spark a revolution inside me that made me throw away my newer, super-powerful machines and dig my oldies back out as my production environments?</p>
<p>Sadly, <em>no.</em> One thing I learned was that an operating system, no matter how light and fast, will never make a slow computer run faster. However, the experience did leave me with some genuinely interesting insights:</p>
<ul>
<li><strong>Swapping is alright:</strong> ever since buying more RAM became affordable - and a much better option than buying a new computer - I came to dislike the very concept of using swap space. "Eww, my computer is using 50MB of swap, what's wrong with it?" Living in a more modest environment showed me that swapping is OK, not a nuisance, and that not every computing task requires 4GB of RAM. In fact, today I'd say you can do a lot more <em>because</em> of swapping.</li>
<li><strong>You can close programs, too:</strong> much like the previous point, if you are running short of memory or resources, it's OK to close programs you're not using! We tend to open a default set of startup programs as a knee-jerk reaction, but barely touch most of them during the day. One example: your mail client. Seriously, do you spend even half an hour a day in total interacting with it? If you need to open a program but don't have enough memory, it's OK to close another one and re-open it later.</li>
<li><strong>Computing with focus is more efficient:</strong> ever since tabbed browsing was invented, we have developed a habit of middle-clicking everything that looks interesting to "read back later," only to realize a few hours on that 30+ tabs have piled up and we have no idea why. Not having resources to spare taught me to be more focused and to do every task on my computer with a purpose. You can achieve the same results, but with fewer resources.</li>
<li><strong>There's beauty in small:</strong> be it the speed and nimbleness with which applications perform, or the cleverness of having everything glued together with a terminal multiplexer, small applications are beautiful. Like I stated, they won't make a slow CPU fast or anything like that, but the fact that they allow things to get done <em>in spite</em> of low resources is empowering. And that's a goal everyone should strive for.</li>
</ul>
<p><img alt="Screenshot of the computer I used" src="/~kzimmermann/images/chunkyboi.png" /></p>
<p>So there you have it. I may not have done the challenge entirely by the original rules, but the experience was similar, even if artificially emulated. Who knows, next time I might try it with my Raspberry Pi B with Alpine Linux in sys mode!</p>
<hr />
<p>Have you tried the Old Computer Challenge by Solène, or a similar one before? How did you fare and what did you learn? Let me know on <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
<hr />
<p><strong>Edit:</strong> since so many people asked, <a href="/~kzimmermann/images/flatart1.png">here's the wallpaper I'm using in the screenshot above</a>. It's a wallpaper I found while randomly browsing Reddit.</p>
<hr />
<p>This post is number #24 of my <a href="https://100daystooffload.com/">#100DaysToOffload</a> project. Follow my progress through <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
            </div>
        </content>
    </entry>

    <entry>
        <title>Notes on installing Arch Linux via archinstall for the first time</title>
        <link href="https://tilde.town/~kzimmermann/articles/notes_archinstall.html" />
        <updated>2023-01-22T15:38:39.873155Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>Notes on installing Arch Linux via archinstall for the first time</h1>
<figure>
    <img src="https://tilde.town/~kzimmermann/images/artix-arch-difference.png" alt="Gus Fring meme: You installed Artix because you wanted to be SystemD-free. I installed Artix because I didn't want to bother with Arch's install procedure. We're not the same." />
    <figcaption>True story, it's why I tried out Artix in first place...</figcaption>
</figure>

<p>As of the time of this post's publishing, I've been using Artix Linux (not Arch) for <a href="https://tilde.town/~kzimmermann/articles/aur_made_easy.html">about two years straight</a>. Before that, however, despite the fact that I had been using Linux for <a href="https://tilde.town/~kzimmermann/articles/first_starting_linux.html">about 10 years or so</a>, I embarrassingly had not tried that many distros. Familiar territory was Debian and Ubuntu, and "play" meant exploring live distros like Puppy every now and then.</p>
<p>Thus, in early 2021 I decided to take a rather radical approach to my software life and try a distro family I had never touched before, via the Arch-based <a href="https://artixlinux.org">Artix Linux</a>. Though it shares much in common with Arch, package manager and all, they are ultimately two separate distributions - most evidently in the init system - and not 100% compatible with each other. Some even <a href="https://yewtu.be/watch?v=SVc6n5aOzy0">call it better than Arch itself</a>, though that's quite a strong personal opinion.</p>
<p>The results of this endeavor were amazing: over 2021 I diversified my OS portfolio to include <a href="https://tilde.town/~kzimmermann/articles/learning_freebsd_as_linux_user.html">FreeBSD</a>, <a href="https://tilde.town/~kzimmermann/articles/alpine_linux_desktop.html">Alpine Linux</a> and a little stint of Tiny Core on my Pi, culminating in a <a href="https://tilde.town/~kzimmermann/updates/20220905_0944.html">bare-metal install of the world's most secure Operating System</a>. All was going well. Last month, however, something else came to mind: why not try a pure Arch install and see how things go?</p>
<p>The idea caught me by surprise, as my usual answer to that prompt had been that I didn't have the time for something requiring such a complex install procedure. This time, though, I knew the excuse was dead: the Arch Linux project had released an installation script, akin to a wizard, known as <code>archinstall</code>. Could it be time, then, to at last get the pure and pristine Arch experience? You bet.</p>
<p>This is a quick post on my findings following my first serious install of Arch Linux using only the <code>archinstall</code> script - considered by many a "blasphemy" to the Arch way.</p>
<h2>What's the problem with a script?</h2>
<p>For starters, let me just say up front that Arch having an installer by no means makes it "watered down" or n00bish. To me, a distribution has to be practical in the first place to be powerful and useful. What's more, pretty much <em>every</em> other practical distribution out there makes use of an installer script of sorts - be it Alpine, Debian, or even FreeBSD and OpenBSD. All of them are solid operating systems, capable of installing very minimal systems up front for free customization later.</p>
<p>The bottom line is: the installer is a great way to get Arch up and running quickly and simply, but unlike the installers of other distros, it's not completely foolproof - and may very well leave you hanging with an unusable system. Thankfully, the fix is easy - but you gotta watch out for a few steps...</p>
<h2>Before running the script</h2>
<p>Ok, so you've downloaded, verified and burned your Arch ISO. Slap that USB drive in and give it a boot! Watch the boot messages scroll by (noticeably slowly!), and at the end you've got a root shell prompt. All good to run the installer, right?</p>
<p>Not so fast. Apparently, judging from what I've learned on IRC, the live ISO has a timing issue that makes the installer fail rather silently if you start it before the <code>iwd</code> daemon (which manages WiFi connectivity) has started. It sure did for me a few times, which left me with a raised brow for a few minutes. Then again, I was on a laptop without an Ethernet cable, so I needed WiFi to proceed.</p>
<p>So what do you do? Have some patience and wait a little before attempting the install - something like 3 minutes should work. Then connect to the internet, because that is the one thing the installer will not do for you. From the live ISO, you use <code>iwctl</code>, a little like this:</p>
<pre><code># iwctl
[iwd]# device list
[iwd]# station wlan0 scan
[iwd]# station wlan0 connect YOUR_SSID
(then enter your passphrase)
</code></pre>
<p>Here my WiFi adapter was identified as wlan0, but it may vary. Side note: this live Arch ISO is actually pretty neat. It contains goodies like <code>tmux</code>, <code>irssi</code>, the robust <code>zsh</code> shell and even <code>lynx</code> to browse the Arch Wiki if you need!</p>
<p>Now you're ready to run <code>archinstall</code> and let it (presumably) do the rest for you. Ready?</p>
<h2>During the install</h2>
<p>Go right ahead and run <code>archinstall</code>. You're going to notice that the screen will blink once in the process as the framebuffer activates a different font. Nice little detail.</p>
<p>One notable thing that happened during my installation attempts was this: disk partitioning failed and aborted the script instantly. I'm not quite sure why, since the error messages were a little confusing, but maybe it had to do with my request to encrypt the disk before installing the system. After two failed attempts with encryption, I sort of gave up and went for an unencrypted install instead.</p>
<p>So if the disk-prepping step gives you any trouble, my advice is exactly this: just retry. That's right - give it another try or two and it will probably work. I'll be damned if I know why, but hey, that's what worked for me.</p>
<h2>Don't reboot just yet!</h2>
<p>Everything went alright? Great. All clear and ready for that reboot, right?</p>
<p><em>NO!</em> Stop right there! Look, this is your last good step <em>with internet</em> before stepping into the cruel and unforgiving world of the raw Arch base install. Are you <em>really</em> sure that there isn't anything missing?</p>
<p>I can tell you at least one package that you're going to miss: <code>iwd</code>. Wait, what? Didn't you just use that service to ensure internet connectivity all this time? True - but here's the catch: that <code>iwd</code> belonged to the <em>installer</em> environment; the system installed on the disk <em>doesn't</em> include it by default. Thus, if you want any sort of WiFi connectivity afterwards, you must install <code>iwd</code> before the big reboot. This is a big catch that I found out the hard way, so take the lesson: install <code>iwd</code>, or some other way to manage WiFi connectivity, now.</p>
<p>Of course, you can install other packages now too, as you wish. I didn't feel the need, since I knew that with WiFi secured I'd be able to install them on demand. And with that, go ahead and do the big reboot, yank out the USB stick and watch the glory of your freshly installed Arch machine rise.</p>
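<p>Concretely, that pre-reboot step can be sketched like this, run from inside the installed system (<code>archinstall</code> offers to chroot you into it at the end, and <code>arch-chroot /mnt</code> from the live ISO works too). The guard is only there to make the sketch harmless on non-Arch systems:</p>
<pre><code>#!/bin/sh
# Make sure WiFi survives the reboot: put iwd on the disk itself.
if command -v pacman >/dev/null; then
    pacman -S --noconfirm iwd     # the same daemon the live ISO used
    systemctl enable iwd.service  # bring it up on every boot
    status="iwd installed and enabled"
else
    status="not an Arch system, skipping"
fi
echo "$status"
</code></pre>
<p>With that in place, <code>iwctl</code> works after the reboot just like it did on the live ISO.</p>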
<h2>Conclusion</h2>
<p>I feel that <code>archinstall</code> is a huge step forward in facilitating the adoption of plain Arch Linux (and not derivatives like Artix or Manjaro) to the larger Linux crowd. It's not perfect, but takes out about 80% of the total work required to install it. The greatest part is that hardcore old-schoolers can still be happy: a manual procedure is still alive and kicking, and could be leveraged for complete customization. Either way you'll end up with a baseline Arch system!</p>
<p>In hindsight, I later found a life-saving tip on Reddit on how some of these <code>archinstall</code> errors could be avoided before even attempting the installation: <em>update the script before you run it!</em> That's right. Remember that Arch is a rolling-release distribution; even though the latest Archiso image, released only two weeks ago, might seem pristine and cutting-edge, new packages may well have been released in the meantime.</p>
<p>Who knows whether those updates include fixes to the installer itself? Hence, next time I'm installing Arch, I'm going to squeeze the following commands in between setting up WiFi and running the installer:</p>
<pre><code>~ # pacman -Syy # refresh pacman's cache
~ # pacman -S archinstall # update archinstall to latest version
</code></pre>
<p>A neat trick - quick, and it might well spare you all the headaches above <code>;)</code>.</p>
<p>Another thing I noticed after the installation is that my WiFi connection tended to drop after resuming the computer from sleep. The culprit turned out to be that, for some reason, the routing table was erased, and thus packets could not be forwarded out of my LAN. I don't know exactly what causes this, and haven't found a permanent fix yet, so in the meantime I'm hacking around it with:</p>
<pre><code># ip route add default via 192.168.1.1 dev wlan0
</code></pre>
<p>If it works, it works! (192.168.1.1 here is my router's address - substitute your own gateway and interface.)</p>
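<p>A slightly less manual version of the same hack: systemd runs every executable in <code>/usr/lib/systemd/system-sleep/</code> with <code>pre</code> or <code>post</code> as its first argument around each suspend, so the route can be restored automatically on resume. The gateway and interface below are the ones from my LAN - substitute your own:</p>
<pre><code>#!/bin/sh
# Save as /usr/lib/systemd/system-sleep/restore-route.sh and chmod +x.
case "${1-}" in
    post)
        # Just resumed: re-add the default route only if it's gone.
        if ! ip route show default | grep -q default; then
            ip route add default via 192.168.1.1 dev wlan0
        fi
        ;;
    *)
        : # "pre" (about to sleep) or a manual run: nothing to do
        ;;
esac
</code></pre>
<p>Still a workaround rather than a root-cause fix, but at least it saves retyping the command after every nap.</p>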
<hr />
<p>How did you first install Arch Linux? Was it difficult? What was your experience with the installer? Let me know on <a href="https://fosstodon.org/@kzimmermann">Mastodon</a>!</p>
<hr />
<p>This post is number #42 of my <a href="https://100daystooffload.com/">#100DaysToOffload</a> project. Follow my progress through <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
            </div>
        </content>
    </entry>

    <entry>
        <title>Free Software: We are (not) sorry to see you go</title>
        <link href="https://tilde.town/~kzimmermann/articles/not_sorry_free_software.html" />
        <updated>2021-04-17T01:33:01.348601Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>Free Software: We are (not) sorry to see you go</h1>
<p>After seeing some people post their experiences ditching WhatsApp following the <a href="https://tech.hindustantimes.com/tech/news/whatsapp-updates-terms-of-service-accept-it-or-your-account-will-be-deleted-71609873162284.html">updates to its privacy policy</a>, I noticed a common theme in the account deletion process of those centralized, awful data silos:</p>
<p>"We're sorry to see you go!"</p>
<p>Says the "free" provider who for years has been using your data to make money while attempting to extract even more from you. That's right: "we're sorry" that you won't be providing us that free source of money anymore. "Tell us what happened" so that, in the future, we don't annoy or creep out our other suckers - I mean, sources of money - like we did you.</p>
<p>Some of them might even attempt to persuade you further with a "don't go, can we do anything differently?" or, worst of all: "we'll keep your data here for 14 days (or actually <a href="https://www.dailydot.com/debug/facebook-account-deletion/">30 days</a>?); if you change your mind, we'll be here for you."</p>
<p>Make no mistake: these providers care little about you - they're only sorry part of their revenue is going away. Wipe away those crocodile tears, and you'll find again the quintessential recipe of Surveillance Capitalism.</p>
<p>Look to the other side of the spectrum and you'll find our Free World of FOSS. No mining and monetization of personal data; services are volunteer-run or financed via donations - but there's still plenty to fight and argue about. Given a big enough disagreement, people <em>will</em> leave the platform or even ditch the project. When that happens, are we sorry?</p>
<p>Ten years ago, I would've said yes. Today? <strong>Absolutely not.</strong></p>
<p>There was a time, when I was just discovering the magic of FLOSS, that I thought it was my duty to spread the word and try to convert as many people as possible to Linux. Hey, Ubuntu is very easy, you know? It's as usable as Windows 7. Linux is lightweight, it's very fast. It's free now and forever, you can try as many distros as you want, even on all your old machines rendered unusable by Windows and viruses. You can use a live medium, no install required...</p>
<p>Etc. Naturally, these pitches were scoffed at, with sarcasm or a "sorry dude, I'm not that much of a geek yet" following afterwards. I don't exactly blame myself for the frustration of trying; I was young and very excited, perhaps even a little fanboyish, and wanted to do good. But the effort in that personal "war against Microsoft" went rather fruitlessly.</p>
<p>Whereas back then I would rush to the defense with my digital pitchfork whenever some Free Software project was criticized for reason X, today I might as well agree with the person and say "yeah, you know, maybe that software just isn't cut out for you."</p>
<p>Defeatism? I think not. Rather, I think that unlike proprietary data-silo software, we don't need to have everyone on board to survive. If somebody doesn't like us, they were not our audience to begin with - period. Why waste effort trying to reel them in when they - by design - don't even bring revenue? The people who are meant to use Free Software will keep using and improving it. No need to "one-size-fits-all" it for surveillance capitalism.</p>
<p>This is a natural consequence of <a href="https://www.gnu.org/philosophy/free-sw.html.en">Freedom 0 of the Free Software definition</a>: if you have the right to use the software in any way you want, you also have the right <strong>not to use it</strong>. This may sound obvious when you read it, but in the real world, it seems that a vast majority has no idea about it. </p>
<p>See, if you want to bitch around about how Linux "isn't ready for the Desktop" because of reasons like "it doesn't play games well" or "LibreOffice will never be a true replacement for MS Office," the door is right there, buddy. Nobody is asking you to use it, and <em>we</em> certainly don't have to keep up with your whining and attention-seeking. It didn't work for you? That's too bad - but we're not sorry. <strong>Just go, already.</strong></p>
<p>Is this what "being toxic" is? No, not at all. That's just giving users the right to exercise Freedom #0. Perhaps we should even spin it out into a new one so that it's crystal clear for everyone. Call it Freedom -1: The freedom <em>not</em> to use the software if you don't want to or like it.</p>
<p>So next time you read a toot where someone threatens to ditch Firefox for Chrome, GIMP for Photoshop, or Linux for Windows, do them a favor and wave them an <em>earnest goodbye</em>. It was a good try, but it simply didn't work for them. Thanks for trying, good luck, the door's open if you want to come back, have a good day.</p>
<p>But make no mistake: we're <em>not</em> sad to see you go.</p>
<hr />
<p>This post is number #12 of my <a href="https://100daystooffload.com/">#100DaysToOffload</a> project. Follow my progress through <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
            </div>
        </content>
    </entry>

    <entry>
        <title>Teaching an old laptop some new tricks</title>
        <link href="https://tilde.town/~kzimmermann/articles/old_pc_new_tricks.html" />
        <updated>2021-07-25T02:58:56.783819Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>Teaching an old laptop some new tricks</h1>
<p>About a week ago, I found <a href="https://fosstodon.org/@kzimmermann/106602410133634200">an abandoned old laptop</a> by the curb in my trash area, and my <a href="https://tilde.town/~kzimmermann/articles/dumpster_diving_hacker.html">Dumpster diver's spirit</a> immediately prompted me to pick it up, give it a good cleanup and see if it was still in a workable state - or whether I could at least salvage some parts.</p>
<p>A week later, I'm daily-driving that very discarded machine at my workplace and it's performing great, despite only having a single-core Celeron CPU and 4GB RAM.</p>
<figure>
    <img src="https://fosstodon.b-cdn.net/media_attachments/files/106/602/083/065/385/920/original/76db70afebff4d9b.png" alt="A Fujitsu LifeBook A540/A receiving a fresh Debian install" />
    <figcaption>
        This Fujitsu Lifebook A540 was entirely revived and now runs with the big guys at my office.
    </figcaption>
</figure>

<p>This might as well be just another one of the many, many Linux-on-the-desktop success stories out there, saving old computers with a more secure OS, and it's definitely not the first time I've done it. Still, this time it held a special meaning for me; it was <em>really</em> like those movies where the underdog gets the prize. All of that from a simple stack of Free Software and a USB WiFi dongle. Follow more of the story in this post.</p>
<h2>The initial situation</h2>
<p>When I first inspected the laptop and found out that it was fully functional, with all parts working, my next decision was which OS to install on it. I had a few candidates up my sleeve, including my recent successes with <a href="https://tilde.town/~kzimmermann/articles/aur_made_easy.html">Artix</a> and <a href="https://tilde.town/~kzimmermann/articles/alpine_linux_desktop.html">Alpine Linux</a>, and I was also pretty excited to try out BunsenLabs, given my <a href="https://tilde.town/~kzimmermann/articles/saving_artix_install.html">positive recent experience</a> with it as a rescue medium - but that one failed during installation.</p>
<p>In the end, however, I settled on Debian Buster, which I had previously tried on - ironically - another salvaged machine, wondering whether it was lightweight enough for this one. Speaking of which, here are its unimpressive specs:</p>
<ul>
<li>CPU: Intel Celeron 900 @ 2.2GHz (<a href="https://www.notebookcheck.net/Intel-Celeron-M-900-Notebook-Processor.33961.0.html">single core!</a>)</li>
<li>RAM: DDR3 400MHz 4GB</li>
<li>Disk: 160GB HDD</li>
<li>Ethernet port, but <strong>no built-in wifi</strong>.</li>
</ul>
<p>Ok, so that's not the weakest machine I've ever worked with, but these specs are low even for a 2010-made machine (even my 2006 Dell had a better CPU). The CPU seems to be the big bottleneck here, so running out of memory won't be too much of a concern, I think. More surprising is the lack of WiFi - people, this laptop was built in 2010! How come it doesn't support this bare minimum of mobility?! Luckily, I have some spare USB WiFi dongles, which I use mostly when built-in WiFi isn't supported out of the box, so not everything was lost.</p>
<p>So, in a context like that, how does Buster fare as a desktop?</p>
<p>As it turns out, <em>very well</em>. Compared to other super-minimalist distros, it offered a great balance between speed and usability, with pretty much everything I needed working fast enough. However, the lack of WiFi is indeed out of place by 2021 standards, so we'll fix that next.</p>
<h2>Getting it to work in a modern environment</h2>
<p>I'm no stranger to not having WiFi working at times - I've lost count of how many distros I've booted without connectivity, or had to activate it after the install. The fix is usually easy: insert the USB WiFi dongle, then select it from the GUI network manager. So I did that - but still no juice. What the hell?</p>
<p>It turns out that Debian's <a href="https://www.computerworld.com/article/2723388/debian-gnu-linux-seeks-alignment-with-free-software-foundation.html">policy of not including nonfree software repos by default</a> meant that the drivers for my dongles were missing from the base install, and couldn't be installed even over a cabled connection. Oops. No biggie, though: I simply downloaded the missing firmware debs on another machine and installed them locally, and once WiFi was working, it was just a matter of enabling the <code>contrib</code> and <code>non-free</code> repos in the apt sources list.</p>
<p>However, that is a rather simple dongle: it only speaks the 802.11n WiFi standard, doesn't support 5GHz channels, and by today's infrastructure standards can be considered quite old. I have another, much faster 802.11ac dongle - but could I install the drivers for it?</p>
<p>This <a href="https://github.com/jeremyb31/rtl8812au-1">community-supported repository on GitHub</a> maintains a more recent fork of an unofficial, Kali-targeted driver for my device (rtl8812au), and luckily it was fairly easy to install. I needed the <code>dkms</code> package to do the kernel module install, but that was pretty straightforward:</p>
<pre><code>apt-get install dkms
git clone https://github.com/jeremyb31/rtl8812au-1
cd rtl8812au-1/
./dkms-install.sh
</code></pre>
<p>I presume that as long as the distribution supports <code>dkms</code> and ships a compiler toolchain, the process should be the same.</p>
<p>The compilation from source was straightforward, but did take a while on this CPU. At the end of the install process, automated through the script, the module was loaded and good to go. Next step: plug in the dongle, figure out its weird device identifier (<code>wlx28ee52bcbc13</code> here) and add it to wicd. Modern WiFi is ready!</p>
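<p>By the way, if you'd rather not guess that identifier, listing the kernel's view of the network interfaces is enough - plain sysfs, no extra tools required:</p>
<pre><code>#!/bin/sh
# List network interfaces straight from sysfs; USB WiFi dongles show
# up with predictable "wlx" + MAC-address names, like wlx28ee52bcbc13.
for iface in /sys/class/net/*; do
    echo "${iface##*/}"
done
</code></pre>
<p><code>ip link</code> gives the same information with more detail, if iproute2 is installed.</p>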
<h2>The bottom line</h2>
<p>Even as I write this essay, this seemingly obsolete machine is alive and kicking, doing real work alongside the much more modern machines in the office. Even though some tasks, like watching videos, do take their toll on it, it's still perfectly usable with the addition of only a few peripherals. I've even set up a Tor hidden service on it and was able to SSH in from my house with no problems. RAM usage sits at around 2.0/3.76GB under peak browser and other load, and yet the system remains responsive without swapping.</p>
<p>I was extremely impressed with how this true underdog, a "grandpa" machine, became a useful workhorse thanks to GNU/Linux and Free Software. It's not ready to become the next PCMasterRace top rig, but dammit, it works <em>very</em> well, even when I play <a href="https://diode.zone/videos/watch/7599d20b-1bff-4648-bcb5-44072e8d5b89">AssaultCube Reloaded</a> (no lagging under low details!)</p>
<p>I have a feeling that "back in the day" it would've been quite a good machine by comparison. I'm thinking about taking it back home and trying out a few other distros to lower the RAM usage further, but I'm quite happy with Buster.</p>
<p>Linux wins yet again!</p>
<hr />
<p>What was the oldest or weakest machine that you were able to "revive" using Linux? Have you got some other old hardware stories to share? Let me know on <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
<hr />
<p>This post is number #22 of my <a href="https://100daystooffload.com/">#100DaysToOffload</a> project. Follow my progress through <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
            </div>
        </content>
    </entry>

    <entry>
        <title>Where does your personal data go after the trash?</title>
        <link href="https://tilde.town/~kzimmermann/articles/personal_data_trash.html" />
        <updated>2021-06-08T08:28:17.925011Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>Where does your personal data go after the trash?</h1>
<p>My recently-found hobby of <a href="https://tilde.town/~kzimmermann/articles/dumpster_diving_hacker.html">Dumpster Diving</a> certainly brings me some very interesting things that people chuck away as some sort of "useless technology" that can be amusing or useful to me, or most of the time both. Aside from the technology items, sometimes there are also other interesting findings, like <a href="https://fosstodon.org/@kzimmermann/106085428104540879">old music CDs</a> and even pirated DVDs of varying quality. </p>
<p>There is, however, one worrying point that keeps showing up more and more in recent dives: <strong>personal data of previous owners</strong> discarded inside or alongside other pieces of "useless" technology and deemed lost or "unrecoverable" forever - that is, until someone else picked them up and realized they actually contained data.</p>
<p>When I first noticed this pattern, it seemed like no more than a small amusement to me. I'd laugh at the previous owners' carelessness in simply chucking thumb drives and SD cards away without even remembering to delete the files first, and proceed to format the medium for my own secure usage. However, as I kept finding more of these things carelessly discarded, it grew into a sort of collective worry for them, along the lines of "shoot, are these people seriously doing this?"</p>
<p>Rather than waste my energy worrying or trying to convince distracted users, I decided to put into this post some of the more extreme or alarming findings from my dumpster diving adventures, along with what we can do to protect ourselves.</p>
<h2>Your toys have your PII</h2>
<p>One of my recent dumpster finds is an intact 2014-era Nintendo 3DS, a relatively rare sight among the usual broken-screen or torn-apart units I find discarded in my area. More interestingly, upon inspection it still had its SD card inside. Could it still be working?</p>
<figure>
    <img src="/~kzimmermann/images/ds_trash.jpg" alt="A pink 3DS and an SD card" />
    <figcaption>
        An intact Nintendo 3DS recovered from the trash... with the SD card inside!
    </figcaption>
</figure>

<p>I thought about wiping it clean to sell it, but it turned out that the device itself was locked with some sort of <a href="https://play.nintendo.com/parents/crash-courses/parental-controls/">parental controls from Nintendo</a> that prevented me from factory resetting it. However, the wonderful hacker community once again surprised me when I found a very clever <a href="https://mkey.salthax.org/">Master Unlock Code Generator</a> from a guy who reverse-engineered Nintendo's algorithms. It worked perfectly and with that I had just "rooted" the device for myself. Small win for the day!</p>
<p>A larger treasure was the SD card still inside the device - was it still usable? At 2GB in size it wouldn't be too useful for storage, but it would definitely have its place as a boot medium for things like Puppy Linux. After setting up a live Linux session on my test machine without hard drives (malware could jump, I guess, and I had time to prepare the environment), I plugged in the SD card and found out it was actually chock-full of things - some of them <em>very</em> personally identifiable.</p>
<p>Most of the data stored was medium-resolution screenshots. I presume that, as part of the parental control mechanism, the DS <em>took a screenshot of whatever game was running every 10 seconds or so</em>, in a manner similar to how Cellebrite, <a href="https://www.middleeasteye.net/news/signal-israel-intelligence-cannot-hack-phone">the software that once boasted it had supposedly "hacked Signal,"</a> works (unlike with Cellebrite, though, I don't know whether these screenshots were also transmitted over the network). I figure that this way, insecure and easily impressionable parents could confiscate and "inspect" the DS at some unexpected time and know whether their child was looking at porn in the DS's browser or playing a game not intended for their age range. </p>
<p>Whether this is effective parenting is debatable (I think not), but it surely leaves quite a lot of valuable identifiable information behind should the device end up in the wrong hands. For example, one of the games loaded on this DS is an <a href="https://en.wikipedia.org/wiki/Augmented_reality">AR game</a> that overlays "enemies" decorated with your and your friends' faces onto the DS's camera feed of the place you're in. You can shoot these enemies to score points and walk around the area to collect additional powerups. Coupled with the constant screenshotting function, one can get quite a detailed idea of <em>what the child's house looks like from the inside</em>, what their family looks like, and who they spend time with.</p>
<p>Concerns over this sort of AR merging on top of real data are <a href="https://www.forbes.com/sites/thomasbrewster/2016/07/11/pokemon-go-google-privacy-disaster/">as old as Pokémon Go</a>, and here we have another attack vector: the evidence is left unencrypted in the local storage. No forensics needed - lovely.</p>
<p>Next, we have the terrible but ridiculously overdone idea of the internal-facing built-in camera of the 3DS. I mean, we aren't spied on enough in our daily lives yet, so we should add yet another camera facing us up close, right? And, as most children these days are being taught to do, there were the selfies. Alone, with friends, at home, at places - more PII being leaked around. Not much of a problem if it stays in local storage and is shared only with your trusted friends, but a larger threat should it fall into unintended hands. And as we just saw, when you throw a device in the trash without formatting it, there's a real chance someone curious could still pick it up.</p>
<figure>
    <img src="/~kzimmermann/images/data_risk4.png" alt="selfies that could identify someone" />
    <img src="/~kzimmermann/images/data_risk6.png" alt="selfies that could identify someone" />
    <figcaption>
        Selfies like these could personally identify you if they fell in the wrong hands.
    </figcaption>
</figure>

<h2>How much identifiable stuff can fit in such a tiny thing?</h2>
<p>Next up is a USB drive I also salvaged from the trash. It's <em>really</em> tiny in size, with the storage "head" being about 0.5mm not counting the USB connector itself. Yet, this one was a real treasure - 32 GB of storage in apparently good condition.</p>
<figure>
    <img src="/~kzimmermann/images/storage_trash.jpg" alt="tiny 32GB mass-storage USB device" />
    <figcaption>
        Storage density is becoming quite extreme...
    </figcaption>
</figure>

<p>As with the DS, I guess the previous owner didn't think anyone else could find it - even I had trouble spotting it in the mountain of trash it was in - and threw the device away without formatting or encrypting it. Judging from the contents, I'm guessing he was a middle-aged small business owner, since instead of pirated MP3s, what I found were his plans to staff and operate a business:</p>
<figure>
    <img src="/~kzimmermann/images/data_fail1.png" alt="Contents of the USB showing some HR documents" />
    <img src="/~kzimmermann/images/data_fail2.png" alt="More real life information" />
    <figcaption>
        Detailed plans and personal information of employees, available in plain sight. What a PR disaster it would have been had this come from a large, well-known corporation.
    </figcaption>
</figure>

<p>This goes to show that sometimes, however much you try to safeguard your own information, it ends up in the hands of someone who is careless and doesn't treat it securely enough. <a href="https://tilde.town/~kzimmermann/articles/senpai_you_have_been_pwned.html">Yes, I'm looking at you, online services.</a> And if this person backed up these contents into some sort of "cloud" storage, then my possession of this information is <a href="https://tilde.town/~kzimmermann/articles/project_128.html">really the least of his privacy concerns</a>.</p>
<h2>Not even your phone?</h2>
<p>Last but not least, we have the case of someone who dumped a perfectly working smartphone in the trash as "broken," presumably because the screen had cracked. There was no SD card in it this time, but that didn't protect the internal storage. And for some <em>spectacularly dumb</em> reason, this phone's firmware actually allows anyone to view the pictures taken with it - <strong>even if the screen is locked and you don't have the password.</strong></p>
<figure>
    <img src="/~kzimmermann/images/lockscreen_fail-1.jpg" alt="Phone is locked and I don't know the password" />
    <img src="/~kzimmermann/images/lockscreen_fail-2.jpg" alt="All pictures can be seen without the password" />
    <img src="/~kzimmermann/images/lockscreen_fail-3.jpg" alt="All pictures can be seen without the password" />
    <img src="/~kzimmermann/images/lockscreen_fail-4.jpg" alt="All pictures can be seen without the password" />
    <figcaption>
        Don't have the password? It's alright. Just open the camera app from the lockscreen of this Huawei phone and start swiping left. You can see <em>every single picture and video</em> that was ever taken with it.
    </figcaption>
</figure>

<p>Unbelievable. I think this sort of developer irresponsibility really takes the cake for privacy failure. I didn't even bother accessing the phone's content from my computer after this low-hanging fruit.</p>
<h2>Appendix: What can you do to protect yourself?</h2>
<p>Needless to say, I wiped all the aforementioned storage devices clean and encrypted them for my own use. I want no data from these previous owners, and the less I know about them the better.</p>
<p>First off: <strong>encryption</strong> is a must if privacy is desired. The two go hand-in-hand, and there's no realistic expectation of privacy and security without encryption. And the best part: it's <em>easy to do</em>! If you install <code>gnome-disk-utility</code> in any modern distro (including <a href="https://tilde.town/~kzimmermann/articles/alpine_linux_desktop.html">Alpine</a>), you get a nice, easy GUI that can format <em>and</em> encrypt the volume as easily as going through a wizard.</p>
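<p>For those who prefer a terminal, the same format-and-encrypt flow can be sketched with <code>cryptsetup</code>, the tool behind LUKS volumes. This is only an illustrative sketch: <code>/dev/sdX</code> and the mapper name <code>mydrive</code> are placeholders, and the first command <em>erases the device</em>:</p>
<pre><code># WARNING: destroys everything on /dev/sdX - double-check the device name!
sudo cryptsetup luksFormat /dev/sdX      # prompts for a passphrase

# unlock it under a name of your choosing and put a filesystem inside
sudo cryptsetup open /dev/sdX mydrive
sudo mkfs.ext4 /dev/mapper/mydrive

# mount, copy files, then lock it back up
sudo mount /dev/mapper/mydrive /mnt
sudo umount /mnt
sudo cryptsetup close mydrive
</code></pre>
<p>Graphical tools like Gnome Disks do essentially the same thing behind the wizard.</p>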
<p>There's only one downside: the volume becomes readable only on Linux. Though I consider that a feature rather than a bug.</p>
<p>Disk encryption not only ensures the confidentiality of the data in transport, it also ensures that if you lose the medium, or want to discard it, anything it holds is essentially unreadable to anyone without your password. If you haven't encrypted it, however, you should <strong>securely wipe</strong> any data that the medium holds before discarding it. And no, pressing Ctrl+A and Del in a graphical file manager doesn't count. </p>
<p>There are multiple ways of accomplishing this on Linux, <code>dd if=/dev/zero of=/dev/sdX bs=1M</code> (overwriting the whole device with zeros) being one of them, and even the simple <code>mkfs.vfat /dev/sdX</code> usually suffices. Most of the time, there's no need to <a href="https://dban.org">DBAN</a> it completely. A fair warning, though: due to the way that SSDs and some flash media behave to "reduce wear," <em>even this might not delete the contents of the drive 100% securely</em>. A very motivated adversary could still use forensic recovery techniques to analyze the remnants. Most casual adversaries, though, almost all dumpster divers included, will consider the data lost.</p>
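<p>To see what that zero-fill actually accomplishes, here's a harmless sketch using a regular file as a stand-in for <code>/dev/sdX</code> (the filename is made up for illustration):</p>
<pre><code># simulate a small "drive" as a 1MB regular file
printf 'secret business plan' &gt; fake_drive.img
truncate -s 1M fake_drive.img

# overwrite the whole "device" with zeros, as dd would on a real drive
dd if=/dev/zero of=fake_drive.img bs=64K count=16 conv=notrunc status=none

# the old contents are unrecoverable by simple inspection
if grep -q 'secret' fake_drive.img; then echo "still there"; else echo "wiped"; fi
</code></pre>
<p>On a real device you'd point <code>of=</code> at the whole block device instead, and omit <code>count</code> so dd runs until the drive is full.</p>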
<p>Finally, another lesson remains: <strong>use software that respects you</strong>. Would you live in a house that randomly opened its window blinds without your consent to let outsiders peek in? Or own a car that unlocked itself for any stranger to get in and see what's inside? Because that's the sort of vibe I get from the OSes running on the phone and the DS in this post. Using a <a href="https://tilde.town/~kzimmermann/articles/dontlikeitcreateit.html">Free Software</a> operating system can be a good, if not the best, way to ensure that your privacy is protected. </p>
<p>Digital self-defense is a must for everyone. The idea that security can be completely outsourced is carelessness at best, yet, as my dumpster diving experience has shown, it unfortunately remains widespread among computer users.</p>
<hr />
<p>Have you ever uncovered PII from some unsuspecting piece of technology deemed lost forever? What would you recommend that people do before discarding their Tech? Let me know on <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
<hr />
<p><strong>Note:</strong> kzimmermann is a strong personal privacy advocate and will <em>never</em> disclose anyone's personal information in a vulnerable manner for any reason whatsoever. All the examples shown here were carefully and responsibly redacted, and the original data destroyed with no possibility of recovery. All the storage media is now owned by me and encrypted with LUKS, with no possibility of me accessing the original owners' files.</p>
<hr />
<p>This post is number #19 of my <a href="https://100daystooffload.com/">#100DaysToOffload</a> project. Follow my progress through <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
            </div>
        </content>
    </entry>

    <entry>
        <title>Journey through enabling an EPSON MFD to print and scan via network on Linux</title>
        <link href="https://tilde.town/~kzimmermann/articles/printing_scanning_epson_linux.html" />
        <updated>2021-12-29T03:54:58.834856Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>Journey through enabling an EPSON MFD to print and scan via network on Linux</h1>
<p>Once again, another note-to-self sort of thing that will thankfully be useful next time I have to work with an EPSON multi-function printer in some household. This evening's mission once again <a href="https://tilde.town/~kzimmermann/updates/20211012_1305.html">involved printers</a>, and I'm actually finding them a much less painful experience than I previously thought. </p>
<p>So anyway, this is how you make scanning and printing work in Linux with them (Model in my case: EPSON L375).</p>
<h2>Printing</h2>
<p>Printing is fairly straightforward: even though CUPS does not ship the PPDs out of the box, they can be installed via EPSON's open source printer drivers for Linux. Your distribution should have a package named <code>epson-escpr</code> or something similar; install it. It contains a generic printing driver that many EPSON printers use. </p>
<p>In Ubuntu and Debian the package is called <code>printer-driver-escpr</code>, in Arch it's available <a href="https://tilde.town/~kzimmermann/articles/aur_made_easy.html">to be built from the AUR</a> under <code>epson-inkjet-printer-escpr</code>. If your distro doesn't offer it, the source for compilation can still be <a href="http://download.ebz.epson.net/dsc/search/01/search/?OSC=LX">downloaded from EPSON's website</a>.</p>
<p>After this step, CUPS should be able to find your printer's drivers, listing a wide variety of other EPSON models as well. Half of the mission is complete.</p>
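<p>If you'd like to confirm from a terminal that CUPS picked up the new drivers, here's a quick sketch (depending on your setup, <code>lpinfo</code> may require root):</p>
<pre><code># list the driver models CUPS now knows about, filtered for EPSON
lpinfo -m | grep -i epson

# show the configured print queues and the default destination
lpstat -p -d
</code></pre>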
<h2>Network Scanning via xsane</h2>
<p>Scanning, as usual, was slightly more of a pain in the ass: the printer would not be "autodiscovered" via <code>xsane</code> and friends, and feeding it the IP address manually was also fruitless. So here's what worked: EPSON also offers a scanning driver in a package called <strong>iscan</strong>. Use your package manager to find the appropriate package, or if it's not available, <a href="http://support.epson.net/linux/en/iscan_c.html">download and compile from source</a>.</p>
<p>Once the package is installed, you must do some additional configuration of the <code>sane</code> files so that the scanner is autodetected on the network when you open your scanning application. The package also installs the file <code>epkowa.conf</code>, which contains the definitions for EPSON scanners, and this has to be included in the main config file. Edit the file <code>/etc/sane.d/dll.conf</code> and add the following line:</p>
<pre><code>epkowa
</code></pre>
<p>Then, edit <code>/etc/sane.d/epkowa.conf</code> to add the following definition:</p>
<pre><code>net XXX.XXX.XXX.XXX # this must be the IP address of the printer
</code></pre>
<p>Now open xsane (that's right, no need to restart some service!) and watch your scanner be autodetected straight from the beginning. Rejoice!</p>
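<p>You can also verify the detection without a GUI via SANE's command-line tool. A sketch; the exact device string is an assumption based on how the epkowa backend typically names networked scanners, so check the output of the first command:</p>
<pre><code># list all scanners SANE can see; the EPSON should now show up
scanimage -L

# grab a quick test scan (PNM output by default)
scanimage -d 'epkowa:net:192.168.XXX.XXX' &gt; test_scan.pnm
</code></pre>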
<p><strong>Note:</strong> if you can't find the IP address of the printer (my L375 model, for example, does not have a display to show or configure stuff), you can use <a href="https://nmap.org">nmap</a> to try to locate it along your network. No need to go full Elliot Alderson/hackerman here, a simple query like this should do the job:</p>
<pre><code>sudo nmap -sS 192.168.XXX.1-255 &gt; network.txt 
# change XXX accordingly to match your subnet; the -sS SYN scan needs root
</code></pre>
<p>In my case, ports 515 (printer) and 9100 (jetdirect) were open. For other models the port numbers may differ, but the open ports are still the giveaway.</p>
<h2>Further reading:</h2>
<p>Credit where it's due: <a href="https://srm.gr/scanning-over-network-linux-xsane-and-wifi-or-ethernet-scanner-epson-l386/">https://srm.gr/scanning-over-network-linux-xsane-and-wifi-or-ethernet-scanner-epson-l386/</a>. Thank you, <a href="https://srm.gr">Bill Seremetis</a>, for this! Hope other people can find your site.</p>
<hr />
<p>This post is number #29 of my <a href="https://100daystooffload.com/">#100DaysToOffload</a> project. Follow my progress through <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
            </div>
        </content>
    </entry>

    <entry>
        <title>Project 128: how much space do you *really* need?</title>
        <link href="https://tilde.town/~kzimmermann/articles/project_128.html" />
        <updated>2021-02-10T10:53:12.170046Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>Project 128: how much space do you <em>really</em> need?</h1>
<p>I've started a personal project that will revolutionize the way I treat my files and start to rethink my whole backup and storage strategies in my digital life. I'm calling it <strong>Project 128</strong>, and the goal of it is to try to <strong>fit every digital file that I own under the space of 128 GB</strong> (not including the backups). </p>
<p>The 128 GB limit is somewhat arbitrary, but I deliberately chose a lower boundary than most commercial HDDs nowadays because I wanted to force myself to think more about what files I choose to keep and what are their importance to me. Besides, buying 128 GB of storage as a USB drive or MicroSD card is cheap enough nowadays, so that helps.</p>
<p>So why, in a world where new data is constantly being produced faster than anyone (and anything) can process it, and storage keeps getting progressively cheaper, am I artificially lowering my storage limit to such a ludicrously small number? </p>
<p>My answer is because <em>most of the data we own today is not important</em>, rather just nice to have, and our real crucial data can be made <em>much smaller than we think is possible</em>.</p>
<p>I had already written about my bone to pick with <a href="https://tilde.town/~kzimmermann/articles/digital_minimalism.html">file bloat</a>, that is, the fact that just like software, user files have been growing larger unnecessarily without adding value. High-definition pictures are 5MB each - does that much definition really make a difference at all? A full-length movie is now 2 to 3GB a piece - does "quality" really justify such a humongous size? The examples go on, but that's not the point of this article.</p>
<p>Besides, as a positive side effect, having everything backed up neatly in a small space makes trying out and transitioning between Linux distributions (distrohopping) much easier. Think of it as the digital equivalent of travelling light between hotels on a road trip.</p>
<p>My approach to fit everything under 128GB (or less, if possible!) will follow three major steps:</p>
<ol>
<li>Defining what's mandatory and what's optional</li>
<li>Reducing the size of everything as far as possible</li>
<li>Storing and backing up</li>
</ol>
<p>These are better explained below:</p>
<h2>Define what's important</h2>
<p>Perhaps the most important step in this project, separating what absolutely must be kept and stored from what's optional or merely nice to have is the foundation of a sane storage and backup policy. Of course, given the capacity, we'd rather store everything we own, but the amount of stuff that <em>really matters</em> in the end might actually be significantly smaller.</p>
<p>Stop and think for a moment: if a disaster were to happen and you could only save a small number of files from your hard drive, which ones would you save? Chances are it's much, much less than the total stuff you own. Chances also are that it's not the stuff you interact with every day either, but rather files that are pretty much immutable and that have a value of specific importance to you. </p>
<p>Maybe it's your tax declaration statements from five or ten years ago that you must retain for bookkeeping, or the pictures you took on vacation in a beautiful place last year. Or the pictures of loved ones at a family reunion. Or your PGP and SSH keys. Each of these has a specific reason to be important to you.</p>
<p>On the other hand, the data you interact with on a daily basis is likely changing all the time or is likely to have at least a few different sources, like a download from the internet. Code, images, web pages, documents, nowadays with so much content being produced and shared around, you're most likely to see the same thing available from many different sources. Which means that if you lose your copy, it wouldn't be hard to get it back again.</p>
<p>In my personal implementation of Project 128, I'll be using the following criteria to evaluate what's important among my files, evaluated as an "OR" condition:</p>
<ul>
<li>The file has personal significance to me, be it emotional value (ex: Pictures, work I developed in the past), historical value (ex: tax declaration, official certificates, memoranda) or other specific personal value.</li>
<li>The file has security implications tied to it (ex: SSH and PGP keys, password databases, etc)</li>
<li>The file has personal information that can ID me. Registration forms, Government documentation, documents from the city's service providers, etc all qualify.</li>
<li>I cannot realistically obtain the file in any other way except from previous backups (i.e. can't be downloaded from the internet)</li>
</ul>
<p>Although these guidelines are rather strict, they do make it clear that most of the stuff you deal with daily is not irreplaceable, and if you lose it, the impact is minimal. Not so with the stuff that qualifies under the above, so that's the core of Project 128.</p>
<p>Following these guidelines, I estimate there's between 20 and 30GB of these core files among my stuff, so that still leaves me plenty of room for the rest of the "non-essential" stuff. Unfortunately, that non-essential stuff is much, much larger.</p>
<h2>Reduce the size when possible</h2>
<p>Theoretically, anything non-essential can somehow be obtained from the internet again, but in practice it's much better and safer to have it locally. Since we've already set the important stuff apart, it's harder to grade the non-essential stuff in terms of importance, so another strategy is necessary.</p>
<p>Luckily, almost all sorts of media files can be reduced away without losing value. This is great news, since media likely is the largest space hog in your hard drive. </p>
<p>The pictures and video you've been taking in bulk since you got your first smartphone are likely much larger in resolution than they need to be. Your videos are likely way larger than they need to be (<em>b-but it's 4K!</em>), and a big reduction will not affect the experience if all you do is watch them on your laptop or phone. Music is usually small enough in size, but the Opus format (typically in an .ogg or .opus container) compresses noticeably better than the popular MP3.</p>
<p>You can reduce images by using the <code>convert</code> command that comes in the <a href="https://imagemagick.org">ImageMagick</a> program. Reducing an image with it is as simple as running the following command:</p>
<pre><code>convert original_image.jpg -resize 50% small_image.jpg
</code></pre>
<p>Where <code>small_image.jpg</code> would now have half of the <code>original_image.jpg</code>'s width and height - essentially reducing the resolution to <em>a quarter</em> of the original size.</p>
<p>This resolution trick is pretty useful since it means that for every time that you reduce the size of the "sides," you're essentially reducing the final image <em>size</em> by a factor of its square: a 1/2 reduction produces an image with 1/4 of original size, 1/3 reduction 1/9th, and 1/4 reduction reduces the original to 1/16th of the original size. I have reduced pictures from my phone to 1/16th of the size before, and the "loss" in quality is negligible.</p>
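<p>Applied in bulk, this might look like the following sketch, assuming ImageMagick is installed; the reduced copies go into a separate directory so the originals stay untouched until you've checked the results:</p>
<pre><code># write 50%-scaled copies of every JPEG into small/, keeping originals
mkdir -p small
for f in *.jpg; do
    convert "$f" -resize 50% "small/$f"
done

# compare the disk usage before and after
du -sh . small
</code></pre>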
<p>Video can also be reduced in a similar way through the <code>ffmpeg</code> program. The general syntax is:</p>
<pre><code>ffmpeg -i videofile.mp4 -vf scale=&lt;final width&gt;:-1 output.mp4
</code></pre>
<p>This way, ffmpeg resizes the video to the desired final width in pixels, and automatically calculates the height to maintain the aspect ratio. And just like with images, the reduction in size is proportional to the square of the reduction in width. Hence you usually can make your videos much smaller without losing quality significantly.</p>
<p>Unfortunately, I do not know of a way to reduce or compress MP3 audio as simply as described above. I'm OK with that, though, since audio is generally pretty small and my collection of it is not very large. But if you know how to "compress" audio in a similar way, please let me know on Mastodon.</p>
<h2>Storing and backing up everything</h2>
<p>By performing the steps above in reducing the unneeded size of media files, you will have reduced your required storage space in a significant way. My hope is to be able to fit everything under the 128GB mark after performing these reductions accordingly. The next step is to start storing and backing up everything.</p>
<p>There is a simple and easy-to-remember backup strategy that goes by the name of <a href="https://www.acronis.com/en-us/articles/backup-rule/">3-2-1 backup</a>. In summary, it states that you should have at least 3 copies of your data, on 2 different media, with 1 copy kept off-site (physically away from the place where you usually work with the data). It might not be perfect or best suited for enterprise-grade backups, but it's enough for the threat model described here.</p>
<p>For the local backups in different media this is an easy and cheap task given the low prices of "small" storage lately. I can buy a 128GB SD card or USB pendrive for quite cheap, with the other media being a 320GB external HDD or even a smaller 160GB external SSD - both still much cheaper than their TB-large counterparts.</p>
<p>My personal requirement is that the storage medium itself must be encrypted, and thankfully that's easy to do through graphical applications that manage storage volumes. The Gnome Disk Utility, for example, makes the entire process of formatting and encrypting external storage very easy. There's also GParted and other programs that can do that as well.</p>
<p>The encryption requirement not only protects it from unwanted access, but also is the best option in case the drive gets corrupted and I have to discard it safely without the risk of <a href="https://netcentrics.com/ghosts-users-past-recovering-data-discarded-resold-salvaged-stolen-hard-drives/">anyone accessing my files</a>. Once all my media are encrypted, I can start backing up everything.</p>
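<p>The copy step itself can be as simple as an <code>rsync</code> one-liner. A sketch, assuming the encrypted drive is mounted at <code>/mnt/backup</code> and the files live in <code>~/project128</code> (both hypothetical paths):</p>
<pre><code># preview what would change without touching anything (-n = dry run)
rsync -an --delete ~/project128/ /mnt/backup/project128/

# mirror the files onto the encrypted drive; -a preserves permissions
# and timestamps, --delete removes files gone from the source
rsync -a --delete ~/project128/ /mnt/backup/project128/
</code></pre>
<p>Running the same command against each medium keeps all three copies identical.</p>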
<p>For the third, off-site copy, things get a little tricky: I can either get another USB drive, back it up regularly and keep it somewhere I have easy access to, like the office, or use the dreaded cloud-based storage. Privacy-conscious users are rightly afraid of storing personal files with 3rd-party providers, but I think there are some mitigation techniques that can still be used.</p>
<p>One option is to encrypt everything <em>before</em> sending it to the dreaded cloud. Encrypt files individually via <code>gpg</code> locally, and only then send these encrypted versions off to untrusted storage. Although this is a good solution to protect the contents of files, it does little to protect the metadata, chiefly the name and ownership of the files. So even if you encrypt your <code>homemade_porn_movie.mp4</code> file as a <code>homemade_porn_movie.mp4.gpg</code> file, an adversary could still infer interesting information from it even though the content is protected.</p>
<p>You could work around this by compressing all your files to be encrypted in one large archive (maybe version it somehow) and encrypt that archive instead, but the granularity and flexibility of accessing that data becomes much lower. This might be a significant trade-off depending on your use case.</p>
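<p>The archive-then-encrypt approach is a short pipeline. A sketch assuming <code>gpg</code> is set up; the archive name and source directory are arbitrary:</p>
<pre><code># bundle everything into one archive, hiding individual filenames
tar czf backup.tar.gz ~/important_files/

# encrypt it symmetrically (gpg prompts for a passphrase),
# then delete the plaintext archive
gpg --symmetric --cipher-algo AES256 backup.tar.gz
rm backup.tar.gz

# only backup.tar.gz.gpg goes to the cloud; restore later with:
#   gpg --decrypt backup.tar.gz.gpg | tar xzf -
</code></pre>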
<p>Another option is to use a storage service that encrypts the media in a way that only you can decrypt, usually via a password. This is the model that some privacy-conscious email providers like Protonmail, Confidesk or Tutanota advertise for their services, and it's followed by at least one storage provider: <a href="https://mega.nz">MEGA</a>. If you trust MEGA's promise of keeping your content encrypted, you don't even need to encrypt the content before sending it off - the storage medium is already encrypted. And you get 15GB free.</p>
<p>However, as with all "cloud" services, if you don't own the machines where the data is stored, you don't control it. And there is no technical way to prevent the host from MitM-ing the password or even adding a backdoor (as was done to <a href="https://www.theregister.com/2020/12/08/tutanota_backdoor_court_order/">Tutanota in 2020</a>) to silently decrypt your content. You can accept this risk, but you cannot ignore it.</p>
<h2>Conclusion</h2>
<p>Fitting your entire life's worth of information within 128 GB of space is a revolutionary project, from re-organizing your files and practicing a sort of <a href="https://tilde.town/~kzimmermann/articles/digital_minimalism.html">digital minimalism</a> to re-thinking your backup strategy.</p>
<p>Is reducing the amount of stuff you own always the right way to go? I'm not sure, but I'm pretty sure it will help me decide better what's important, and derive a more conscious way to store and back up my stuff. I should write back in a couple of months reporting my findings, but in the meantime, I'm pretty excited about trying this out.</p>
<hr />
<p>What do you think about reducing everything that you own to just 128 GB of space? Could you go lower than that? How would you perform your backups in such case? Let me know in <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
<hr />
<p>This post is number #4 of my <a href="https://100daystooffload.com/">#100DaysToOffload</a> project. Follow my progress through <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
            </div>
        </content>
    </entry>

    <entry>
        <title>Protonmail isn't (and has never been) a silver bullet</title>
        <link href="https://tilde.town/~kzimmermann/articles/protonmail_is_not_silver_bullet.html" />
        <updated>2021-08-04T01:06:18.963136Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>Protonmail isn't (and has never been) a silver bullet</h1>
<p>So apparently the "bulletproof," privacy-protecting email provider Protonmail has been hit with a legal warrant, and was forced to submit some user data to Swiss law enforcement, which in turn was handed over to American law enforcement after <a href="https://www.washingtontimes.com/news/2021/jul/28/thomas-patrick-connally-56-charged-in-federal-cour/">harassing and death threat emails were sent from their servers</a>.</p>
<p>Could it be that the once bulletproof service has broken bad and turned its back on its - apparently <a href="https://techcrunch.com/2021/05/19/__trashed-13/">50 million</a> - users? Not at all. They are simply <a href="https://protonmail.com/law-enforcement">operating as usual</a>, complying with the jurisdiction from which they operate. </p>
<p>The problem here is not that Protonmail failed, but rather that a huge number of people still think it's a one-stop shop for all their surveillance problems, a privacy silver bullet. Be it because of blind advertisement by the privacy-conscious community since its inception or <a href="https://www.wired.com/2015/08/peek-inside-mr-robots-toolbox/">the way it was showcased in Mr Robot</a>, many still misunderstand how its encryption works, and how - essentially - you're outsourcing away your privacy by simply trusting yet another middleman.</p>
<p>This is not the first time that this sort of "encryption-busting" happened either. In December 2020, Tutanota, a German provider of a similar encrypted-storage email service, was also <a href="https://www.theregister.com/2020/12/08/tutanota_backdoor_court_order/">forced to backdoor the encryption of one of its users</a> after being served a court order. Once again, the misconception of unbreakable encryption and perfect privacy by some third party provider was proved wrong. </p>
<p>There is no way to have privacy in encryption outsourced to someone else. Want real privacy? Use GPG or some other form of end-to-end encryption, where plaintext data is never made available, unless explicitly decrypted by either endpoint of a conversation.</p>
<p><img alt="how E2EE works" src="https://cdn.macrumors.com/article-new/2016/03/encrypted-protected-explanation.jpg" /></p>
<p>But don't just take my word for it: here's what <a href="https://steigerlegal.ch/2021/08/02/protonmail-daten-usa/">Martin Steiger</a>, a renowned Swiss lawyer specialized in Privacy and Data protection laws, has written about the ordeal (machine-translated from German):</p>
<blockquote>
<p>For security authorities in Switzerland, <strong>ProtonMail is a godsend</strong>, because many users wrongly believe that their data is actually protected by the "strict Swiss data protection laws" with ProtonMail. They do not know that the applicable data protection act (DSG) in Switzerland does not guarantee effective data protection and that criminal proceedings and surveillance measures are not covered by the DSG at all</p>
</blockquote>
<p>Be wary of any other online service provider that advertises itself with this model of encryption as the sales pitch. This includes not only email providers (Tutanota, Confidesk, etc), but also cloud storage providers like Kim Dotcom's <a href="https://mega.nz">MEGA</a>. Anything that isn't encrypted or decrypted locally has the potential for a backdoor in transit to render the whole encryption moot. Actually even local encryption carries its risks, but at least the chances are much, much lower, especially if the host system is kept up to date.</p>
<hr />
<p>What's your take on the security and privacy of services that follow the Protonmail model? Do you think it's an appropriate substitute for E2EE done locally? Let me know on <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
<hr />
<p>This post is number #25 of my <a href="https://100daystooffload.com/">#100DaysToOffload</a> project. Follow my progress through <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
            </div>
        </content>
    </entry>

    <entry>
        <title>Rediscovering Puppy Linux as Raspup on the Raspberry Pi</title>
        <link href="https://tilde.town/~kzimmermann/articles/rediscovering_puppy_linux_raspup.html" />
        <updated>2021-09-01T09:14:52.510046Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>Rediscovering Puppy Linux as Raspup on the Raspberry Pi</h1>
<p>Having recently decided to restart my quest to use more of my Raspberry Pi, I found myself on the distro-hopping road again, looking for the perfect one that could make a nice compact desktop out of it. The marketing hype will have you believe it's a tiny but complete, full-fledged PC that (starting with the Model 4 and its 4GB+ RAM) will kick the butt of even big desktops, at a fraction of the cost and the power consumption. </p>
<p>I have found these claims to be very hard to materialize, starting with the question of the OS itself: what distribution can truly maximize the limited resources of this borderline embedded computer so that one can add peripherals and have the machine behave like a desktop? This is where the distrohopping trial-and-error magic comes into play, with all and any fun that this might include. What could be lightweight enough, flexible enough and disk-preserving enough to make optimal use of the Raspberry Pi?</p>
<p>Looking to answer this question the best possible way, I found myself again trying out <a href="https://puppylinux.com/">Puppy Linux</a>.</p>
<p>I'm no stranger to Puppy Linux, because it was one of the first distros I used after <a href="https://tilde.town/~kzimmermann/articles/first_starting_linux.html">discovering the world of Linux</a>. At the time, it really blew my mind that one could download a CD image from a website, copy it onto a USB stick and voila, an <em>entire Linux desktop was yours to command</em>, without even having to so much as touch the computer's hard drive. Oh, and it was fast - <em>insanely</em> fast. Who knew that a complete RAMdisk could make even a netbook fly? </p>
<p>But even as magical and transforming as this discovery was, I had never truly stuck with Puppy Linux. Besides playtime experimentation or as a "polite" way of using other people's computers, it always felt too weird to become a daily driver. And with other "normal" distros also being quick and easy to use, but with a much more familiar approach, why even bother?</p>
<p>Last week, however, this entire concept changed once again as I dusted off my inactive Raspberry Pi 4 and thought: what if I ran Puppy 24/7 on this guy? This post outlines my discoveries and how, despite its problems and quirks, I found it to be a very viable candidate for desktop use on the Raspberry Pi.</p>
<h2>The pros and cons of Puppy on a Raspberry Pi</h2>
<p>Puppy belongs to the category of Linux distros intended to be used as a live medium, as well as those intended for old computers or machines with limited resources. Widely advertised in the early 2010s as a way to breathe new life into your old machines, it basically became synonymous with old PCs, although it can very well keep up with contemporary and powerful machines.</p>
<p>Unlike other Live Medium distros like Knoppix or Kali that store and load all their programs from the USB stick like a disk, Puppy loads everything it needs into RAM (using a compressed filesystem called <a href="https://en.wikipedia.org/wiki/SquashFS">squashfs</a>) just once, so everything needed in the session is already loaded and ready to use. This allows for a very fast and snappy desktop experience, albeit at a small cost in RAM usage.</p>
<p>Puppy also differs from other Linux distros in the sense that it's not (at least anymore) built from the ground up or based on another distro, but rather created by <em>adapting and converting</em> other existing distributions into the Puppy way, via a tool called <a href="https://puppylinux.com/woof-ce.html">Woof</a>. This way, you can have "puppies" built from Ubuntu, Slackware and other base distributions. And conveniently enough, there is also a flavor built specifically for the Raspberry Pi, called <a href="http://raspup.eezy.xyz/">Raspup</a>.</p>
<p>Although not at all the first distro on my mind when it came to using the Pi as a desktop, my attention eventually shifted to Puppy because of the following points:</p>
<ul>
<li>Due to the "load-from-RAM" operations, Puppy greatly reduces the wear on the fragile SD cards used by the Pi. This will save costs and a lot of headache in the long run, as well as work around the pretty bad I/O speed on some SD cards.</li>
<li>Puppy comes preloaded with quite a lot of software by default, so time taken to do additional config (as well as the limited space in the SD card) should be minimal.</li>
<li>Unlike other minimal live systems like <a href="https://tilde.town/~kzimmermann/articles/alpine_linux_desktop.html">Alpine</a>, Puppy is a <em>desktop-oriented distribution</em>, designed in every aspect to be used as one by default. This is evident by the several graphical config tools created by the developers that are simply frontends to shell scripts that do the work in the background.</li>
</ul>
<p>Puppy is far from perfect, however, and still has some warts that make me scratch my head, including:</p>
<ul>
<li>You always run as <em>root</em>. This goes against every good security practice I've ever learned in Linux, and yet here we are, running an <em>entire desktop and all its programs</em> as the almighty user, risking a full-system compromise if any running program gets hacked. This includes some of the system daemons as well. I'm not sure if this is done to save space or what, but it does raise a brow with most users. There is a "normal" user account named spot, but for some reason it isn't used by default, only for internet-facing things like services or browsers.</li>
<li>Puppy requires relatively more RAM than other minimal distros. Much like Alpine's diskless mode, if you install or download something new, it goes to RAM and not disk, via the squashfs. At an average of 600~800MB usage after configuration, it's simply too heavy for early Pis like the Model B with its 512 MB RAM. Not a problem for the Raspberry Pi 4, though.</li>
<li>As far as I know, there is no way to completely upgrade the system (i.e. kernel and all) except by downloading a new release of Puppy and burning it to the live medium again. Point updates, or even individual software updates (like a browser or file manager), can't be done despite a package manager being available.</li>
<li>Sometimes, having everything available as graphical programs or scripts masks the command-line tools and interfaces I'd rather use on my computers. Installing software through a GUI is especially painful in this regard.</li>
</ul>
<p>Despite its problems, I still went ahead and challenged myself to live on Puppy and the Pi for as much time as possible - and survived. Let's see some highlights from this adventure, as well as some nice lessons learned.</p>
<h2>Setting up persistence</h2>
<p>First and foremost, if you are going to use Puppy Linux seriously (that is, for more than just one live session), you will want to set up <em>persistence</em> so you don't have to reconfigure your system every time you shut it down. And if we're going to daily-drive it in this Raspberry Pi, this is a must.</p>
<p>Thankfully, doing this in Raspup is easy, as this feature was supported in the original Puppy Linux family from way early. To do so, request a shutdown or reboot immediately after the OS has booted up. This might sound counterintuitive, but it's actually during the shutdown dialog that you get asked whether or not you want to save the current session. Choose to save the session, pick a name for your session's savefile, and follow the rest of the instructions. </p>
<p>By doing so, a savefile will be created in your SD card and, from now on, it will be available to choose from in the boot menu. Always boot into that savefile from now on, and you will be able to carry over the previous session with you every time. After this, your new session can be saved during its usage without having to reboot by either clicking the Save button on the desktop, or running the command <code>save2flash</code> from the shell. Your changes to the filesystem will be merged with your savefile in the SD card, and at the end of the process you will have saved another snapshot of your desktop.</p>
<p>You can now make backups of your session by simply copying the <code>session_name.sfs</code> file found on the SD card from another computer, preferably encrypting it too, since Puppy's built-in encryption mechanism is self-admittedly not very reliable. If you lose your SD card, you can simply burn another one, copy that <code>.sfs</code> file over, and start from the same point where you left off.</p>
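<p>As a sketch of that backup step: the path, filename and passphrase below are illustrative stand-ins (a temp file plays the role of the real <code>.sfs</code> so the example runs anywhere); interactively, you would just run <code>gpg --symmetric</code> and type a passphrase when prompted:</p>

```shell
# Stand-in for the real savefile copied off the SD card.
SAVEFILE="$(mktemp -d)/session_name.sfs"
echo 'demo session data' > "$SAVEFILE"
# Encrypt a copy symmetrically; --batch/--passphrase are used here only so
# the example runs unattended - normally gpg prompts for the passphrase.
gpg --batch --yes --pinentry-mode loopback --passphrase 'pick-a-strong-one' \
    --symmetric --cipher-algo AES256 -o "$SAVEFILE.gpg" "$SAVEFILE"
ls -l "$SAVEFILE.gpg"
```

<p>The resulting <code>.gpg</code> file is what you archive; decrypting it with the same passphrase gives you back a savefile you can drop onto a freshly burned card.</p>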
<p>Just remember that there is no autosaving by default in the persistent session of Puppy. So either edit <code>/etc/eventmanager</code> to set up periodic saves, or run <code>save2flash</code> after every large transaction has taken place.</p>
<h2>Fix the SSL certificates</h2>
<p>As I briefly wrote in a <a href="https://tilde.town/~kzimmermann/updates/20210828_0407.html">previous quip</a>, for some reason all the SSL certificates in Raspup are missing in a default configuration. Their location is filled with symlinks pointing to where they were supposed to be, but since that location is on a partition that does not exist, the certificates are effectively missing.</p>
<p>Good thing that projects like curl <a href="https://curl.haxx.se/ca/cacert.pem">maintain their own curated and updated list of root certificates</a>. Get a copy and save it somewhere under <code>/etc/ssl/certs</code>, then add the following lines to <code>/etc/profile</code> so they get activated upon login:</p>
<pre><code>export SSL_CERT_FILE="/etc/ssl/certs/cacert.pem"
export SSL_CERT_DIR="/etc/ssl/certs/"
</code></pre>
<p>Without doing this, pretty much everything that isn't a browser (for example: mail applications, irssi, CLI apps, etc) will not work when attempting a secure connection, which is extremely annoying given how almost everything depends on HTTPS these days. You need to save the session and reboot to make sure they take effect.</p>
<h2>Package management and additional software</h2>
<p>Now that your system can be backed up and carried forward, you can safely look for and install new software that will be available for your current and future sessions. Even though Puppy comes with a plethora of software covering most use cases, you might still want to install additional stuff that you're already used to.</p>
<p>For instance, I'm not sure anyone likes the default file manager (ROX Filer), and more familiar alternatives like pcmanfm or thunar are available to be installed. Likewise, the default browser (Midori) feels clunky and lacking. This is where you can use the Puppy Package Manager (ppm) to customize your experience. The trick is not to overdo it, since due to the way squashfs works, installing additional software will increase the RAM used as well. This might not be a problem in the Raspberry Pi 4 and its 4GB of RAM, but definitely needs to be considered around the 1GB mark.</p>
<p>I have yet to see how you can install things from the command-line with it, but there's a graphical program that feels similar to the Synaptic Package manager in older versions of Ubuntu. It's a little clunky, but with some effort you can search and install anything that you want from there.</p>
<p>The big exception here, however, is browsers. The project recommends using the included browser install scripts, as they will also configure the browsers to run as the unprivileged user <code>spot</code> (browsing the web as root is a <em>very</em> bad thing to do). These scripts apparently can also update an existing browser that you previously installed, so it's pretty convenient. The scripts are named (not surprisingly):</p>
<pre><code>install_chromium_gui.sh
install_firefox_gui.sh
install_vivaldi_gui.sh
</code></pre>
<p>They don't need to be run from the terminal either, as they basically launch a GUI "wizard" guiding you through the installation.</p>
<h2>Remaining issues</h2>
<p>Everything going pretty well so far? Excellent. Now it's time to deal with the issues.</p>
<p>Even though the experience is pretty polished on the desktop for the amount of resources consumed (i.e. of course you can rice it, but can you rice it within 400MB?), I still can't quite shake the fact that we're running <em>everything</em> as root. I have never heard of anyone in particular having real security problems from this, but still, the feeling hangs around. Good thing we're not running servers with it... right? Coupled with the difficulty of patching or updating individual things short of burning a new image, this can indeed be worrying.</p>
<p>In this regard, perhaps <a href="https://tilde.town/~kzimmermann/articles/alpine_linux_desktop.html">Alpine Linux</a> is a saner alternative, especially considering that it has a very good package manager that works even in diskless mode. You do have to spend the time to get it up and running, though.</p>
<p>I did notice that some stuff specific to Raspup seems to be lagging behind or broken in comparison. One example is Xorg: it seems that anytime you lock the screen with <code>xlock</code>, the session comes back with <em>something</em> missing: sometimes it's the keyboard that stops working (unplug and replug it and it should come back), sometimes applications stop being able to find the Xorg display (which often means you have to reboot to fix it). This doesn't happen in vanilla Puppy (fossapup at this time), however, which makes me wonder whether the base difference (Ubuntu for vanilla, Debian for Raspup) has any influence on it. Or maybe, as a spinoff of the main project, it just hasn't been tested as much.</p>
<p>Finally, there's the annoyance that, due to the Raspberry Pi using a proprietary Broadcom video driver, you cannot use Redshift to ease your eyes in the evening. Some hacky solutions exist, but ultimately the consensus is that this is still not possible in 2021. This, however, is not specific to Puppy Linux.</p>
<h2>Conclusion</h2>
<figure>
<img src="/~kzimmermann/images/puppy_riced_mini.png" alt="a screenshot of raspup mildly riced" />
<figcaption>
Raspup 8.2 running on my Raspberry Pi 4! Too bad it doesn't recognize the distro and just puts the generic Tux logo.
</figcaption>
</figure>

<p>Puppy Linux definitely still has its place alongside the heavy hitters today, especially considering the market of old or less powerful machines. Raspup is a good way to get the Puppy experience on your Raspberry Pi and, if you have more than 1GB of RAM, a full desktop complete with office and browsing is a reality with it.</p>
<p>As a daily driver, however, it would probably require above-average patience and comprehension to accommodate its quirks. Namely, you have to learn to do things the Puppy way, try to make more use of the built-in tools and programs, and be ready to accept that some things break regardless - especially if you're using Raspup instead of mainline Puppy Linux. And be aware that you're <em>always</em> root down there.</p>
<p>With that said, I will be keeping an eye out for Puppy and include it alongside my daily carry with a persistent session. Perhaps when I revisit it further down the road it will have matured into a nice daily-driver, especially for those with SD cards!</p>
<hr />
<p>Have you ever seriously used Puppy Linux (as in, more than just one live session)? How was your experience? What OS do you use in your Raspberry Pi as a desktop? Let me know on <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
<hr />
<p>This post is number #27 of my <a href="https://100daystooffload.com/">#100DaysToOffload</a> project. Follow my progress through <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
            </div>
        </content>
    </entry>

    <entry>
        <title>Daily-driving Debian Sid (and Devuan too)</title>
        <link href="https://tilde.town/~kzimmermann/articles/running_debian_sid.html" />
        <updated>2023-09-26T21:19:06.273894Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>Daily-driving Debian Sid (and Devuan too)</h1>
<p>A few months ago, my interest was rekindled in some Debian-based distributions that I had tried way back, a couple of years after I <a href="https://tilde.town/~kzimmermann/articles/first_starting_linux.html">first started with Linux</a>. They were rolling-release distros that sourced from Debian's unstable branch "Sid" but had some "training wheels" in place to make them more usable and less prone to breaking: <a href="http://www.aptosid.com/">AptoSid</a> (now defunct) and <a href="https://siduction.org/">Siduction</a> (which basically branched off AptoSid).</p>
<p>Instead, I ended up doing what until recently was the unthinkable: going with <em>plain Debian Sid</em> (also <a href="https://www.devuan.org">Devuan Ceres</a> without SystemD) as my main OS, despite all "warnings" that it would be unstable, breaking often or, to some, just plain unusable after enough time. And yet, here I am after months of continuous usage and I can say: Sid is <em>very</em> usable after all.</p>
<p>This post will outline some of the things that I learned in daily-driving Debian unstable and how you too can get started if you have a little more experience in Linux.</p>
<h2>Background to the task</h2>
<p>Both of the Sid-based distros have a special place in my heart because they gave me what was basically my first taste of the Arch Linux rolling-release model in the familiar environment of Debian. In my early distrohopping days, it was the bomb. I'd run <code>apt-get dist-upgrade</code> every few hours and bam, fresh packages would come to my computer. Eventually, I grew out of it and went to Debian Stable for a few years.</p>
<p>Fast forward to now: if it worked 10 years ago, I thought, why wouldn't it work again, right? You know the drill. Go to the website, find the download media, burn the ISO to a USB stick, boot and let the magic happen. Except that, for some esoteric reason, this didn't work this time. Shucks. I vaguely remembered that Aptosid's instructions explicitly called for using low "burning" speeds due to compression issues, but couldn't understand why it wouldn't work this time. </p>
<p>At that point, though, it didn't matter either: someone on fedi had already posed the question: "why not plain Sid?" And it was a good question indeed - what did such distros add to Sid that was so important you wouldn't find it in, say, plain Debian or Ubuntu? Touché. Before I could be bothered to look up Siduction's forums, there I was looking for Debian Sid instead. I mean, at that point I already had <a href="https://kzimmermann.0x.no/updates/20230131_1415.html">more than a decade using Linux</a> behind me, and if something went wrong I could probably debug it.</p>
<p>And so, before I knew it, there I was, going to download the ISO from the Debian website. But where was it?</p>
<h2>Lesson 1: you don't "install" Sid, you update to it.</h2>
<p>Turns out you usually don't "install" Debian unstable in the traditional way. There is a common issue in rolling-release distributions concerning installation media: how do you make a snapshot of something that is always changing? Too fresh a snapshot, and you might ship broken things. Too old, and you might have trouble when you update later.</p>
<p>Arch Linux famously makes a few snapshot ISOs available per year to address this in its own way, and it works well. But Debian? Nope. It makes no mention of Sid ISOs anywhere on its site, at least not on the main download pages. So how do we install it?</p>
<p>If you look hard enough, you'll eventually find some obscure page where daily Sid snapshots of the base system are posted, but those are meant mostly for virtual machines and "cloud" instances. And when I say look <em>hard</em>, I mean hard - I had to search a couple of times for <code>debian sid ISO snapshot download</code> or similar to find it. So, clearly, this shouldn't be the preferred way.</p>
<p>What you should do instead is <em>update</em> to Sid, <em>starting from Debian Stable</em> (though I believe Testing should work too). That's right: start off by installing a minimal Debian Stable distribution (currently called <em>Bookworm</em>) with just enough packages for a base install and <a href="https://tilde.town/~kzimmermann/articles/living_in_linux_terminal.html">console</a> access, then upgrade it to Sid. After the update finishes, reboot and then you can install everything else as per your needs.</p>
<p>In other words, edit your <code>/etc/apt/sources.list</code> file and change this:</p>
<pre><code>deb http://ftp.fr.debian.org/debian/ bookworm main non-free-firmware
deb-src http://ftp.fr.debian.org/debian/ bookworm main non-free-firmware
</code></pre>
<p>To point to Sid (or Ceres, if using <a href="https://www.devuan.org/">Devuan</a>):</p>
<pre><code>deb http://ftp.fr.debian.org/debian/ sid main non-free-firmware
deb-src http://ftp.fr.debian.org/debian/ sid main non-free-firmware
</code></pre>
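<p>The suite swap can also be done with a one-liner. The sketch below runs against a temporary copy of the file so it is safe to try anywhere; on the real system you would run the same <code>sed</code> command (as root, after backing the file up) against <code>/etc/apt/sources.list</code> itself:</p>

```shell
# Work on a scratch copy of sources.list for demonstration.
SRC="$(mktemp)"
cat > "$SRC" <<'EOF'
deb http://ftp.fr.debian.org/debian/ bookworm main non-free-firmware
deb-src http://ftp.fr.debian.org/debian/ bookworm main non-free-firmware
EOF
# Swap the suite name; use "ceres" instead of "sid" on Devuan.
sed -i 's/ bookworm / sid /' "$SRC"
cat "$SRC"
```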
<p>One important point: Unstable does <em>not</em> get security fixes the same way stable does, which is "patched" by the security team. Instead, any security patches come from upstream, straight from the original developers. This means that this line must be deleted or commented out:</p>
<pre><code># No security updates for sid
# deb http://security.debian.org/debian-security bookworm-security main non-free-firmware
# deb-src http://security.debian.org/debian-security bookworm-security main non-free-firmware
</code></pre>
<p>And also any lines referring to intermediate updates or backports and other things applicable only to stable. Ex:</p>
<pre><code># bookworm-updates, to get updates before a point release is made;
# see https://www.debian.org/doc/manuals/debian-reference/ch02.en.html#_updates_and_backports
# deb http://ftp.fr.debian.org/debian/ bookworm-updates main non-free-firmware
# deb-src http://ftp.fr.debian.org/debian/ bookworm-updates main non-free-firmware
</code></pre>
<p>Then reload APT's cache and update your packages:</p>
<pre><code># apt update
# apt upgrade
</code></pre>
<p>At the end of the process reboot, and congratulations: you're running Debian Unstable!</p>
<p><img alt="matrix &quot;whoa&quot; scene" src="https://i.pinimg.com/originals/7f/d1/ed/7fd1edc4fe92b9ced76e1f30ce90121c.gif" /></p>
<h2>Lesson #2: update safely!</h2>
<p>So you're now running Debian unstable, you're cutting edge again! What's the next big step? Install your other packages that you use, of course. But before you go ahead with your autoconfig script or something, there's one thing that, while completely optional, I feel is <em>very</em> important to do on Sid: <strong>install <code>apt-listbugs</code></strong>.</p>
<p>What this package provides is an automated installation checker that hooks into APT and searches the Debian bug trackers for known bugs that may break the software you're about to install or update. It's completely automatic once installed, adds less than a second to your update routine, and the best part: it works on both Debian and Devuan!</p>
<p>Thus, install that package first, and then you're home free to customize and update your system at will. Despite the "unstable" name and so many people saying that it breaks often and isn't recommended for <code>n00bs</code>, I frankly have not found it <em>any</em> more difficult to use than, say, Arch Linux, or even Debian stable itself. Packages seldom broke. The package names are the same. It feels the same!</p>
<p>However, during an upgrade you might eventually run into warnings about "grave bugs" posted in the Debian advisory board, like these:</p>
<pre><code>...
After this operation, 497 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Retrieving bug reports... Done
Parsing Found/Fixed information... Done
serious bugs of libasound2 (1.2.9-2 → 1.2.10-1) &lt;Forwarded&gt;
 b1 - #1051901 - libasound2: 1.2.10 breaks ability to play audio using i386 binaries  on amd64 host
Summary:
 libasound2(1 bug)
Are you sure you want to install/upgrade the above packages? [Y/n/?/...]
</code></pre>
<p><code>apt-listbugs</code> has caught a bug and is protecting you from it. This might sound scary, but it's a great opportunity to stop and review what you are about to install. Are these "serious bugs" <em>really</em> going to affect my system? Probably not in my case, as I don't use i386 packages on my machines, so I'll just take the risk and type <code>y</code>. But the bugs could've been worse, affecting more critical system components like GRUB. If that were the case, I'd halt the updates for a few more days or weeks.</p>
<p>I also have another recommendation, which might be a little more controversial among other Sid users: <strong>never full-upgrade</strong>.</p>
<p>If you use Debian (any release) for long enough, you'll notice that after a while <code>apt</code> will start showing some packages as "held back," and those won't be updated with a simple <code>upgrade</code> command. To get those updated, you will need to <code>full-upgrade</code> (or <code>dist-upgrade</code> if using <code>apt-get</code>). On Stable, this is usually the case for kernel updates, and issuing full-upgrade is harmless, just requiring a reboot later.</p>
<p>On Sid, however, all sorts of packages can be held back, from drivers to LibreOffice. And though the temptation to just roll out the updates is great, this is where I tell you to practice patience and not do it. Packages in Sid are held back as a sort of staging area, for when a release is not yet considered 100% safe to use. Forcing them into the mix anyway would be akin to accepting the "serious bugs" case listed before. And yes, you <em>can</em> wait a little for the shiny thing. You're not on the bleeding edge all the time, not even on Arch anyway - yes, even Arch has a <a href="https://joshtronic.com/2019/08/19/how-to-install-packages-from-testing-on-arch-linux/">testing branch</a>, did you know?</p>
<p>So, <strong>TL;DR:</strong> install <code>apt-listbugs</code> before you install any other packages, and resist the urge to force upgrades. While you're at it, it's also good to install <code>apt-listchanges</code> (useful for learning about drastic changes in some of the packages you have) and to ditch <code>apt-get</code> altogether for the recommended plain <code>apt</code> utility - it's much more advanced, anyway.</p>
<h2>Lesson #3: daily driving Debian Sid</h2>
<p>And so, just like that, you too now are a regular user of the feared Debian Unstable distribution. How does <em>that</em> feel, huh?</p>
<p><img alt="Matrix: I know kung fu!" src="https://i.pinimg.com/736x/a6/60/d9/a660d91f3899b3fde06cd92a1895577c.jpg" /></p>
<p>To be honest, my experience with Unstable so far (both Ceres and Sid) has been quite "boring" in terms of news, and the user experience quite the same as Stable. Packages are new and fresh, comparable to other rolling releases, but breakage and problems remain rare - truly a staple of Debian. Still, there were a few specific cases where Unstable showed some differences.</p>
<p>The first is that, right off the bat, most third-party repositories will <em>not</em> work anymore. I'm talking about those repos maintained by volunteers other than the Debian team that you can add manually to your sources.list file. Sadly, it's very likely they won't work, because the vast majority of them only issue packages built against the <em>Stable</em> release, and nothing more. That's a bummer. </p>
<p>This means a lot of convenient things like <a href="https://tilde.town/~kzimmermann/articles/getting_doom_right_alpine_freebsd.html">GZDoom</a> builds and the command-line Mastodon client <a href="https://github.com/RasmusLindroth/tut">tut</a> won't be available to you anymore. You can sometimes still work around this by getting a prebuilt binary (the case for tut - even statically linked!) or, as a last resort, building straight from source. Whatever you can build on Stable should still build on unstable too, using the same libraries and all.</p>
<p>The other important detail is that sometimes updates will still break something - even with <code>apt-listbugs</code> enabled. Wait, seriously? Yup, quite surprising indeed. I have not quite figured out why, but this is my hunch: <code>apt-listbugs</code> checks for bugs when new packages are due to be installed, <em>but not for already installed ones!</em> At least that's the only reason I can think of, since the things that broke had been working before (and got fixed a while later, too).</p>
<p>However, don't let this prospect scare you away from the experience of using Sid / Ceres - in my experience so far, this is a <em>very</em> rare occurrence! The only occasions that this happened to me were the following:</p>
<ul>
<li>Some Xorg driver broke between updates, leaving the window managers unresponsive to mouse and keyboard input. This was scary at first, but could be fixed by downgrading the specific package to the last working version. A week or so later, a new version came out with a fix.</li>
<li>Chromium (which I sadly have to use for work) broke and would segfault immediately after a certain update. My workaround was to use Firefox or a Phone for MS Teams. Again, a few days later it was fixed.</li>
</ul>
<p>So yes, things break - but also they get fixed fast as updates roll in frequently. Isn't that marvellous?</p>
<h2>Conclusion: is Sid for the faint of heart?</h2>
<p><img alt="Sid from Toy Story" src="https://i.pinimg.com/736x/bb/14/9e/bb149e57b6cb502bbda6e633372e217d.jpg" /></p>
<p>It turns out that daily-driving Debian unstable isn't that scary an experience in the end. And that's a great thing - after all, who could possibly develop software on, let alone use, something that breaks all the time? And yes, this is the real value of it: having an up-to-date software base that you can develop on, test, and pass the successful stuff to the next ones in line. We've got the precious expertise of all the people who develop Free Software to thank for the stability of this "unstable" branch.</p>
<p>That said, would I recommend Sid / Ceres to a beginner? To a person not familiar with some basic system maintenance? To my grandparents? Not at all - there are many good alternatives for beginners out there that carry significantly less risk, one of them being <a href="https://tilde.town/~kzimmermann/articles/wrestling_with_locked_machine.html">Linux Mint</a>. Those distributions offer a good balance between stability, ease of use and fairly up-to-date software, and that is good enough.</p>
<p>As a whole, I would not recommend it to anyone who isn't comfortable getting to the command-line and doing some work, either to update or to fix things. This does not mean that you should be <code>1337h4x</code> or <a href="https://tilde.town/~kzimmermann/articles/living_in_linux_terminal.html">live on the terminal</a>, but you should have familiarity with the text environment and be ready to look for more information when things happen. If you have already used another rolling-release distribution, things will be very familiar to you here.</p>
<p>Finally, a note on choosing between Debian and Devuan: in the end, I felt there's very little difference in usability between the two. Both are solid and update about as frequently (and break about as often, too). Devuan offers OpenRC as one of its (many, many) init systems, which is solid and which I was already familiar with from running <a href="https://tilde.town/~kzimmermann/articles/alpine_linux_desktop.html">Alpine Linux</a> and <a href="https://tilde.town/~kzimmermann/articles/saving_artix_install.html">Artix</a> before. The only difference I could really feel was, of all places, in a quirk of the graphical file manager: mounting remote shares over SFTP.</p>
<p>I had always been able to mount remote drives in Debian and its derivatives with PCManFM (my file manager of choice since 2012 or so) straight away, by simply typing <code>sftp://my_ssh_address:/some/mount</code> in the address bar and pressing Enter. This also worked in Artix and some other distros, which led me to assume it was a universal feature. However, this does <em>not</em> work on Devuan - at all. Don't ask; I've tried zillions of combinations and started services like <code>udisks2</code>, but it just wouldn't work. No idea why. At least the <a href="https://kzimmermann.0x.no/updates/20230502_2019.html">universal method of manually mounting the drive</a> still works.</p>
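<p>As a hedged illustration, one way to do that manual mount is with <code>sshfs</code>, assuming the package is installed (the address and mount point below are placeholders, not anything from my actual setup):</p>
<pre><code># Mount a remote directory over SFTP onto a local folder
mkdir -p ~/mnt/remote
sshfs my_ssh_address:/some/mount ~/mnt/remote

# ... browse ~/mnt/remote in any file manager ...

# Unmount when finished
fusermount -u ~/mnt/remote
</code></pre>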
<p>So there you have it - running Debian Sid. It isn't so scary, but you'll probably want to prepare yourself with some experience before diving in.</p>
<hr />
<p>Have you ever run Debian / Devuan unstable on bare metal for a long period of time? How was your experience? Let me know on the <a href="https://fosstodon.org/@kzimmermann">Fediverse!</a></p>
<hr />
<p>This post is number #44 of my <a href="https://100daystooffload.com/">#100DaysToOffload</a> project.</p>
            </div>
        </content>
    </entry>

    <entry>
        <title>Lessons learned from saving my Artix Linux install</title>
        <link href="https://tilde.town/~kzimmermann/articles/saving_artix_install.html" />
        <updated>2021-07-23T03:05:50.475210Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>Lessons learned from saving my Artix Linux install</h1>
<p>A few days ago, my otherwise perfectly smooth experience of running <a href="https://artixlinux.org">Artix Linux</a> ran into a small moment of panic. After running a routine update with Pacman, I noticed that some errors were thrown at the end of the script, but didn't think they would be too much of a problem. I was proven wrong a few hours later, when I restarted my computer and found that it wouldn't boot anymore.</p>
<p>I can't say that I hadn't prepared for this moment - rolling with an Arch-based distribution carries the risk of something breaking at any given time, and I had been making backups thanks to my <a href="https://tilde.town/~kzimmermann/articles/project_128.html">Project 128</a> adventures. However, it's moments like these, when you feel in your skin all that GNU/Linux stability and comfort melting away, that give you a silent panic. <em>Fuck, it's not booting.</em> And rebooting or trying anything else doesn't seem to help at all. What now?</p>
<h2>Mini-disaster recovery starts</h2>
<p>After sleeping on the issue to regain my sanity, I followed through the next day with what I know best: booting a live medium. The question was: which one? I thought about my go-to Swiss-army-knife distribution, Puppy Linux, but some of its functions "feel" a little weird in comparison, especially for critical recovery work on my system. So, as an alternative, I resorted to an older live distro that I knew but hadn't used in a while: <a href="https://www.bunsenlabs.org/">BunsenLabs Linux</a>.</p>
<p>This let me decrypt the drive and check the integrity of my data: everything was still there, uncorrupted. Big first whew! I was still left with the task of actually fixing the install, however, so the mission was only beginning. Searching around for answers to the messages in the logfiles returned mixed results.</p>
<p>Something pointed towards a kernel issue, but I didn't remember updating the kernel recently - when I see that the kernel has been updated, it usually prompts me to reboot the machine. But the "lack of space" error messages I had been seeing did point to an issue with the tiny <code>/boot</code> partition on my machine. And almost everywhere I looked, the common point was that I would never have to do a full reinstall, unless the system had fscked up <em>real</em> bad.</p>
<p>Eventually, I found <a href="https://gaumala.com/posts/2020-11-06-arch-linux-wont-boot-now-what.html">this post on a blog</a> describing the troubleshooting of common Arch Linux problems, which seemed to be the same symptoms that I had. In addition to troubleshooting the kernel issues, the post contains another hint: the init environment.</p>
<p>I'm not 100% sure of the technical details, but it looks like after some kernel updates the post-boot init environment (the <code>initramfs</code>) sometimes doesn't get rebuilt correctly, and this causes the boot to fail even though the kernel itself loads cleanly. Thankfully, the fix isn't hard to implement.</p>
<h2>Folk documentation saves the day</h2>
<p>If you ever find yourself stuck after an Arch Linux kernel upgrade, but notice that your <code>/boot/</code> directory isn't empty and the kernels have been upgraded, here are the steps to fix it:</p>
<ol>
<li>Boot into a live Arch Linux environment (the install ISO image)</li>
<li>Mount your hard drive containing your Arch install (use <code>cryptsetup open DEVICE someidentifier</code> to decrypt it first if you're using full disk encryption like me)</li>
<li><code>chroot</code> into your mounted disk</li>
<li>Once chrooted, run <code>mkinitcpio -p linux</code>.</li>
</ol>
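<p>In shell terms, the steps above look roughly like this - a sketch only, since the device names (<code>/dev/sda1</code>, <code>/dev/sda2</code>) and the mapper name are examples you must adjust to your own partition layout:</p>
<pre><code># 1. From the booted Arch install ISO:
cryptsetup open /dev/sda2 cryptroot   # only if using full disk encryption
mount /dev/mapper/cryptroot /mnt
mount /dev/sda1 /mnt/boot             # the separate /boot partition, if any

# 2. Enter the installed system and rebuild the initramfs:
arch-chroot /mnt
mkinitcpio -p linux

# 3. Leave the chroot and reboot into the (hopefully) fixed install:
exit
reboot
</code></pre>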
<p>I tried to run these from my live BunsenLabs environment, but couldn't get it to work due to mkinitcpio complaining about <code>/proc</code> not being mounted. I guess Arch Linux's <code>arch-chroot</code> works around these by filling out the gaps of the standard chroot environment, so the live medium can really impersonate the full install from the Arch perspective.</p>
<p>At the end of the procedure, the initramfs will be rebuilt for the Linux kernel, and if it's successful, you can reboot into your original install. Crisis averted, no need for a clean reinstall. You can resume work normally. Thanks, random stranger who authored this documentation!</p>
<h2>Good lessons from a bad incident</h2>
<p>Downtime is never a good thing, and when you realize you don't know when you last made a backup, it can be terrifying. Thankfully, this story had a good ending, and I actually learned a couple of good things from this bad incident that left me better prepared for when it happens again. And since we're talking about Disaster Recovery, which I've been reading a little more about lately: the RTO was about one day, with an RPO of about the same. Hardly impressive by enterprise standards, but for a one-man mission it was OK.</p>
<p>The first and foremost good thing was that this gave me the opportunity to test-drive BunsenLabs once again. I had used CrunchBang Linux (the BunsenLabs predecessor) aeons ago, back when <a href="https://tilde.town/~kzimmermann/articles/first_starting_linux.html">I was still discovering Linux</a>, and at the time it was a perfect match for my netbook. Getting to try this revamped edition on another low-spec computer was very satisfying. BunsenLabs is a great live-medium OS, offers great support for system maintenance, and is more familiar to me than Puppy (since it's a direct Debian derivative). I will be carrying it on my emergency USB from now on!</p>
<p>The second good thing was that this incident forced me to take another backup of all my data. Not that I had lost anything, but facing the possibility of a full reinstall, I took a full backup again. And this has really made me think about rsync'ing more frequently, perhaps every few days or every week, and about keeping an eye out for my third, off-site copy.</p>
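<p>For the curious, the kind of rsync run I have in mind is nothing fancy. Here's a sketch using throwaway directories as stand-ins (in real use, the source would be something like <code>$HOME/</code> and the destination a mounted backup drive):</p>
<pre><code># Throwaway directories stand in for the real source and the backup drive
SRC=$(mktemp -d)
DEST=$(mktemp -d)
echo "important notes" &gt; "$SRC/notes.txt"

# -a preserves permissions and timestamps; --delete makes DEST a true
# mirror by removing files that no longer exist in SRC
rsync -a --delete "$SRC/" "$DEST/"

cat "$DEST/notes.txt"
</code></pre>
<p>Note the trailing slash on the source: it tells rsync to copy the directory's contents rather than the directory itself.</p>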
<p>The third good thing was that this incident gave me insight into how to operate on a seemingly doomed Arch install, even when the disk is encrypted, thanks to the tools available in the Arch live ISO. Who knew you could have so much power and flexibility in such a minimalistic environment? I also noticed it runs zsh rather than bash, and it was a pretty nice shell too. I might try it more thoroughly and switch later.</p>
<hr />
<p>So there you have it! A seemingly disastrous situation turned into a good learning opportunity with no side effects on my data. Have you ever run into an unbootable system before? How did you recover your install afterwards? Let me know on <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
<hr />
<p>This post is number #21 of my <a href="https://100daystooffload.com/">#100DaysToOffload</a> project. Follow my progress through <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
            </div>
        </content>
    </entry>

    <entry>
        <title>Saving your offline server with... irssi!</title>
        <link href="https://tilde.town/~kzimmermann/articles/saving_server_irssi.html" />
        <updated>2023-01-21T12:19:30.884342Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>Saving your offline server with... irssi!</h1>
<p>After announcing my <a href="https://tilde.town/~kzimmermann/articles/bringbackblogs-new-years-res.html">Bring Back Blogs! challenge</a>, my self-hosted server went offline. This couldn't have been more ironic, because my announcement included the personal tech goals I intended to pursue, one of them being more self-hosting and less reliance on third-party services. Oops! What a way to start the year, huh? <code>'^_^</code></p>
<p>To aggravate the situation, I was a few thousand kilometers away from my house - and, thus, my server - at the time, with no plans to go back anytime soon to fix any physical issues, had they happened. Uh-oh. Thus, I was left with a disappointing trade-off: either wait it out and suffer weeks of downtime without posting, or use my tilde.town backup mirror to keep up the blogging.</p>
<p>Both turned out to be quite expensive options, because my server also carried other services that I needed, including part of my backups. I had to think of something else to try.</p>
<p>Surprisingly, though, I was back up and running the day after the incident, and perhaps even more surprisingly, managed to do so with quite an obscure solution: the irssi IRC client!</p>
<p>This is more or less how my procedure went.</p>
<h2>Tor didn't work</h2>
<p>So the main line is down and I can't ping or do anything with it. Don't I have backup routes or something?</p>
<p>Oh yes, I do: way before I started hosting services available over the internet, way before I could configure my ISP's modem to allow a NAT passthrough, I had first self-hosted via a hidden service! This was my first trick to access my files and git repos away from home and, even though it took some time, it worked well.</p>
<p>I figured that what had probably happened was that my ISP's router rebooted or suffered a hiccup of sorts, which caused the IP address to change, and thus the domain to point to something nonexistent. (Nope, I never read up on BIND to get it properly configured.) But that would be easy to fix - all I would have to do was:</p>
<ul>
<li>SSH into my server via the Tor hidden service (you can either configure a <code>ProxyCommand</code> line to chain the request, or run <code>torsocks ssh myhiddenserver</code> straight).</li>
<li>Find out my new public IP address (a DuckDuckGo search for 'ip' gives you the answer straight away) </li>
<li>Update the DNS record (free DNS servers can be "pinged" to do this automatically)</li>
<li>And maybe put a cron job to update it every 6 hours or so.</li>
</ul>
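<p>For reference, the <code>ProxyCommand</code> approach from the first step is a one-time entry in <code>~/.ssh/config</code>, along these lines - a sketch that assumes Tor's SOCKS proxy is listening on its default 127.0.0.1:9050 and that the OpenBSD variant of netcat is installed; the host alias and onion address are placeholders:</p>
<pre><code>Host myhiddenserver
    Hostname youronionaddress.onion
    User myuser
    # Tunnel the connection through Tor's local SOCKS5 proxy
    ProxyCommand nc -X 5 -x 127.0.0.1:9050 %h %p
</code></pre>
<p>With that in place, a plain <code>ssh myhiddenserver</code> goes through Tor, no <code>torsocks</code> wrapper needed.</p>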
<p>Easy-peasy - but for some odd reason, I couldn't SSH into the hidden service. It simply appeared to be down altogether. Adding the <code>-vv</code> flags wouldn't reveal any more information about the demise of my server, and just like that, my backup line was dead. No route in sight, apparently. At this moment I began to wonder what could've possibly happened to bring the server down: power failure? Hard drive crash? Some software crash that locked up the system? A DoS attack?</p>
<h2>Irssi to the rescue!</h2>
<p>I was about to give up and live with the downtime when I noticed something almost by chance, in what was probably the last place I'd look: the tmux pane of my terminal that was running irssi, the IRC client. Before we proceed, though, some background:</p>
<p>See, I have this little trick where I make a channel on a large IRC network and have my server join it to "hold the room," in a sort of way, along with all the other devices I use. Having all my devices in this channel gives me a shared clipboard through which I can quickly pass interesting links and other text snippets across them (if something is sensitive enough to require encryption, I can do something like <a href="https://tilde.town/~kzimmermann/updates/20220418_0951.html">this strategy with CyberChef</a>).</p>
<p>Most importantly, however, with my server holding the channel all the time, if a specific device gets disconnected and misses a message, I can simply log into the server's user and replay the message, since the server is the one "guaranteed" to see them all.</p>
<p>So, having glanced at the irssi window, my thoughts suddenly went: "wait a second, could it be that my server is still connected to IRC?" I switched channels to look and, surprisingly, there was my server's nick, standing tall. So there was no internet connection problem after all! It was a much lesser problem in the end.</p>
<p>If the server is up, why can't I access it via Tor? Very good question, which I still can't answer. If only I could find that nick's IP address from its connection to the IRC server! Thankfully, there's a way to do just that. Type the command <code>/whois $nick</code> into the input box and you'll see a lot of the information exposed to the IRC server - and, if you're lucky, the host IP address is included, unless it's masked or obfuscated somehow. Mine wasn't, and sure enough, I found my new public IP address there.</p>
<p>Final test: a raw <code>ssh -i ~/.ssh/mykey user@ipaddress</code> to see if it works - and sure enough, it did. Victory. Next, all I had to do was run the script that updates my DNS records, and my server was back online.</p>
<h2>Lessons learned</h2>
<p>Whew! It's always easier to see in hindsight, but I missed an opportunity to use automation to let the problem solve itself. My DNS updating could have been done automatically by means of a cron job running every 3 hours or so.</p>
<p>You could make the job more frequent to reduce the window of downtime, but some free DNS providers state that updating too often is against their terms of use and could get you booted out. For that, you can write a script that first checks whether the address has changed before actually requesting the update. This curl snippet, for example, will return your public IP address:</p>
<pre><code>curl --silent https://lite.duckduckgo.com/lite?q=ip | awk '/Your IP address is/ { print $5 }'
</code></pre>
<p>Yay for scraping!</p>
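<p>Putting the pieces together, here's a minimal sketch of such a check-before-update script. The <code>LAST_IP_FILE</code> path is a hypothetical name, and the commented-out line stands in for whatever update "ping" URL your DNS provider offers:</p>
<pre><code>#!/bin/sh
# Only bother the DNS provider when the address has actually changed.
LAST_IP_FILE="$HOME/.last_public_ip"

current=$(curl --silent 'https://lite.duckduckgo.com/lite?q=ip' |
    awk '/Your IP address is/ { print $5 }')
last=$(cat "$LAST_IP_FILE" 2&gt;/dev/null)

if [ -n "$current" ] &amp;&amp; [ "$current" != "$last" ]; then
    # curl --silent "$YOUR_PROVIDER_UPDATE_URL"   # provider-specific "ping"
    echo "$current" &gt; "$LAST_IP_FILE"
fi
</code></pre>
<p>Run it from cron every few hours and the quiet runs cost you nothing but one scrape.</p>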
<p>In a more serious vein, you might actually want to study and set up something like BIND to keep your IP and domain properly synchronized in a less hacky way. I haven't done it, and frankly feel quite lazy about it, but it's the way to go to make self-hosting much more robust, especially if your router supports it!</p>
<p>Finally, who would have thought that IRC would save the day in <em>this</em> sort of way, right? This will definitely be a trick I'll keep up my sleeve for the rest of my self-hosting adventures. But it also comes with a big privacy warning: your IP and ISP information is exposed to other IRC users unless you cloak or obfuscate it in some manner. Be careful out there if you need to be anonymous in a channel!</p>
<hr />
<p>Have you ever used an unorthodox method to "fix" your server like this one? How did it go? Let me know on <a href="https://fosstodon.org/@kzimmermann">Mastodon</a>!</p>
<hr />
<p>This post is number #41 of my <a href="https://100daystooffload.com/">#100DaysToOffload</a> project. Follow my progress through <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
            </div>
        </content>
    </entry>

    <entry>
        <title>Self-hosting a Git service: an easy way to more personal freedom</title>
        <link href="https://tilde.town/~kzimmermann/articles/self_hosting_git_server.html" />
        <updated>2021-01-28T06:43:35.440972Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>Self-hosting a Git service: an easy way to more personal freedom</h1>
<p>I'm a huge fan of <strong>self-hosting</strong>, the practice of running the services you need and use yourself, rather than consuming them from a third party such as Google or Amazon. These "free" service providers often don't have your best interests in mind and finance themselves by brokering your personal information to interested advertisers, which is really a stab in the back for anyone looking for privacy.</p>
<p>When you self-host something, however, you take all the responsibility and ownership yourself, and as a result have much greater control and flexibility over what you can do. I've done it extensively in the past (sadly, I currently can't for most of these services) and even posted a video on <a href="https://diode.zone/video-channels/kzimmermann_podcast">my Podcast</a> talking about the subject:</p>
<p>Out of all the services I've self-hosted before (website, XMPP service, NAS, etc.), <a href="https://git-scm.com/">Git</a> would probably be the last one I'd even start to consider, since I always wanted my software to be openly and freely shareable with as many people as possible, and there are so many code hosting services out there with interesting collaboration features, like <a href="https://notabug.org/kzimmermann">Notabug</a> or <a href="https://codeberg.org">Codeberg</a>.</p>
<p>However, I recently wanted an easy and quick way to sync my essays (such as this very one), as well as some code snippets and software projects, across the machines in my house, and didn't want to rely on a publicly accessible server on the internet just for this. Short of installing some sort of <code>rsync</code> solution, I decided to start researching how hard it would be to host my own git server. And it was <em>much easier than I thought</em>.</p>
<h2>Sizing the perfect solution for your case</h2>
<p>For starters, I looked into the existing self-hostable Git servers among the many repositories that can be accessed on the internet. <a href="https://about.gitlab.com/install/">GitLab</a> can be self-hosted on your own premises. There is also <a href="https://gitea.io/en-us/">Gitea</a>, a relatively new player. Notabug uses <a href="https://gogs.io/">Gogs</a>.</p>
<p>I'm pretty sure these are all fantastic alternatives and can do the job pretty well. However, they all seemed a little too big for my simple use case of sharing and versioning code snippets and essays with myself only. And some solutions like GitLab can be <a href="https://docs.gitlab.com/ee/install/requirements.html">pretty heavy</a> (4 GB of RAM and a four-core CPU for a minimal deployment) - overkill for my rather simple needs.</p>
<p>So what did I go for eventually? Just a barebones git server sitting on my raspberry pi. No web interface, no fancy database backend, just versioning and synchronization among a few machines in my network for as little resources as possible. Does that sound like your goal as well? Then follow on for how I did it.</p>
<h2>Setting up the server</h2>
<p>I was lucky to have found <a href="https://linuxize.com/post/how-to-setup-a-git-server/">this comprehensive guide in Linuxize</a> that was my main source of instructions in this task. The goal was not to have a full-fledged git server with many users and a neat web interface, but rather a simple, compact and light on resources solution using whatever I already had. Thankfully, the Linuxize guide fulfilled every aspect of it.</p>
<p>The solution involved a very clever use of commonly available server software like <code>sshd</code>, which is used for both authentication and transporting of the content between the server and the client, much in the same way that <a href="https://en.wikipedia.org/wiki/SSH_File_Transfer_Protocol"><code>sftp</code></a> works without being much of an "ftp" itself. The setup works with the following steps:</p>
<ol>
<li>Generate a new SSH RSA keypair to be used exclusively for git in the client computer.</li>
<li>Create a new user named "git" on the server, which will be doing all the updating and merging of the repositories.</li>
<li>Configure SSH in the client to allow you to log in as the git user in the server with your public SSH key.</li>
<li>Create a repository and commit to it.</li>
</ol>
<h2>Step 1 - Create a new SSH keypair</h2>
<p>This is the basic requirement for anyone who needs to do SSH the proper way (no passwords, only key-based credentials). We begin by creating a new RSA keypair with the <code>ssh-keygen</code> command:</p>
<pre><code>ssh-keygen -t rsa -b 4096 -C "email@domain.tld"
</code></pre>
<p>Your <code>email</code> can actually be anything, since it's just a comment, but I'd recommend typing something that identifies this keypair as being specifically for git, with nothing to do with your other SSH keys.</p>
<p>Once your keys are created (private key and a public key with a .pub extension), I recommend moving them both to your <code>~/.ssh</code> directory for neatness:</p>
<pre><code>mv id_rsa id_rsa.pub ~/.ssh
# for security, make sure nobody else can see this directory.
chmod 700 ~/.ssh
</code></pre>
<p>You'll use the public key later, so keep in mind where it is for now.</p>
<h2>Step 2 - Set up the git user in the server</h2>
<p>On the server side, create a new user called <code>git</code> with no password via this command:</p>
<pre><code>sudo useradd -r -m -U -d /home/git -s /bin/bash git
</code></pre>
<p>At first, this creation command sounds a little strange: how useful is a user that doesn't even have a password? But the answer relates to security: the git user cannot be brute-forced remotely, because there is no password, leaving <a href="https://www.digitalocean.com/community/tutorials/how-to-configure-ssh-key-based-authentication-on-a-linux-server">credential-based remote login</a> as the only option (which is considered much safer).</p>
<p>Now is also the time to install <code>git</code> if your system doesn't have it (unlikely, but worth checking):</p>
<pre><code>sudo apt install git # for debian-based systems
</code></pre>
<p>And with this, the server preparations are complete. <em>Yes, really.</em> We will be returning to the server side too, but in the meantime, let's go back to the client side.</p>
<h2>Step 3 - Configuring SSH on the client</h2>
<p>You now have to configure SSH on the client so you can log in and perform work on the server as the git user you created in Step 2. This step is missing from the Linuxize guide, and it actually gave me a bit of trouble until I figured it out. So let's get to it.</p>
<p>To allow the remote logging in via ssh, first append the contents of your public key that you generated in Step 1 to a file named <code>~/.ssh/authorized_keys</code> on the server, like this:</p>
<pre><code># copy id_rsa.pub from client to server, then on the server:
cat id_rsa.pub &gt;&gt; ~/.ssh/authorized_keys
</code></pre>
<p>Now create a file on the client side called <code>~/.ssh/config</code> and write the following content to it:</p>
<pre><code>Host gitserver
    Hostname your_server's_ip_address
    User git
    # or whatever you called your private key:
    IdentityFile /home/your_username/.ssh/id_rsa
</code></pre>
<p>With that entry, you can now log into your server as the git user by issuing <code>ssh git@gitserver</code> or even just <code>ssh gitserver</code>. You can name the <code>Host</code> alias whatever you like.</p>
<p>Now the final step is to create your first repository and commit to it.</p>
<h2>Step 4 - Create a git repository and contribute to it.</h2>
<p>Since SSH has been configured, we can now log into the server as the git user:</p>
<pre><code>ssh git@gitserver
</code></pre>
<p>To create your first repository, run in the server:</p>
<pre><code>git init --bare my_repo.git
</code></pre>
<p>Exit the shell (press Ctrl+D) and now on the client machine, clone that repository: </p>
<pre><code>git clone git@gitserver:my_repo.git
</code></pre>
<p>Git might alert you that you've cloned an empty repository, but that's not a problem, since you're going to start contributing now. </p>
<p>Enter that directory and create some files:</p>
<pre><code>cd my_repo
touch myfirstfile
</code></pre>
<p>Now commit that file normally, like you would with any other git server:</p>
<pre><code>git add .
git commit -m "Testing my local git server"
git push -u origin master
</code></pre>
<p>And voilà: your changes have been pushed to your git server! If you clone this repository from any machine now, that file you just committed will be there too.</p>
<p>If you want to include another user or machine among the "committers" of the repository, repeat steps 1 and 3 on the desired machine or user. You'll be able to start contributing normally from there as well.</p>
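<p>Concretely, the server-side half of that is just one more line in the git user's <code>authorized_keys</code> - for example (the filename here is a hypothetical stand-in for the new machine's public key):</p>
<pre><code># On the server: grant a second machine access by appending its public key
cat laptop_id_rsa.pub &gt;&gt; /home/git/.ssh/authorized_keys
</code></pre>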
<h2>Conclusions</h2>
<p>So there you have it: you've successfully self-hosted your first service, a git server, across your local network. Admittedly, this was much easier than it looks at first, and for small private projects it works very nicely. It also shows how flexible SSH is as a technology: it eliminated the need to set up an HTTPS certificate just to pass content along the network, in a manner similar to SFTP.</p>
<p>You can also use this setup to "sync" small stuff between machines, but keep in mind that git was designed primarily for text content that changes often (large static binaries like video would not work very well).</p>
<p>Is this scalable? Probably not very much; if you want to make your work public, you're probably better off using something like Gogs or Gitea. But it's still possible to ditch GitHub and go completely local for your private stuff.</p>
<p>Have you ever tried self-hosting your own git server before? How did you do it? Let me know on <a href="https://fosstodon.org/@kzimmermann">Mastodon</a>!</p>
<hr />
<p>This post marks my first installment in the <a href="https://100daystooffload.com/">#100DaysToOffload</a> challenge, a blogging challenge first issued by Kev Quirk. Let's see how this turns out!</p>
            </div>
        </content>
    </entry>

    <entry>
        <title>Please notice you've been pwned, Senpai.</title>
        <link href="https://tilde.town/~kzimmermann/articles/senpai_you_have_been_pwned.html" />
        <updated>2021-06-08T02:05:26.962267Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>Please notice you've been pwned, Senpai.</h1>
<p>What a great way to start your Saturday morning. </p>
<p>You receive an email written in Japanese amid a recent surge of Japanese-language spam in your inbox, but this one seems to be legit. At first glance, your infant Japanese language skills catch a few words, like "illegal" or "unauthorized," and you start thinking this might be serious. You paste the contents into Google Translate and realize that part of your data is about to go front page on some infosec journal:</p>
<hr />
<p><strong>Subject:</strong> Apology and notice regarding damage caused by unauthorized access to the website</p>
<p>This e-mail is sent to customers whose e-mail address may have been leaked due to the following unauthorized access damage.</p>
<p>[REDACTED] Co. may have leaked some email addresses of members or temporary member registrants owned by the Company due to unauthorized access to the Company's website. I confirmed that there is.</p>
<p>The information that may have been leaked this time does not include name, address, date of birth, or credit card information.
Regarding this matter, we would like to report on the current situation and future measures as follows.</p>
<ol>
<li>
<p>Status of unauthorized access: unauthorized access from the outside using SQL injection was confirmed, and as a result of the investigation, it was found that some e-mail addresses of [REDACTED] members or temporary member registrants may have been leaked outside the company. (SQL injection: An unauthorized access method that uses a SQL statement as a URL parameter to extract information from an unintended database)</p>
</li>
<li>
<p>Information confirmed to be leaked</p>
<ul>
<li>Number of cases that may have leaked: 46,421</li>
<li>Information that may have been leaked: Email address (Name, address, date of birth, and credit card information are not included).</li>
</ul>
</li>
<li>
<p>Response and countermeasures: After taking protective measures against attacks on this page, we have implemented the following.</p>
<ul>
<li>[REDACTED] reexamines the safety of [REDACTED] website</li>
<li>[REDACTED] strengthens [REDACTED] website development management system</li>
<li>Consultation with the Police Department regarding this matter</li>
</ul>
</li>
<li>
<p>To our customers: We will never ask customers who may leak information for their personal information (financial institution account, credit card PIN, My Number, etc.) by telephone, mail, email, etc. To prevent damage, please be careful about suspicious e-mails, such as refraining from opening e-mails and attached files.</p>
</li>
</ol>
<p>We sincerely apologize for causing a great deal of inconvenience and concern to our customers and related parties. In the future, we will strengthen the security of the server system and homepage and make thorough efforts to prevent recurrence.</p>
<hr />
<h2>Gomen-nasai</h2>
<p><img alt="You've Been Pwned, desu!" src="/~kzimmermann/images/youve-been-pwned-desu.jpg" /></p>
<p>What can I say... quite a lot of formalities in there for a single email - very Japanese style indeed. I love how they point out that <em>exactly</em> 46,421 accounts were compromised - that in itself tells us how the Japanese like to be precise. Also interesting how they disclose the cause up front as SQL injection, complete with a short layman's-terms explanation. Honest transparency, or weakness on their part? I'll leave the answer to you.</p>
<p>This experience marks the first confirmed time I've been pwned in my digital life. This is not to say that services I used previously, like old email addresses or forums, were never affected, but rather that this is the first one I'm actually aware of. The struggle is real, my friends, and OpSec is a must for anybody.</p>
<p>Naturally, the question shifts to the future: what am I going to do next? Or rather, what <em>can</em> I do next, now that an irresponsible company has mishandled my data?</p>
<h2>Am I pwned at this point?</h2>
<p>As much as it sucks to know that such an incident involving my data has happened, it turns out I'm not actually that worried in the end. I never used the service in question for anything financial, and I used <a href="https://tilde.town/~kzimmermann/updates/20201201_0453.html">a strong and unique password for my account</a> that can't hurt me through re-use elsewhere. In that respect, I can - as I did - simply change my password there again using my password manager and bam - identity problem solved.</p>
<p>A more serious problem is spam. Having read this announcement, it no longer surprises me that my email inbox (a throwaway account that I also use for pseudonymous logins) started receiving mildly targeted spam messages this week, written in Japanese and concerning sites only used by the Japanese. So far, it looks like an email address with a random character string like <code>df9weuowrh</code> has covered my tracks relatively well.</p>
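<p>Generating that kind of random local-part takes a single pipeline. A minimal sketch using standard coreutils - the lowercase-alphanumeric charset and the default 10-character length are my own choices, nothing this incident prescribes:</p>
<pre><code>#!/bin/sh
# rand_localpart.sh: print a random lowercase-alphanumeric string,
# usable as the local part of a throwaway email address.
rand_localpart() {
    # keep only [a-z0-9] bytes from urandom; default to 10 characters
    cat /dev/urandom | tr -dc 'a-z0-9' | head -c "${1:-10}"
}

rand_localpart
echo
</code></pre>
<p>Pair the output with any alias-friendly mail provider and the address reveals nothing about you.</p>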
<p>Besides the annoyance, there's also the possibility that my (until then) anonymous address is being marketed on the dark web and passed around like cracker candy. Who knows, maybe my spam count will go up in the coming weeks. But if it becomes increasingly annoying, all I'll have to do is create a new address. Let the spam dine in at an abandoned dummy account. It's getting harder to create one fully anonymously and without a phone number, but if all I need is protection against spam or mild stalking, I guess that's a reasonable trade-off.</p>
<h2>Lessons learned</h2>
<p>First and foremost: <strong>compartmentalize your digital life</strong>. Not only does this make it much more resilient against data loss (<a href="https://tilde.town/~kzimmermann/articles/project_128.html">replication of backups</a> over multiple accounts is key), but it also ensures that a compromised account on one badly maintained service does not spread to the rest of your online identity (if any).</p>
<p>Second, and this hooks straight into the previous one: <strong>use a password manager</strong>. Sorry, there's simply no excuse for not using one in today's world of countless websites and constant data leaks. Memorize one long password using something like <a href="https://diceware.dmuth.org/">diceware</a> and use it to unlock a database containing random, distinct passwords for each service you use. If you don't have physical dice with you, you can simulate it with a script like this:</p>
<pre><code>#!/bin/bash
# @diceware.sh: generate diceware-like passphrases
# USAGE: diceware.sh [LENGTH]
# Note: the words file location might differ on your system.
# This example works on Debian-based systems.

WORDS=/usr/share/dictionaries-common/words
LEN=5 # how many words by default?

if [[ -n "$1" ]]; then
    LEN="$1"
fi

# shuffle the wordlist, keep the first LEN words, join with spaces
shuf "$WORDS" |
    head -n "$LEN" |
    tr "\n" " "
echo
</code></pre>
<p>Third, <strong>slim down your online presence</strong> by deleting, for good, any online account you're no longer using. You can only do so much to guard an account on a dubious service; the best remedy is to remove outright any data you have online that you don't need. And what's the best way to know which accounts you're not using? A password manager, once again: it conveniently lists every account you've previously created.</p>
<p>And just like that, I'm not going to lose sleep over this incident, even if it sounded a little scary in the email. Knowing some basic OpSec has spared me from worrying about the possible bad outcomes of a data breach like this.</p>
<p>You've been pwned, Senpai, better catch up desuyo.</p>
<hr />
<p>Have you been pwned in a similar way to this one before? What other follow-up actions did you do in that case? What OpSec practices ended up passively protecting you from other consequences? Let me know on <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
<hr />
<p>This post is number #18 of my <a href="https://100daystooffload.com/">#100DaysToOffload</a> project. Follow my progress through <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
            </div>
        </content>
    </entry>

    <entry>
        <title>Spooktober 2023 scare stories: the scariest moment of my history using Linux</title>
        <link href="https://tilde.town/~kzimmermann/articles/spooktober_2023_linux_scariest_moments.html" />
        <updated>2023-10-28T22:12:27.614630Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>Spooktober 2023 scare stories: the scariest moment of my history using Linux</h1>
<p><img alt="Spooktober is here!" src="https://i.imgflip.com/3ed38v.jpg" /></p>
<p><a href="https://www.urbandictionary.com/define.php?term=spooktober">Spooktober</a> is back and I can't let this go without writing some of my "scary stories" accumulated during my time using Linux. Previously, I told a <a href="/~kzimmermann/articles/spooktober_living_dead_computer.html">spooky version of how I started using Linux</a>, but there lies a large gap between then and my experience today. They say that the beginning is the hardest part (adaptation, learning and so forth), but could it be that the middle of the journey also houses some frightening moments?</p>
<p>Turns out that yes, the road to mastery has some pretty significant frights - even if you've been on it for some time. And I'd say it might even be a bigger fright, because by that point you've "shed your training wheels" and the safety of the beginners' world, and are now darting through new things on your own. And in this Spooktober special, I'm going to share with you my scariest moment in learning and getting used to Free Software, which happened shortly after my first stint with Linux.</p>
<p>It goes a little like this:</p>
<h2>The botched Ubuntu 11.04 update</h2>
<p>Previously, I wrote a humorous account of how I <a href="/~kzimmermann/articles/spooktober_living_dead_computer.html">brought back my old laptop from the dead</a> using Linux (specifically Ubuntu 10.04 LTS), which became my first-ever introduction to what Linux and Free Software were in the first place.</p>
<p>Thanks to that adventure, I was introduced to a wonderful world of software freedom and, eventually, privacy, and I'll never look back. But just a few years later, I ran into another bump in the road, with another subject: <em>updating</em>. See, a year or so down the road, I saw that the constantly evolving Ubuntu system had a shiny new release, 11.04. Codenamed Natty Narwhal, this was it: bleeding-edge software and everything good that Linux had to offer at that point. Plus, it came with a <a href="https://www.engadget.com/2011-04-28-ubuntu-11-04-natty-narwhal-brings-new-unity-ui-controversy-to.html">new user interface called Unity</a> which was as controversial as it was seductive (HUD? Lenses? Whoa).</p>
<p>By then, I had already made myself comfortable with Ubuntu on my old chap and how it all worked, but still I wondered: how would my system be with some pretty cool updates?</p>
<p>So naturally I wanted to update it.</p>
<p>I looked up some instructions on how to perform an update in Ubuntu and figured that the built-in updater would get the job done, being graphical, easy to use and all. Sweetness, let's do it. All seems to be going well!</p>
<p>Except that after about an hour or so of updating, it didn't seem to be working anymore. Oh no. System still responsive, or so it seemed, but the installation wasn't going anywhere as far as I could tell. Ok, let's just give it some time, OK? OK??</p>
<p>Wait another hour or so, and yup, it looks like it's a lost cause. Ohh no... what do I do now? Clearly I didn't have many options: I could wait all night and see if some miracle would happen, or just be practical and face the reality that something had broken and that, either way, I probably wouldn't have a usable system anymore.</p>
<p>My mind raced for a while. I thought about the last time such a crash had happened, and how my previous data was lost forever. I couldn't fathom this happening again - not now, with Linux, and with the amazing things I had accomplished since then.</p>
<p>In the end, I chose to be practical and just hard-rebooted. Waited for the next boot to come in and, sigh, no luck. I had an unbootable system, and there was nothing I could do about it anymore.</p>
<p>Take a deep breath. Nice and big. Now scream internally.</p>
<p>Rats, I had lost it. I had a perfectly working system, even if it wasn't all the latest software, and now I have nothing. Oh well, nothing that can be done at this point, I have to start over.</p>
<p>And here's where my experience accumulated so far would help me: by then, given my 100% fascination with Linux, I was very used to the process of booting from an external USB drive and exploring a system live, be it on my own or on other people's computers. This allowed me to cook up a live USB, boot my broken system with it and - since, unlike the previous case, there was no hardware failure - rescue my data to an external hard drive. That done, and all the tears past, I'd then install afresh.</p>
<p>Luckily, this time the installation of Natty Narwhal went alright, and I rebooted again into a fully operational system, ready to house my data again. And just like that, that super scary moment in my Linux history came to an end. No more pain.</p>
<h2>Scariness as a measure of lack of experience</h2>
<p>Several lessons could be learned from this stint. The biggest one, I think, is how often something "scary" is just another word for "I don't have experience with it" (at least when it comes to computers). Today, LiveUSB'ing in and copying data to an external drive would be the very first thing I'd do in any seemingly complicated crash situation, but back then it was super scary just to think about the procedure.</p>
<p>After you've had some experience, you get familiar and it all gets much, much less scary. And this goes for everything else in computing: scary error messages, sudden crashes, tight situations, etc. It's often a question of not seeing it before and just having to read logs, searching error messages and asking around for advice otherwise on IRC, forums, etc.</p>
<p>There's no such thing as a software failure so critical that it can't possibly be recovered from by means of a backup and a re-install. And even the common hardware failures (often hard drive crashes) are just a purchase away from being fixed. You can absolutely fix everything in your computer with a little knowledge and a lot of will to learn something new.</p>
<h2>Backup. Always. No matter your experience.</h2>
<p>The other crucial lesson learned in this is the value of backups. This goes without saying, of course, but it's always more highlighted in events such as these.</p>
<p>In this case, I was lucky enough to be able to salvage things from the crash and restore my files even though the system itself wouldn't boot. But in another universe, this might not have been the case: what if the filesystem itself had become corrupted, and thus unmountable? What if some disk partitioning or re-sizing had been going on at the time of the crash and the volume had been left unusable? There's simply no way around it: back yo' data up now.</p>
<p>If you're looking for a beginner's guide on backups, perhaps my article on <a href="/~kzimmermann/articles/project_128.html">Project 128</a> can offer some light. It doesn't have to be complicated; it doesn't have to have multiple redundancies, be incremental, or be spread across multiple sites to work. But it has to be there in the first place, <em>before</em> the incident, in order to be useful (and you must be able to actually recover from it).</p>
<h2>Why not create a "bootstrap script?"</h2>
<p>And while the topic of disaster recovery is going, I'm going to extend it just a tad more to cover one more thing: bootstrapping. I'm not sure if that's the right name for the concept, but I'm referring to a script (or similar) that, after a bare installation of your OS is complete, configures and customizes your machine to a personally usable state as fast as possible. Mine, for example, sets up a full machine for me in under 10 minutes, depending on connection speed.</p>
<p>This could be as simple as a script that installs all the packages you need for desktop use, but it could also copy config files, create directory structures, set keybindings and everything in between. I'm told that in the corporate world this is usually done via Ansible, which I hear allows for an immense amount of customization. </p>
<p>In the context of this post, the purpose of this bootstrap mechanism is to help you get "on your feet" as fast as possible after a mistake. Let's say your system broke because you were curiously exploring something and slipped up. Well, you can't roll back, but you can surely "roll forward" by speeding up the setup back to that point. Not to mention that it also works great when you're setting up new machines.</p>
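<p>As a concrete sketch of the idea - the package list, dotfiles URL and paths below are illustrative placeholders, not my actual setup - such a bootstrap script can start as small as this:</p>
<pre><code>#!/bin/sh
# bootstrap.sh: turn a bare Debian-ish install into a working desktop.
# PKGS and DOTFILES_REPO are placeholders; substitute your own.
PKGS="vim tmux git rsync firefox-esr"
DOTFILES_REPO="https://example.com/me/dotfiles.git"

# set DRY_RUN=1 to preview the steps without touching the system
install_packages() {
    if [ -n "$DRY_RUN" ]; then
        echo "would install: $PKGS"
    else
        sudo apt-get update
        sudo apt-get install -y $PKGS
    fi
}

deploy_dotfiles() {
    if [ -n "$DRY_RUN" ]; then
        echo "would clone $DOTFILES_REPO into $HOME/.dotfiles"
    else
        git clone "$DOTFILES_REPO" "$HOME/.dotfiles"
        cp -r "$HOME/.dotfiles/." "$HOME/"
    fi
}

DRY_RUN="${DRY_RUN:-1}" # drop this default when ready to run for real
install_packages
deploy_dotfiles
</code></pre>
<p>From there it grows organically: every time you find yourself tweaking a fresh install by hand, fold that step into the script.</p>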
<h2>Conclusion</h2>
<p>Whoo, what a scary tale! Living is learning, and you <em>will</em> keep making mistakes in your Linux usage, some of which might be quite scary, but rest assured that you will be able to recover from them, provided you have regular backups and a desire to keep learning in spite of mistakes. And if you do make these mistakes, having a way to speed up the setup process is great.</p>
<hr />
<p>What were some "scary moments" of your using and learning Linux that you can remember? Are they still scary in hindsight? Let me know on <a href="https://fosstodon.org/@kzimmermann">Mastodon</a>!</p>
<hr />
<p>This post is number #45 of my <a href="https://100daystooffload.com/">#100DaysToOffload</a> project.</p>
            </div>
        </content>
    </entry>

    <entry>
        <title>Spooktober Scary Computing Stories: The Night of the Living Dead Computer</title>
        <link href="https://tilde.town/~kzimmermann/articles/spooktober_living_dead_computer.html" />
        <updated>2022-10-26T21:42:27.600111Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>Spooktober Scary Computing Stories: The Night of the Living Dead Computer</h1>
<p>October is almost over, and with its cooler weather and orange-themed decorations comes a collection of Halloween-themed resources around the internet, complete with scary horror stories.</p>
<p>I don't exactly consider myself a good novelist and I don't know many good spooky stories, but this year I decided to do something different and try my hand at storytelling mixed with a dollop of computing lessons. The result is this: the <strong>Spooktober Scary Computing Stories</strong> series. This is story #1 of I-don't-know-how-many, but it should keep readers entertained and perhaps even share a lesson or two with them.</p>
<p>So, gather closer around the fireplace, children, for tonight I'll tell the story of <em>The Night of the Living Dead Computer</em>.</p>
<p><img alt="Frankenstein scene, but the creation is actually a Dell Laptop" src="https://tilde.town/~kzimmermann/images/spooktober.jpg" /></p>
<hr />
<p>Twas on a gloomy October night much like this one, many, many years ago, when I was a young aspiring man in my college dorm. My trusty computing companion of many years (a Dell Latitude D620 laptop) was sitting on my desk as usual, juggling class assignments and a few games in between. I had known this lad for many years, all throughout High School, and knew exactly where it would excel or fail - a real intimate relationship, if you may call it that.</p>
<p>That fateful evening, however, everything would change. I had left old friend on while going out to the cafeteria to get some dinner as usual, but as I came back, I noticed something was wrong. His screen was not off and sleeping as usual, but instead emitted a faint dark grayish glow - the sort of light that indicated it wasn't completely off, but was really struggling to keep itself on. Old chap did not respond to my keystrokes or mouse movement, and despite his fans working hard, it didn't seem that he was going anywhere from that point. My heart started pounding faster.</p>
<p>I resorted to the old but sure trick of computing: the hard-reset. I mashed the power button and watched old chap plop off and restart. C'mon, this should be the CPR it needs to get back up. I have stuff in there that I need! But after it passed the POST screen, the same ghastly gray glow appeared and it got stuck there again. This ain't good, I thought, while fruitlessly attempting to do it again and again. It was lifeless.</p>
<p>A few minutes in, I realized that it was hopeless. Old chap had suffered a stroke of sorts, and would not be coming back to this world as he used to be. I was off to arrange his proceedings and have him formatted and reinstalled by the campus IT support desk when a figure, emerging from the shadows in the corner of the dorm hall, called to me. It was my dorm room neighbor, whom many saw as a mysterious sorcerer of sorts when it came to computer technology. </p>
<p><em>A computer problem, I see.</em></p>
<p>Carrying Old Chap's lifeless chassis in my arms, I turned slowly to face him.</p>
<p><em>It is a pity that so many face the same issue and think their computers are hopelessly broken, as if dead. Yet, the reality could not be further from that. Besides, it's too late in the evening to try the IT support desk.</em></p>
<p>I squinted while he approached me with this mysterious tale.</p>
<p><em>Pity what happened to him, but know that you do not have to replace your old friend, and, I dare say, not even resort to the quackery of this place's IT support. What I can offer you is the chance to have your friend back in almost the same shape as before, through the means of a little-known but extremely efficient... potion.</em></p>
<p>My neighbor drew out a bright red USB stick from his pocket and motioned me to it. Inscribed in it was a name of what I could only think as an ancient powerful spell: <code>Ubuntu</code>.</p>
<p><em>Apply this and your dear friend will still have another chance...</em></p>
<p>As I reached out to take the mystery stick, the neighbor pulled it back to himself with a final warning:</p>
<p><em>Mind ye, however... that your friend might not come back exactly like his old self. Though powerful, the effects of the potion may work... in mysterious ways sometimes.</em></p>
<p>Contemplating lifeless Old Chap one last time, I drew in a deep breath and took the red stick in my hand. I was ready to do this.</p>
<p>"Excellent. Take him back to your room and plug it in your friend. His hard drive may have failed, but that's not the end of him. Boot from the stick and ye shall see a new mysterious, but powerful world..."</p>
<p>And as he spoke this, the neighbor faded slowly into a dark corner, leaving me alone into the light of the deserted dorm hall.</p>
<p>I immediately went back into my room and cleared my desk. That was it, I was going to bring Old Chap back to life, and I was going to do it now. I put him back into the power socket and plugged the stick into the USB. But before I could even push his power button, my rational mind was already pounding me:</p>
<p><em>This can't be, how will a computer come back to life simply from a tiny USB stick? It doesn't even have a working hard drive! How can it even load anything useful at all?</em></p>
<p>At that point, I had nothing to lose, so I drew a deep breath and slammed the finger into the power button. Watching the POST screen come up was straightforward enough, because after all, I had seen Old Chap's boot sequence countless times. What came up afterwards, however, was beyond my wildest of dreams.</p>
<p>A dark screen with nothing except an orange-brown circular logo and the word UBUNTU greeted me with a loader prompt that looked nothing like old Windows XP's boring scrolling bar thing. Mesmerized, I then watched it turn into a brown-purple background with the sound of drums beating that startled me. <em>How can this be happening? How can all of these come from a tiny USB stick?</em> I thought to myself while watching the mysterious thing unfold itself.</p>
<p>When Old Chap's new booting process finished, a two-bar brown-purple desktop greeted me. <em>What is this thing? It ain't no Windows that I know of!</em> I thought. Though very different from what I had seen so far, the interface seemed inexplicably familiar. Could this be a Mac of sorts? And most impressive of all: Old Chap seemed so... <em>light and so goddamn fast!</em> What sort of black magic resides in this USB stick?</p>
<p>I then saw a familiar icon there on the desktop - Firefox. That same browser I use in Windows, but here? Well, let's open it... wait, it's <em>already</em> open? How the hell! In Windows it'd surely have taken way longer. It's browsing the web much like it did before, a spotless experience I'd say. But the freakiest of all: it's not touching the hard drive at all!  <em>My dead computer has come back to life!</em> </p>
<p>"And now you, too, can see it," my neighbor spoke from the shadows behind me, "that it is not because of a fragile and ephemeral hard drive's failure that our hardware is deemed dead. That is but a lie, an illusion perpetuated by those who seek to profit from your ignorance." He walked to my side, resting his hand next to Old Chap's now working chassis. He pointed at the screen.</p>
<p>"The world of Linux isn't always exactly like what you are used to seeing and experiencing in terms of computer. But often we find that this 'different' is rather for the better."</p>
<p><em>Linux? What did he just-</em></p>
<p>"Ah, yes, GNU-slash-Linux. A sane, elegant operating system for an elegant user." He sighed. "If only more of such 'users' were a little more open-minded towards trying new things!" </p>
<p>I noticed there were indeed some things I found strange about this Linux system Old Chap was running. What is this OpenOffice thing? Why is this Explorer thing to find files called Nautilus? Is it an obscure Jules Verne reference? Questions, questions...</p>
<p>He continued: "What your old friend is running at the moment happens to be a distribution of it, oriented towards beginners. It's called Ubuntu."</p>
<p>"B-but just how? How can a computer work without a hard disk? That's like a human not having a brain or something," I asked.</p>
<p>"There's more to a computer than just a failed hard drive, even if that's all your previous operating system allowed you to see. You can boot it from almost anything. Once you know that, there will be no limit to what your old friend can do!" said my neighbor.</p>
<p>He leaned his weight on the desk and motioned to Old Chap again. "Alas, but you so far have only sensed a small taste of all its capacity. To really dive in and experience truly what freedom and power await, you should take the plunge into the next level. No more <em>live medium</em>. You will perform a full install of Linux!"</p>
<p>I almost choked. "A-a what?"</p>
<p>"Yes, a frugal installation straight to a Hard Drive! For there's no other true way of immersing yourself in it. But you will need to procure a new Hard Drive to house your new OS. Take this and make a backup. See what you can salvage." He tossed me a USB key that he fished out of his pocket. And with that, my neighbor walked out into the corridor, disappearing into the shadows.</p>
<p>I took a look at Old Chap and his newfound vitality. His speed and efficiency. <em>Power</em>. A power I had never tasted until now. And at that moment I knew what I had to do. Old Chap would need a new hard drive, but it would contain nothing of his old ways. He and I had just been initiated into Linux, and there was no turning back from then on.</p>
<hr />
<p>How about <em>that</em> for some Spooktober fiction, eh? I'm not sure I have a hand for writing fiction, but I guess I could try. And with this being freely licensed, you are free to extend the story at will.</p>
<p>There are definitely some lessons to be taken away from this whole story. I think the biggest one is this: you almost never can really, really brick a <a href="https://boingboing.net/2012/01/10/lockdown.html">(general use)</a> computer. Pretty much anything can be fixed by yourself with a new medium and reinstalling the OS (even Windows). </p>
<p>Caught some incurable sort of malware? Reinstall and you're right. Partition table messed up? Reinstall is usually the easiest way out. Hard drive failed? Guess what. You can even try to salvage (some) data from your older OS install to minimize a little the damage. The only reason why I would take any computer of mine to a repair shop is if some bizarrely complex hardware damage happened (of the likes of logic boards frying or water damage) and I have to replace something really hard inside. Otherwise, I am my own tech support.</p>
<p>A second, but perhaps equally important one: back up your stuff, and check your backups frequently! Had I been cautious with my backups in the first place, maybe this story would've turned out very differently. Perhaps it would've been just a matter of saying "shucks" and rolling back to the most recent one on a new medium. But that seemingly complete loss at that critical moment left me with a hole in the stomach right then. </p>
<p>A small silver lining at that moment, however, was that I had a clean slate, a stage where I could build from the ground up without any fear of breaking something.</p>
<figure>
    <img src="https://tilde.town/~kzimmermann/images/tyler.png" alt="a drawing of Tyler Durden holding soap in black and white" />
    <figcaption>
        Sort of a "It's only after we've lost everything that we're free to do anything" moment I guess?
    </figcaption>
</figure>

<p>And third: "scary" is oftentimes a euphemism for "I don't really understand it." Typing commands in the <a href="https://tilde.town/~kzimmermann/articles/living_in_linux_terminal.html">command-line</a> is pretty scary for someone who only knows how to point and click. Skydiving is pretty scary for me, who has never done it. Social interactions can be scary if you're not used to them. </p>
<p>So is Linux really necessarily harder to use or scarier than Windows? Or is it just that you've grown used to having Windows everywhere you go (school, home, work, etc) but not Linux? Is FreeBSD really harder to use than Linux? Knowledge changes everything.</p>
<hr />
<p>Have you ever had a "disaster" of this sort happen to you so that you could begin anew in terms of computing? How did <em>you</em> get started with Linux? Let me know on <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
<p>Happy Spooktober for everyone producing this theme's content, and hope that you've enjoyed my piece of fiction. Who knows, do you think I should write more of these? Let me know.</p>
<hr />
<p>This post is number #39 of my <a href="https://100daystooffload.com/">#100DaysToOffload</a> project. Follow my progress through <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
            </div>
        </content>
    </entry>

    <entry>
        <title>kzimmermann's State of the Distro - December 2023</title>
        <link href="https://tilde.town/~kzimmermann/articles/state_of_the_distro_2023.html" />
        <updated>2023-12-29T15:03:00.432297Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>kzimmermann's State of the Distro - December 2023</h1>
<p>I've decided to start posting a more or less regular series of updates here regarding my OS usage and which distros I've been fiddling with lately. I'll post these quarterly, once per season, and maybe they can serve as a kind of thermometer of what my interests and technical expertise are at a given time. So here we go: <em>kzimmermann's State of the Distro</em> - December 2023 Edition!...</p>
<p>... but first, let me lay down my rules:</p>
<h2>Rules of the State of the Distro</h2>
<p>I'll rank one distribution #1, as per my opinion, in each of the following categories:</p>
<ul>
<li>Desktop usage (not gaming). Could be in an actual desktop or a laptop.</li>
<li>Portable install in USB drive</li>
<li>Server</li>
<li>Raspberry Pi</li>
</ul>
<p>I might add a runner-up or two for each if I feel the choice was close, but that's not a guarantee. Finally, I'll name a general overall winner at the end. </p>
<p>Of course, a reminder that this assessment is just my own <strong>opinion</strong>, and does not necessarily reflect an actual technical assessment of these projects. So please don't get upset if you don't see your favorite distro in here somewhere, or if I say something that isn't what you think, okay?</p>
<p>And now, let's go.</p>
<h2>Desktop</h2>
<p><strong>Winner:</strong> <a href="https://www.debian.org">Debian Linux</a> and its systemd-less counterpart <a href="https://www.devuan.org">Devuan</a></p>
<p><img src="https://i.pinimg.com/originals/a4/29/ff/a429ff063cbd4a5760c352cf98c51351.png" alt="debian logo" width="250px" /></p>
<p><em>Cue in the booing from rolling-release fanboys that think that Debian is an outdated, slow to catch up and cringy distro to be installed on the Desktop.</em> Finished? Alrighty.</p>
<p>If I need an OS on the desktop that is easy to set up, reliable, secure and basically will always work when I need throughout the updates, here it is: Debian. But what about the latest version of Firefox? Get it if you want, of course. Set up a backport, download it from a <a href="https://flathub.org/apps/org.mozilla.firefox">flatpak repo</a>, or <a href="https://www.mozilla.org/en-US/firefox/linux/">just get it straight from the source</a>.</p>
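<p>For reference, enabling backports is a one-line apt source plus an install with a target release. A sketch assuming current Debian stable ("bookworm") - adjust the codename for your release, and note that whether any given package has actually been backported varies:</p>
<pre><code># /etc/apt/sources.list.d/backports.list
deb http://deb.debian.org/debian bookworm-backports main

# then, as root:
#   apt update
#   apt install -t bookworm-backports some-package
</code></pre>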
<p>The repositories are so chock-full of stuff that if you can't find a program in there, either that program is probably not a very big deal, or there's a third-party, community-maintained build for it. Either way, with Debian, you're going to get it.</p>
<p>The criticism that you can't have the latest version of things is valid in some cases, though, especially kernels. For this, I have lately experimented, with huge success, with an alternative: just <a href="/~kzimmermann/articles/running_debian_sid.html">run the bleeding-edge unstable branch</a>. Unlike its namesake, Debian Sid (or Devuan Ceres) isn't so unstable that you can't use it daily like a normal OS - I'd rate its stability at about the same as Arch Linux's. You just have to take a few precautions; otherwise it's very straightforward and comfortable.</p>
<p>Stable or not, the experience that I have with Debian on the desktop is always great. Thus it is my choice for this season - and year's - desktop distro.</p>
<h2>Portable install to go</h2>
<p><strong>Winner:</strong> <a href="https://alpinelinux.org">Alpine Linux</a></p>
<p><img src="https://avatars2.githubusercontent.com/u/7600810" alt="alpine linux logo" width="250px" /></p>
<p>There was a time, both for me and in the history of Linux distros, when the perfect portable LiveCD distro was the golden quest, as holy as the grail for computers - perhaps better described as a never-ending search. KNOPPIX had great software available, but was too big and slow. Puppy was fast and useful, but some design choices made it a little weird for some. antiX was too confusing: do we really need to choose from 5 DEs and that many modes of operation? Other LiveCDs varied here and there, but each missed something I wanted. And surrounding it all: the issue of <em>persistence</em>. How can we "save things" across reboots?</p>
<p>This year, my quest ended. I installed Alpine on a USB stick, did some customization, and that was it - I now have the perfect portable OS for myself. Props, of course, to the teams developing those other live distros (the amount of stuff they fit into such a small, lightweight package is amazing), but carrying something you've customized completely is simply unbeatable. I can even seamlessly update the whole base of the distribution, something that to this day feels clunky with strictly-LiveCD distros.</p>
<p>With Alpine, I have a solid, secure, yet lightweight base on which I can build the system exactly the way I want. And even though it might wear the drive more, <a href="/~kzimmermann/articles/alpine_linux_desktop.html">sysmode, the frugal install,</a> works very nicely, almost like an internal OS. I have yet to find this flexibility and small size in other distros, or in the BSDs.</p>
<h2>Raspberry Pi</h2>
<p><strong>Winner:</strong> <a href="https://www.freebsd.org">FreeBSD</a></p>
<p><img src="https://i.pinimg.com/originals/79/13/3b/79133b288ce990edc0ec6cee9eb58475.png" alt="freebsd logo" width="250px" /></p>
<p>Whoa - a BSD in this Linux fest? Oh yeah. Believe me, when it comes to the Pi (at least until the 4), FreeBSD is <em>better</em> than Linux.</p>
<p>It's better because its performance is unlike that of any Linux distribution I've ever seen on it, even with <code>cpupower</code> activated and overclocking. Nope, no match - FreeBSD's performance on the Pi is still way better, even without overclocking. You can browse the modern web, have things scroll smoothly, watch videos, and even play some 3D games like Quake on it! And if you overclock it a little (2GHz), you can even make it run that gargantuan MS Teams.</p>
<p>But what about all that lackluster driver support? WiFi drivers still on the 802.11g standard and all? Surely you can't be serious about it when Linux offers all that support out of the box, right? </p>
<p>Wrong, actually. For starters, the drivers provided for the Pi's hardware are often half-assed proprietary blobs, kept nicely obscure by its manufacturer Broadcom - of the same fame as the shitty <a href="https://wiki.archlinux.org/title/Broadcom_wireless#History">b43 WiFi driver</a>. This culminates in drivers that to this day don't work completely even in the official Raspberry Pi OS, like graphics that don't support xrandr or even a simple <a href="https://jonls.dk/redshift/">redshift</a> screen dimming. And the audio through the 3.5mm jack? One-way only. Can't record through it - ever. Even the built-in dual-band WiFi is shaky - often I won't see 5GHz APs.</p>
<p>Thus, with all this considered, I take back <a href="/~kzimmermann/articles/30_days_on_a_pi.html">this comment</a> that I made last year about FreeBSD on the Pi:</p>
<blockquote>
<p>(...) I would have kept FreeBSD in there, since it's also an amazing OS, but there was one major caveat: lack of support for wifi and sound. Though I could've used a USB dongle for WiFi (not optimal, but hey), lack of sound was a large disappointment (...)</p>
</blockquote>
<p><strong>Rectifying:</strong> I no longer think FreeBSD is really at fault if the driver support for the hardware is not helpful to begin with. Even drivers you find for Linux are shaky at best.</p>
<p>So yes, I will keep using FreeBSD on the Pi. As a desktop. With USB WiFi and audio adapters for those functions, because the built-in hardware is of little use even under Linux. And with <em>those</em> USB adapters - and FreeBSD - the Pi works <em>really</em> well, truly desktop-like.</p>
<h2>Server</h2>
<p><strong>Winner:</strong> Debian Linux.</p>
<p><img src="https://i.pinimg.com/originals/a4/29/ff/a429ff063cbd4a5760c352cf98c51351.png" alt="debian logo" width="250px" /></p>
<p>This is a somewhat unfair assessment because I don't really set up many servers at home, nor do I make much heavy use of them. I do have a web server (this site) and an <a href="https://kchat.port0.org">XMPP server</a> set up on a Raspberry Pi, but otherwise don't use them a whole lot. Still, I found that using Debian on the Pi is a real joy. Easy and simple to set up, a familiar environment and all. So I'm keeping it.</p>
<p>This verdict is about to be overshadowed, however, by my growing liking for FreeBSD lately. Maybe I should try subbing it in for a while next year? Or explore the server environment more as a whole.</p>
<p>(why not Devuan here? Because I couldn't find a working image for the Raspberry Pi...)</p>
<h2>Overall distro of 2023</h2>
<p><strong>Winner:</strong> Debian Linux or Devuan</p>
<p><img src="https://i.pinimg.com/originals/a4/29/ff/a429ff063cbd4a5760c352cf98c51351.png" alt="debian logo" width="250px" /></p>
<p>With all the praise I gave to Debian in this article, this isn't very surprising. Truth is, I'm so used to Debian (and more recently with the unstable branch) that given a choice to <a href="/~kzimmermann/articles/frankensteining_salvaged_laptop.html">"liberate" another computer</a>, Debian would be the first thing that would come to mind. Of course, I'm still up to experimenting with other distributions, especially distributions that I have never really tried seriously before like Fedora. </p>
<p>The only sad part of Debian is that they <a href="https://www.theregister.com/2023/12/19/debian_to_drop_x86_32/">decided to drop 32-bit x86</a> from its supported architectures in future releases, which means that my myriad of old computers back home will have to be eventually migrated to yet another distro - probably Alpine. What can I say, though, it was a great ride, and it taught me a lot of what I know about Linux.</p>
<p>And while I don't really mind systemd too much, I found OpenRC a lot more comfortable alongside the other distros I chose to use, so in those cases I'd choose Devuan instead.</p>
<h2>Conclusion</h2>
<p>So there you have it: my top distros for Dec 2023, ranked by my use cases, and all of course according to my opinion. What do you think of my choices? Is there any distro you'd recommend me to try as well? I know there are families like openSUSE and Fedora that I have never touched in my life. Let me know your suggestions!</p>
<p>And in the meantime, Happy 2024 and see you on my next edition!</p>
<hr />
<p>This post is number #49 of my <a href="https://100daystooffload.com/">#100DaysToOffload</a> project.</p>
            </div>
        </content>
    </entry>

    <entry>
        <title>The battle for privacy implies metadata protections...</title>
        <link href="https://tilde.town/~kzimmermann/articles/the_battle_for_privacy_implies_metadata.html" />
        <updated>2021-01-16T09:31:33.461886Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>The battle for privacy implies metadata protections...</h1>
<p>I feel lately that the whole discussion about end-to-end encryption and its privacy implications is starting to miss the point.</p>
<p>Sure: e2ee is a requirement for privacy of communications (it's the most basic of all of them, actually). However, the next fight for your data lies not anymore on the <em>content</em> of your communications, but rather, on the <strong>context</strong>. Yes, I'm talking about metadata.</p>
<p>This shift is like the wars to liberate software - that is, code - from proprietary developers back in the 90s and early 2000s. Nowadays, whether the code for an application is open source or not is moot, because almost everyone makes it open source. The real money is made with the data said applications collect about you.</p>
<p>So encryption of content is a no-brainer, a basic requirement that everyone should be implementing as the default. But the next battlefield, our next area of concern, should be metadata. Developers and hackers should take care to make their applications store as little data as possible about users, and to keep metadata as hidden as possible. </p>
<p>For now, only a few messengers do this as the default that I know of: the <a href="https://briarproject.org/">Briar Project</a>, <a href="https://getsession.org/">Session</a> and the back-from-the-dead <a href="https://ricochet.im/">Ricochet</a>. Every other messenger (yes, including Signal and XMPP) will have to implement workarounds to achieve this metadata anonymization, namely running them through <a href="https://torproject.org">Tor</a>.</p>
<p>If we start thinking about metadata <em>first</em> (because, after all, it is practically the only thing that's worth collecting) and by default take actions to avoid it, questions such as the <a href="https://www.bloomberg.com/news/articles/2021-01-11/why-whatsapp-s-privacy-rules-sparked-moves-to-rivals-quicktake">updating of the WhatsApp privacy policy in 2021</a> could become moot - there would be simply nothing interesting to collect.</p>
<p>What do you do to protect your privacy of metadata when communicating?</p>
            </div>
        </content>
    </entry>

    <entry>
        <title>The good web and the bad web</title>
        <link href="https://tilde.town/~kzimmermann/articles/thegoodweb.html" />
        <updated>2020-09-18T05:12:43.043412Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>The good web and the bad web</h1>
<p>I find it fascinating how tilde.town aims to recreate the internet as it was in its primordial days - the late 90s on the web, and even older outside the browser - through ssh and the social services available inside the server. I'm afraid, however, that this is the last vestige of the "Good Web," since corporate greed and surveillance have basically ruined everything since then.</p>
<p>Nowadays I don't surf the internet without protection any more than I would have sex without protection. Especially when personal data is being shared by 3rd parties in the background more often than we can imagine. However, a small subset of the internet still remains non-intrusive, efficient and fun, all traits that for me characterize the Good parts of the web.</p>
<p>Here's a small list of what they are, and why:</p>
<h2>The good web</h2>
<h3>Tilde.town</h3>
<p>A safe home for anyone wanting to express creativity and learn more about Linux and technology, kindly maintained by <code>vilmibm</code> and mods.</p>
<h3>Diaspora network</h3>
<p><a href="https://diasporafoundation.org/">Diaspora</a> is a decentralized social network with privacy and anti-censorship in mind. It feels like a mix between some kind of Facebook and Tumblr, where you have a profile and can view content posted in several instances (called pods) around the internet. Speech is quite liberated, and tracking can be completely removed by choosing the right pod.</p>
<h3>GNUSocial / Mastodon</h3>
<p>Two "cousin" networks with a user experience similar to Twitter. I myself used to be quite active in one instance until it shut down. Like Diaspora, it's distributed and users are encouraged to add more instances to make the federation stronger.</p>
<p>Although GNUSocial seems to have diminished in popularity, <a href="https://joinmastodon.org/">Mastodon</a> seems to be growing everyday. It even has a command-line client, called <code>toot</code>.</p>
<h3>Wikipedia</h3>
<p><a href="https://en.wikipedia.org">This</a> is the closest you'll ever get to a truly Free Education or University Degree.</p>
<p>Just learn how to take some of the content with a grain of salt (e.g. breaking news, heavily politicized subjects, some historical figures), and the rest is pretty damn good - and free ($$$).</p>
<h3>Archive.org</h3>
<p>A treasure trove of files and content extracted from around the web. The closest thing to a real library you can get online. Search hard enough and you may even find some piracy (ahoy!)</p>
<p>Also your next source of great <a href="https://archive.org/details/etree">Live Music taped straight from concerts</a>.</p>
<h2>The bad web</h2>
<p>Anything that requires an "app" to access.</p>
<p>Anything that requires Javascript to work (or be presentable).</p>
<p>Anywhere images and "scripts" outweigh the actual content.</p>
            </div>
        </content>
    </entry>

    <entry>
        <title>Updating to Debian 11 Bullseye</title>
        <link href="https://tilde.town/~kzimmermann/articles/updating_debian_11.html" />
        <updated>2021-08-28T01:18:39.328177Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>Updating to Debian 11 Bullseye</h1>
<p>Though I had not daily-driven Debian GNU/Linux for a couple of years (I stopped around the Jessie-Stretch era), a recent proposition of interest (the <a href="https://tilde.town/~kzimmermann/articles/my_old_computer_challenge.html">Old Computer Challenge</a>) revived my interest in it, and I picked it back up on the Buster release, version 10.3. Little did I know at the time that there was another big release coming up just around the corner.</p>
<p>Debian 11, codenamed <strong>Bullseye</strong>, was <a href="https://www.debian.org/News/2021/20210814">released a few weeks ago as of this writing</a>, on August 14th, 2021. This was a great piece of news, as this OS not only carries its signature stability and a huge load of ported software, but is also the rock-solid base on which a myriad of other OSes build. With this update, the 5.10 kernel line finally makes its debut on Debian, and lots of other big updates arrived as well. And with this announcement coming just after I finished the challenge, I got to thinking: hey, it's time to update!</p>
<p>It should be noted that most releases of distributions come with release bugs that even the most extensive of tests can't completely iron out, and usually I wouldn't do this update in such a short period after the first release. My game plan usually involves waiting a few more weeks, perhaps a month and a half, until I feel the release has matured enough - which also gives me time to catch up on <a href="https://tilde.town/~kzimmermann/articles/project_128.html">backups</a> before I jump in. Why did I do it so early this time? Because the machine I'm using is mostly a throwaway one, having been sourced - quite literally - <a href="https://tilde.town/~kzimmermann/articles/old_pc_new_tricks.html">from the trash</a> and housing almost no data. If it broke, big deal!</p>
<p>So perhaps you might wanna consider taking a little more time before updating your box if it contains important data or has a lot of services and specific software installed in it. But here's how it went for me anyway:</p>
<h2>Preparations</h2>
<p>This is the step before the update where you back up all your data and config files before jumping in the water, but for the reasons pointed out above, I mostly skipped it. Here's what I would have done, though.</p>
<p>First and foremost, back up not only your user data (which you should already have backed up regularly, right?) but also your <em>configuration files</em>, both at dotfile level inside <code>$HOME</code> and for each service run by your machine, like your web server, database server, Tor relay, etc. Sometimes their config files change per large update, and you should keep a backup to restore them if they get overwritten.</p>
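<p>As a rough illustration, that pre-upgrade backup can be as simple as two tarballs - one for <code>$HOME</code>, one for <code>/etc</code>. The destination path here is hypothetical; point it at your actual backup medium:</p>

```shell
# Hypothetical backup destination - adjust to your own medium
BACKUP=/mnt/backup/pre-bullseye
mkdir -p "$BACKUP"

# Dotfiles and user data from $HOME
tar czf "$BACKUP/home-$(date +%F).tar.gz" -C "$HOME" .

# System-wide configuration: apt sources, web server, database
# and Tor relay configs all live under /etc
tar czf "$BACKUP/etc-$(date +%F).tar.gz" /etc
```

<p>Dated archive names let you keep one snapshot per upgrade attempt, so a botched restore never overwrites your only copy.</p>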
<p>Afterwards, that's where the magic starts to happen in the actual update. There are really only three basic steps to updating a Debian machine to a newer release:</p>
<ol>
<li>Point <code>sources.list</code> to the new release.</li>
<li>Update <code>apt</code> and install new packages.</li>
<li>Reboot.</li>
</ol>
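<p>Condensed into commands, those three steps look roughly like this - a sketch assuming a simple Buster-era <code>sources.list</code>; the <code>security</code> line needs the manual edit described in the next section rather than a blind search-and-replace:</p>

```shell
# 1. Point sources.list at the new release (sed keeps a .bak copy)
sed -i.bak 's/buster/bullseye/g' /etc/apt/sources.list

# 2. Refresh the package index and pull in the new release's packages
apt-get update
apt-get dist-upgrade

# 3. Boot into the new release
reboot
```
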
<p>Sounds easy enough? That's because it is. But let's delve a little deeper into the process' steps next:</p>
<h2>Update sources.list</h2>
<p>The Debian team makes a <em>huge</em> effort (and by huge, I <a href="https://micronews.debian.org/2021/1628939409.html">really mean huge</a>) in building the new packages for the new release, and they are stored isolated from the previous releases so that dependency mismatch doesn't happen when installing things via the package manager. When you want to change the release, you have to explicitly tell <code>apt</code> to use the new package sources, which is done by configuring the file <code>/etc/apt/sources.list</code>.</p>
<p>The format and default syntax for Bullseye have changed slightly from previous releases of Debian, as hinted at in the <a href="https://wiki.debian.org/DebianBullseye">release notes in the Debian wiki</a>. Essentially, you should change the lines for <code>main</code>, <code>updates</code> and <code>security</code> to the following:</p>
<pre><code>deb http://deb.debian.org/debian bullseye main
deb http://security.debian.org/debian-security bullseye-security main
deb http://deb.debian.org/debian bullseye-updates main
</code></pre>
<p>Note that the <code>security</code> line changed slightly from previous releases. Since I had enabled the other repositories, and had to install nonfree WiFi drivers, I also added the following lines to my file:</p>
<pre><code>deb http://deb.debian.org/debian/ bullseye contrib non-free
deb-src http://deb.debian.org/debian/ bullseye contrib non-free
</code></pre>
<p>Save and close the file - now you're ready to perform surgery!</p>
<h2>Updating and installing new packages</h2>
<p>To make sure everything gets updated correctly and that nothing breaks mid-update, I have adopted a "strict mode" preparation before any large update of a distribution, which I first talked about in my <a href="https://tilde.town/~kzimmermann/articles/upgrading_freebsd_13release.html">upgrading to FreeBSD 13</a> article. I borrowed this method from the <a href="https://siduction.org/">Siduction</a> distribution, whose team recommends doing this every time you run a major update, and it has served me well.</p>
<p>First, close all your running programs, log out of the graphical session and log into the virtual console (TTY) with Ctrl+Alt+F1. Log in as root and drop to text-only mode with:</p>
<pre><code># init 3
</code></pre>
<p>This will kill any running X session, which could break the system if it were running through an update, but will keep networking and the like running. Only then, issue the update commands:</p>
<pre><code># apt-get update
# apt-get dist-upgrade
</code></pre>
<p><code>apt</code> will offer to upgrade all packages to the new distribution's ones. This is a sizeable download (mine being about 1.6GiB), and might take considerable time depending on your connection. As the files are unpacked and begin to be installed, pay attention to the changelogs (displayed once before any packages begin to be installed) and the configuration file changes (aren't you glad you backed them up before?). Since I had a rather simple barebones system here, I gladly accepted everything as per the new maintainers' versions and everything finished nice and easy. Once <code>apt</code> finishes its job, it's time to go ahead and reboot into your new system!</p>
<h2>Solve remaining issues after the reboot</h2>
<p>Congratulations, you're now running Debian Bullseye. Now comes the fun part: now that all the old and perfectly working packages are flushed, and you're in your shiny new system, it's time to resolve any outstanding bugs that remain after upgrade! Wait, did I really say "fun?" Perhaps there's a better word for it.</p>
<p>Though the installation went spotlessly, the one issue I found with Bullseye post-reboot was with the network. Though nothing happened to the drivers I had previously installed, it seems that Bullseye abandoned the <code>wicd</code> package that was used in Buster to manage WiFi connections. As a result, I found myself with no internet, and there wasn't even <code>wpa_supplicant</code> available to manage the connection manually <a href="https://tilde.town/~kzimmermann/articles/freebsd_desktop_part_2.html">like in a trick I previously used in FreeBSD</a>. Oh fie, guess that's what you should read changelogs for!</p>
<p>I hooked up the ethernet cable, put the interface up, and was ready to install some sort of connection manager. My go-to solution was <code>network-manager</code>, which has both a graphical (applet-like) interface and a text-based one, <code>nmtui</code>. I think it's also the default from Bullseye onwards? Regardless, from there on it was just a matter of detecting my network, authenticating, and I was back again!</p>
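<p>For the record, the recovery boiled down to something like this - the interface name and the AP credentials are placeholders (check <code>ip a</code> for yours):</p>

```shell
# Temporary connectivity over the cable
ip link set eth0 up
dhclient eth0

# Install a connection manager
apt-get update && apt-get install network-manager

# Reconnect to WiFi interactively...
nmtui
# ...or straight from the command line:
nmcli device wifi connect "MyAP" password "hunter2"
```
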
<p>Mind you, however, that this was a very simple update on a machine that didn't have many services running. Had this been a busy server with many different services running, the story could've been much more complicated.</p>
<h2>Conclusion: excellence and ease of use wrapped into one</h2>
<figure>
<img src="/~kzimmermann/images/chunky_bullseye.png" alt="Screenshot of my Debian Bullseye Desktop" />
<figcaption>
My final desktop after the update to Debian Bullseye from Buster. Makes me want to install it on more machines!
</figcaption>
</figure>

<p>Like always, Debian Bullseye offers a great experience to the user at no cost to usability, power or security. Perhaps not as fun as building my system from zero with a distribution like Arch or Alpine Linux, but it's very robust and you can get up to speed without lagging behind on system configuration. On the server, it maintains the tradition of stability and reliability offered by the Stable line of Debian. </p>
<p>Props to everyone working with the Debian Project for this great release! Perhaps it's time for me to start contributing and giving back to the community, after enjoying Debian for so long. I just wonder where and how.</p>
<p>Have you installed or updated to Debian Bullseye already? How easy or hard was that? Did you find any bugs? Let me know on <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
<hr />
<p>This post is number #26 of my <a href="https://100daystooffload.com/">#100DaysToOffload</a> project. Follow my progress through <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
            </div>
        </content>
    </entry>

    <entry>
        <title>Safely updating to Kernel 6.6.9 in Devuan Ceres</title>
        <link href="https://tilde.town/~kzimmermann/articles/updating_kernel_6.6.9_devuan.html" />
        <updated>2024-01-09T21:31:10.146909Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>Safely updating to Kernel 6.6.9 in Devuan Ceres</h1>
<p><img alt="Devuan Logo ripped in half" src="https://i.postimg.cc/Pqfxcny3/broken-devuan.png" /></p>
<p>Happy new year, fediverse! It seems that almost on cue, following my first edition of the <a href="/~kzimmermann/articles/state_of_the_distro_2023.html">State of the Distro</a>, my weapon of choice distro <a href="https://www.devuan.org/os/releases">Devuan Ceres</a> decided to break on me. Oh well, it happens. I had left on vacation for a few days and had not touched my computer, so as soon as I was back, I naturally wanted to update it to the cutting edge again, and my commands went to it almost by muscle memory.</p>
<p>This <code>apt upgrade</code>, however, did not go well. In particular, it seemed that Linux kernel 6.6.9 was choking the installation, to the point that <code>apt</code> wouldn't finish its job because it couldn't configure the new kernel's package. I kept getting error messages that went kind of like this:</p>
<pre><code>dpkg: error processing package linux-image-6.6.9-amd64 (--configure):
installed linux-image-6.6.9-amd64 package post-installation script subprocess returned error exit status 1
dpkg: dependency problems prevent configuration of linux-image-amd64:
linux-image-amd64 depends on linux-image-6.6.9-amd64 (= 6.6.9-1); however:
Package linux-image-6.6.9-amd64 is not configured yet.
dpkg: error processing package linux-image-amd64 (--configure):
dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of linux-headers-6.6.9-amd64:
linux-headers-6.6.9-amd64 depends on linux-image-6.6.9-amd64 (= 6.6.9-1) | linux-image-6.6.9-amd64-unsigned (= 6.6.9-1); however:
Package linux-image-6.6.9-amd64 is not configured yet.
Package linux-image-6.6.9-amd64-unsigned is not installed.
...
</code></pre>
<p>That sounded strange at first. What would prevent the kernel from being set up like this? Header dependencies?</p>
<p>I scratched my head for a while and eventually gave up. I headed straight to <code>#devuan</code> in Libera and asked around. First comment that came around was: is your installation <strong>usrmerged</strong>?</p>
<p><em>Say what?</em></p>
<p>Yup, <a href="https://wiki.debian.org/UsrMerge">usrmerged</a>. Apparently, the Debian devs had been pushing to move most core utilities away from the root level of the filesystem into <code>/usr/</code> instead; for programs that look things up in <code>/bin</code> or <code>/lib</code>, a symbolic link at the root level now points into the corresponding directory under <code>/usr</code>. I wasn't aware this was even going on, but it seems that from this kernel on, it's now the rule. This explained why I got so many firmware errors, and why some kernel modules didn't load either:</p>
<pre><code>update-initramfs: Generating /boot/initrd.img-6.6.8-amd64
W: Possible missing firmware /lib/firmware/rtl_nic/rtl8125b-2.fw for module r8169
W: Possible missing firmware /lib/firmware/rtl_nic/rtl8125a-3.fw for module r8169
W: Possible missing firmware /lib/firmware/rtl_nic/rtl8107e-2.fw for module r8169
...
</code></pre>
<p>No wonder, huh. All that firmware is now under <code>/usr/lib/</code>...</p>
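<p>If you want to check whether your own installation is usrmerged before anything breaks, looking at the top-level directories is enough - on a merged system they are symlinks into <code>/usr</code>. A small sketch:</p>

```shell
# On a merged system these show up as symlinks, e.g. /bin -> usr/bin;
# on an unmerged one they are plain directories.
ls -ld /bin /sbin /lib

# Scriptable check: -L is true if the path is a symbolic link
if [ -L /bin ]; then
    echo "usrmerged"
else
    echo "not merged yet"
fi
```
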
<p>So, OK, that's the problem. How do I fix it, then? Manually symlink everything from <code>/lib</code> to <code>/usr/lib</code>? While you could indeed do that (and I tried for a few easy binaries such as <code>modprobe</code>), I was informed that the Devuan repos include a convenient package from Debian itself designed to aid exactly in this process. It's aptly named <code>usrmerge</code>, and once you install it, its post-install hook attempts to do the merging (i.e. move everything from the root level into <code>/usr</code>, leaving symlinks behind) by itself.</p>
<p>Sounds good, right? Except that when I installed that package, it <em>also</em> failed, and I was left back at square one. At that point I decided to take a closer look at the error message log and figured out what was going on.</p>
<p>The post-transaction hook of <code>usrmerge</code> executes a Perl script at <code>/usr/lib/usrmerge/convert-usrmerge</code> whose job is to check and automate all the moving of libraries and executables to their final location, adding a symlink in the original place. One critical caveat, though: the script does not do merging, and will stop if the same file (not a link) already exists in both <code>/lib</code> and <code>/usr/lib</code>. This is what happened to me, but thankfully the fix was simple.</p>
<p>Taking SHA256 hashes of these duplicated files in both the <code>/</code> and <code>/usr</code> locations revealed what I suspected: they were identical. Meaning I could safely remove the ones under <code>/</code> and the script would fill the space with a link! After that, the script broke again on another duplicated file that, again, was identical in both locations. I repeated the previous steps for all of those duplicates and in the end, it was all clear!</p>
<pre><code># /usr/lib/usrmerge/convert-usrmerge
The system has been successfully converted.
</code></pre>
<p>Could that have been it? Absolutely. Case in point, I ran another <code>apt upgrade</code> just after it and it ran smoothly, kernel 6.6.9 installed and not a single error thrown. After a reboot, the new kernel loaded smoothly, all my drivers were loaded and my desktop was restored to normality. We can keep rolling again!</p>
<h2>Conclusion</h2>
<p>In summary, if you're stuck in Devuan Ceres unable to update to kernel 6.6.9, this is the workflow I used to solve it:</p>
<ol>
<li>Install <code>usrmerge</code>.</li>
<li>Watch the installation fail (yes, it will fail. That's OK).</li>
<li>Run <code>/usr/lib/usrmerge/convert-usrmerge</code> manually (as root) and read the error message to find out the problematic files that are duplicated between <code>/lib</code> and <code>/usr/lib</code>.</li>
<li>Compare the two files with <code>sha256sum</code> or similar hash function.</li>
<li>If they are the same, rm the duplicate from the <code>/lib/</code> prefix (not <code>/usr/lib</code>).</li>
<li>Re-run <code>convert-usrmerge</code> and watch it fail again, indicating another duplicated file found.</li>
<li>Repeat steps 4 to 6 until everything is clean and you have no errors, just the message <em>"The system has been successfully converted."</em></li>
<li>Update your system with <code>apt</code> normally. If the installation of the kernel runs smoothly, reboot afterwards.</li>
</ol>
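<p>Steps 4 to 6 can be wrapped in a small snippet so you don't fat-finger an <code>rm</code>. The firmware path below is just an example taken from the log earlier - substitute whatever file <code>convert-usrmerge</code> actually complains about:</p>

```shell
# Example duplicate reported by convert-usrmerge
dup=/lib/firmware/rtl_nic/rtl8125b-2.fw

# Compare the two copies before touching anything
a=$(sha256sum "$dup" | cut -d' ' -f1)
b=$(sha256sum "/usr$dup" | cut -d' ' -f1)

if [ "$a" = "$b" ]; then
    rm "$dup"    # the converter will drop a symlink in its place
    /usr/lib/usrmerge/convert-usrmerge
else
    echo "copies differ - investigate before deleting!" >&2
fi
```

<p>The hash comparison is the safety net here: if the two copies ever differ, you want to stop and look, not delete.</p>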
<p>If you're <a href="/~kzimmermann/articles/running_debian_sid.html">"installing" Devuan Ceres</a> from a stable release, you may also have to do these steps before upgrading it to ensure that your entire update goes well.</p>
<p>All things considered, there's a reason the distribution is labelled Unstable. These small things happen, but thankfully we can work around them, and I have yet to suffer an irrecoverable error while running this OS. I guess that doing these small fixes - and learning from them - is one of the charms of using Linux <code>;)</code></p>
<hr />
<p>Did kernel 6.6.9 break your Devuan install or in another distro? What did you do to solve it? Let me know on <a href="https://fosstodon.org/@kzimmermann">Mastodon</a></p>
<hr />
<p>This post is number #50 of my <a href="https://100daystooffload.com/">#100DaysToOffload</a> project. Finally, the 50% mark!</p>
            </div>
        </content>
    </entry>

    <entry>
        <title>Upgrading to FreeBSD 13.0-RELEASE</title>
        <link href="https://tilde.town/~kzimmermann/articles/upgrading_freebsd_13release.html" />
        <updated>2021-04-23T08:29:03.017379Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>Upgrading to FreeBSD 13.0-RELEASE</h1>
<p>Recently the FreeBSD project announced the <a href="https://www.freebsd.org/news/newsflash/#2021-04-13:0">official release of FreeBSD 13.0-RELEASE</a>, which I found very exciting given my <a href="https://tilde.town/~kzimmermann/articles/freebsd_desktop_part_2.html">recent experience with it on the desktop</a>. Upgrading a full OS has always made me a little anxious as to what could break, so as excited as it made me, I was also a little apprehensive to try it out.</p>
<p>Luckily, I had a few "tools" available to soften the process, and FreeBSD's great documentation again provided a lot of good information, so I can say the transition went smoothly. I'd say it's as easy as upgrading a Debian install, perhaps even easier, though it does take some time - perhaps more than some Linux distros. This post outlines how the process goes and my experience with trying it cold turkey.</p>
<p>Let's go!</p>
<h2>Which release do you choose?</h2>
<p>One of the things that stand out in FreeBSD's release system is its rather unique naming method: besides the version number (13.0 being the most recent as of this writing), you must choose among three branches: <code>RELEASE</code>, <code>STABLE</code> and <code>CURRENT</code>.</p>
<p>This was something that confused me a little initially as well, but upon some searching (I believe this is somewhere in the Handbook as well), it's actually similar to the way Debian does its releases. Without going into too much technical detail, these are the main differences between the three:</p>
<ul>
<li>RELEASE is the extensively-tested, super-stable branch of FreeBSD. Updates are rare, usually reserved for security patches and crucial fixes, but it is the most stable of the three. It's sort of analogous to Debian Stable, and I hear it's the version recommended for servers.</li>
<li>STABLE is the fresh, tested branch of FreeBSD. Although less tested than RELEASE, software here gets updated more frequently, a step closer to up-to-date stuff, and it is suitable for daily use as your desktop OS or server alike.</li>
<li>CURRENT is analogous to Debian Sid (Unstable): the "cutting edge" of software in the FreeBSD world. It receives <em>some</em> testing, but its focus is the latest software in FreeBSD. I've never used it, but have read elsewhere that this branch is more tested than Debian Sid and can be used as a main OS if its caveats are accepted.</li>
</ul>
<p>The Handbook, as usual, has <a href="https://docs.freebsd.org/en_US.ISO8859-1/books/handbook/current-stable.html">a very detailed explanation of the development branches of FreeBSD</a>.</p>
<h2>Procedure overview</h2>
<p>The FreeBSD project has created a tool called <code>freebsd-update</code>, which greatly simplifies the entire upgrade process, especially if you only use the <code>pkg</code> tool to manage binary packages for additional software. There are also source-based upgrades, but as I never use the ports system save for very specific exceptions, I didn't explore that path.</p>
<p>Using this method, the upgrading process becomes very simple, and actually resembles the way Debian is upgraded between two Stable releases. The overall process is as follows:</p>
<ol>
<li>Point the <code>freebsd-update</code> tool at the new release and fetch the updates.</li>
<li>Install the updates to the base system.</li>
<li>Reboot into the updated base system.</li>
<li>Update the rest of the system.</li>
</ol>
<p>The total time for me was about 20 minutes, not counting the reboot and command entry, but my system was also fairly simple, so your mileage may vary.</p>
<p>Let's see each step of the process.</p>
<h2>Warning: make sure you have enough disk space!</h2>
<p>Probably a no-brainer for most people, but <em>make sure you have a few gigabytes of space available on your system before attempting to upgrade</em>. This is because the <code>freebsd-update</code> tool will cache the updates before performing the upgrade, and these can get quite large, roughly the equivalent of downloading a new FreeBSD ISO off the internet.</p>
<p>Chances are your computer will have plenty of space to accommodate this, but when I first tried this on a VM, the space I had reserved for it was too small to house these updates, <strong>and FreeBSD did not warn me about it</strong>. As a result, the upgrade process stalled right after fetching the files (disk usage went up to 106%, go figure), and the system became unusable.</p>
<p>Lesson learned the hard way: make sure you have more than just a few GB of disk space left before running the update.</p>
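<p>If you want to check programmatically, here's a quick sketch in Python for illustration. Note that the cache location (<code>/var/db/freebsd-update</code>, the tool's default <code>WorkDir</code>) is where the headroom is needed, and the 4 GiB threshold is my own rule of thumb, not an official figure:</p>
<pre><code>import shutil

# freebsd-update caches the fetched release files under its WorkDir
# (/var/db/freebsd-update by default), so the filesystem holding
# /var/db needs the headroom. 4 GiB is an illustrative safety margin.
def enough_space(path="/", need_gib=4):
    free_gib = shutil.disk_usage(path).free / 2**30
    print("%.1f GiB free on %s" % (free_gib, path))
    return free_gib >= need_gib

enough_space("/")
</code></pre>
<p>If this prints a number uncomfortably close to the threshold, grow the disk (or clean it up) before fetching anything.</p>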
<h2>Run the show</h2>
<p>Got enough space and an internet connection? Good. Let's get going:</p>
<h3>Prepare the system</h3>
<p>In general, to avoid conflicts and things breaking unrecoverably from one update to another, I tend to bring the system down to the bare minimum: the <a href="https://tilde.town/~kzimmermann/articles/living_in_linux_terminal.html">console session</a>.</p>
<p>I recommend a full reboot to clear out any unnecessary processes and start afresh, but if you'd rather not, quit your window manager, drop down to the shell, and make sure there are no other sessions running on the other TTYs. Now that you're down to the barebones of the system, it's time to get busy.</p>
<h3>Update the base</h3>
<p>Log in as <code>root</code> and point the <code>freebsd-update</code> tool to the desired release, like this:</p>
<pre><code>freebsd-update -r &lt;release version&gt; upgrade
</code></pre>
<p>Since I'm updating from 12.2-RELEASE to 13.0-RELEASE, my command is:</p>
<pre><code>freebsd-update -r 13.0-RELEASE upgrade
</code></pre>
<p><code>freebsd-update</code> fetches the release files and caches them locally. Should any discrepancy arise between your local configuration files and the updated ones, the tool will warn you and ask you to resolve it. Once everything is fetched, run <code>freebsd-update install</code> to replace each component of the base system with its newer version.</p>
<p>This is the step that takes the most time due to the download size, but a progress counter shows how many files have been fetched, giving you a rough idea of how long is left. After the base system has been updated, you'll be prompted to reboot.</p>
<h3>Update the rest of the system</h3>
<p>Once you reboot, you're actually only halfway done: there's still the rest of the system to update. It turns out that the <code>freebsd-update</code> tool still has files cached locally that must be installed after the reboot, so now is the time for it:</p>
<pre><code>freebsd-update install
</code></pre>
<p>The remaining updates will be installed. Finally, for good measure, upgrade your third-party packages with <code>pkg</code>:</p>
<pre><code>pkg upgrade
</code></pre>
<p>After this, reboot and voila: you're running the most recent version of FreeBSD.</p>
<h2>Conclusion</h2>
<p>Upgrading FreeBSD to another release is pretty easy when you manage it on a binary-package basis. The commands are:</p>
<pre>
freebsd-update -r 13.0-RELEASE upgrade
freebsd-update install
reboot
freebsd-update install
pkg upgrade
reboot
</pre>

<p>And that should be it; at least I've had no issues so far. If you're using a source-based installation or ports, however, I'm not sure how the upgrade goes. Some comments on IRC spoke of upgrades taking hours or even days to complete, or of people upgrading their "repository servers" first before upgrading their main machines. Perhaps that's something to revisit on my next upgrade. In the meantime, <a href="https://docs.freebsd.org/en_US.ISO8859-1/books/handbook/makeworld.html">source updating</a> is also covered in the Handbook for everyone who's interested.</p>
<hr />
<p>Have you recently upgraded to FreeBSD 13.0-RELEASE? How did the process go? Let me know on <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
<p><strong>UPDATE:</strong> whoops! Seems that I confused the meaning of <code>STABLE</code> and <code>RELEASE</code>; it's actually the <a href="https://docs.freebsd.org/en_US.ISO8859-1/books/handbook/current-stable.html">other way around</a>. <code>STABLE</code> is akin to Debian Testing, not quite bleeding edge but still being tested. When all testing is clear, things move on to be <code>RELEASE</code>d. Thanks to <a href="@mpts@mastodon.social">0mp</a> for pointing that out.</p>
<hr />
<p>This post is number #14 of my <a href="https://100daystooffload.com/">#100DaysToOffload</a> project. Follow my progress through <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
            </div>
        </content>
    </entry>

    <entry>
        <title>Using TOTP Two-factor Authentication like a pro</title>
        <link href="https://tilde.town/~kzimmermann/articles/using_totp_like_a_pro.html" />
        <updated>2023-11-19T22:27:37.000407Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>Using TOTP Two-factor Authentication like a pro</h1>
<p><img alt="KeepassXC logo" src="https://i.pinimg.com/originals/9f/6a/5f/9f6a5f13fa6aad9ea8b9ce2b604ef752.png" /></p>
<p>I'll be the first to admit that I'm constantly learning new things about my computing these days, be they basic or advanced. I learn of simple things that make me go "ohh snap! Why didn't I think of it?" and of power-user-level stuff that changes my workflows completely. This is how my knowledge and experience of using a computer grows and matures - free software on it or not.</p>
<p>Thus, despite thinking that I had <code>1337 sk1LLz</code> concerning password and credential management, I was quite surprised to learn just a few months ago that you can set up two-factor authentication in the software I least expected: <strong>Keepass</strong>.</p>
<p>If you've learned how to use a <a href="https://en.wikipedia.org/wiki/Password_manager">password manager</a> before, chances are that you already use Keepass or have heard of it. It was the first one I tried, about six years or so ago, and it remains the one I use today, but until very recently, I only used it to store passwords.</p>
<p>Sounds like a familiar situation? Then allow me to show you how you can take it up a notch, this time with your (time-based) 2FA.</p>
<h2>WTF is TOTP 2FA?</h2>
<p>The vast majority of two-factor authentication methods for online services (often called "tokens" by banks and financial services) are based on the <a href="https://en.wikipedia.org/wiki/Time-based_one-time_password">Time-based One-time password</a> method. I'm sure you've seen them before: six-digit codes shown in plain sight that change every 30 or 60 seconds. Enter one correctly after your password and boom - you're in!</p>
<p>The first time I saw this in action, it had an almost magical aura to it. The bank or authenticator app generated a code and the website magically guessed it correctly, as though it had read my mind. And I also thought it was specific to those apps. Surely, only my bank's app could generate that magical, unique code, right? </p>
<p>Wrong! The algorithm for this is described in detail in <a href="https://datatracker.ietf.org/doc/html/rfc6238">RFC 6238 of the Internet Engineering Task Force</a>, and anyone can implement it. In a <em>very</em> simplified explanation, it goes like this: a "hashed" message (more accurately, an HMAC) is produced from a secret value known to two parties, say, a server and a client. To prevent a message from being re-used in the future by an attacker, the two parties mix the current time into their calculations, so that they can produce different messages in the future and still validate each other's messages.</p>
<p>The strength of this is that the actual messages (the six-digit codes) are ephemeral; they become useless after a minute or so. This means you can combine them with something more secret, like a password, and have two independent layers of authentication to match. The really important part of TOTP is the permanent <em>secret seed</em>, which is the only information actually required to derive the correct codes on both the client and the server.</p>
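<p>To make this concrete, here is a minimal sketch of the algorithm in Python, using only the standard library. It follows RFC 6238 with the common defaults (HMAC-SHA1, 30-second steps, six digits); the function name and parameters are mine, not from any particular library:</p>
<pre><code>import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, now=None, period=30, digits=6):
    """Derive an RFC 6238 code from a base32 seed and the current time."""
    # Decode the base32 seed (websites often hand it out unpadded).
    pad = "=" * (-len(secret_b32) % 8)
    key = base64.b32decode(secret_b32.upper() + pad)
    # The "message" is simply the number of time steps since the epoch.
    counter = int((time.time() if now is None else now) // period)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): take 4 bytes at an offset given by
    # the low nibble of the last byte, drop the sign bit, keep N digits.
    offset = digest[-1] % 16
    value = struct.unpack(">I", digest[offset:offset + 4])[0] % 2**31
    return str(value % 10**digits).zfill(digits)

# RFC 6238's test seed at Unix time 59 gives a known answer:
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", now=59))  # 287082
</code></pre>
<p>Any server and client that share the seed and agree on the time will compute the same six digits - that's all the "magic" there is to it.</p>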
<p>And because the algorithm is public, lots of Free Software implementations exist for it. Which takes me to the next point.</p>
<h2>Adding TOTP to KeePassXC</h2>
<p><a href="https://keepassxc.org/">KeePassXC</a> is the latest implementation of Keepass, and it includes full support for TOTP tokens. The way you set them up, though, is not very obvious, and I hadn't discovered it until recently. Here's the step-by-step.</p>
<h3>Before you begin: set your computer's time correctly!</h3>
<p>As you may have figured out from the description of the algorithm, a TOTP's value depends on the (computer) time used as input to the function. In other words, you and the server authenticating you must have <em>the same time accurately set</em> for this to work correctly - or at least that's what the server on the other side will assume of your computer. Come in with the wrong time set and guess what - future authentications may fail at the most random moments!</p>
<p>Since the time should be set very accurately, do not rely on looking at the wall clock in your house and using the <code>date</code> command. Instead, use something like the NTP protocol to sync your machine accurately to the second. Keeping it in sync with a daemon like <code>ntpd</code> afterwards is highly recommended, too.</p>
<h3>Step 1: create a password entry if you haven't yet</h3>
<p>You must have a password entry already created in order to add the TOTP to it. If you have used Keepass to manage your passwords before, chances are you already have several entries lying around. If not, click the plus-sign button to add a new entry:</p>
<p><img alt="Keepass howto" src="/~kzimmermann/images/keepass1.png" /></p>
<p>Add your username and password (bonus points for using the random password generator in the dice icon instead of coming up with one yourself) and click OK.</p>
<p><img alt="Keepass howto" src="/~kzimmermann/images/keepass2.png" /></p>
<p>Your entry is created. Note that nowhere during this process could you add a TOTP to your entry!</p>
<h3>Step 2: set up TOTP</h3>
<p>Now that you have an entry created, you can set up a unique TOTP for it. To do so, right-click your entry and select <code>Set up TOTP...</code></p>
<p><img alt="Keepass howto" src="/~kzimmermann/images/keepass3.png" /></p>
<p>A subwindow opens asking for the "Secret Key." You can get this key by going to the platform you want and setting up two-factor authentication. At that point, lots of online services offer you a QR code, intended to be read by a smartphone "authenticator" application. Here's the thing, though: you can receive that code in plain text, too. Usually you'll have to click below it for an option like "I'd like to use a code instead" or something similar. This will give you the contents of that QR code as a string of base32-encoded characters (like <code>WBVAZ2NHOK3UGRC3</code>) that you can enter in that KeepassXC window.</p>
<p><img alt="Keepass howto" src="/~kzimmermann/images/keepass4.png" /></p>
<h3>Step 3: use your new TOTP</h3>
<p>Your TOTP is now set. To use it, either click the clock icon in your entry to reveal the six-digit code, or copy it to your clipboard with <code>Ctrl+T</code> so you can paste it in the 2FA field directly.</p>
<p><img alt="Keepass howto" src="/~kzimmermann/images/keepass5.png" /></p>
<h3>Step 4: back everything up!</h3>
<p>Wow, that was easy, right? While you can rejoice at no longer needing your smartphone to log into 2FA-protected services, remember one thing: your TOTP secret is saved in your KeepassXC file. If you lose this file, guess what? <em>You lose your TOTP and cannot log in anymore!</em></p>
<p>Thus, this is what you should do immediately after creating a TOTP entry: <em>back it up</em>. <a href="/~kzimmermann/articles/project_128.html">Go do it</a>. Now. It can be as simple as copying it to a USB drive, or emailing yourself the file (you <em>did</em> set up a good main password, right?). However you choose to do it, back it up and remember to do another one every time you add new credentials to it.</p>
<h2>Security considerations</h2>
<p>With all that said and done, and the tremendous convenience gained from this maneuver, there is still a lot of controversy in security circles concerning it. Specifically, it's questioned whether this still counts as <em>two</em>-factor authentication.</p>
<p>The main complaint is this: multi-factor authentication was supposed to be strengthened by the combination of a password ("something you know") with an authentication means stored separately from it ("something you have"). By storing all these secrets in the same database, at first glance you're defeating this very concept, reducing two-factor authentication to one-factor. However, I'm not too convinced of this.</p>
<p>For starters, the aforementioned model makes an implicit assumption that you must "know" your password - that is, you store it in your memory. The very act of using a password manager, however, changes this dynamic. You use one main password that in turn unlocks the place where actual passwords are supplied to each one of the services you use. In a sense, if you do this for every service you use online, you don't actually know <em>any</em> of your passwords at all!</p>
<p>Viewed this way, I'd argue that the password manager itself is a sort of two-factor protection mechanism; it requires one secret to access those credentials and, if you use an offline password manager like Keepass, you <em>also</em> need access to that file where those secrets are stored. Unlike the online model (where the service is basically exposed to everyone on the internet), just accessing that file on your computer is already orders of magnitude more complicated. This makes it much like that "something you know plus something you have" model.</p>
<p>Additionally, if your threat model requires such an extreme level of compartmentalization that you cannot afford to have TOTPs stored in the same location as the rest of the credentials, nothing stops you from having two different keepass databases, one for passwords and the other for TOTPs. Need even more compartmentalization? Place the TOTPs database in an external USB drive, or require an extra key file stored externally. You get the point.</p>
<h2>Conclusion - your Freedom thanks you</h2>
<p>The bottom line is that doing TOTPs this way is much better for <a href="/~kzimmermann/articles/dontlikeitcreateit.html">Software Freedom</a> than using whatever "Authenticator" "app" some corporate platform requires, not to mention much more convenient as well. It also makes it much, much easier to back up those TOTPs (if all you've got is a smartphone, losing it basically means losing your life) and to use them on other devices (why <em>must</em> you have a phone to be able to work? What if you couldn't afford one?).</p>
<p>TOTPs aren't magical in the least, no matter how much "app" marketing wants you to believe it, and thankfully there are a myriad of ways of doing them with Free Software. Here I presented what I use the most, but you can always choose other free software programs, or even apps like <a href="https://freeotp.github.io/">FreeOTP+</a> if you absolutely must use a phone.</p>
<p>But please, oh please, do not use a proprietary "Authenticator app..."</p>
<hr />
<p>Do you use TOTP on your Password manager? Do you think doing so has security issues? What would you use instead? Let me know on <a href="https://fosstodon.org/@kzimmermann">Mastodon</a>!</p>
<hr />
<p>This post is number #47 of my <a href="https://100daystooffload.com/">#100DaysToOffload</a> project.</p>
            </div>
        </content>
    </entry>

    <entry>
        <title>When you use a walled garden, the gardeners are your wardens</title>
        <link href="https://tilde.town/~kzimmermann/articles/walled_garden_problems.html" />
        <updated>2020-10-13T07:36:31.829644Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>When you use a walled garden, the gardeners are your wardens</h1>
<p>There was a rather infuriating piece of news recently in <a href="https://www.theverge.com/2020/10/8/21506995/apple-forced-in-app-purchase-protonmail-ceo-wordpress-iap">The Verge</a>: Apple had forced the Protonmail developers to include in-app purchases in their free (both in price and as-in-freedom) app, or otherwise be prevented from updating it in the App Store. And when the developers motioned to at least warn their users via email, Apple threatened to remove the app completely.</p>
<p>As infuriating as this story is, if you've been following the free software philosophy and its concerns with everything non-free, there's nothing at all new at stake here. The App Store is just another example of a digital <a href="https://en.wikipedia.org/wiki/Walled_garden">Walled Garden</a>: a platform that <em>looks</em> open, friendly and inviting for everyone to try, but in reality tightly controls everything that happens inside of it. Oftentimes, these controls are subtle and silent - but feel extremely scary once they're applied to you.</p>
<p>The Protonmail team probably knew the risks of delegating distribution to a third party in the first place, but like everyone else there, they have no say or rebuttal against Apple's policies because, after all, Apple is the ultimate owner of the platform and sets the rules. The Google Play Store? Yeah, they would probably do the same thing.</p>
<p>Unfortunately, almost everything on the web today consists of some sort of walled garden - exclusivity, after all, can be a very powerful bait in human society. Outside of the "app stores," the top websites in the world are all walled gardens too: Facebook, Twitter, Pinterest - pretty much every large website and aggregator on the web does it. And users are not even aware of this, since they can always create as many accounts as needed there <a href="https://higheredanxiety.files.wordpress.com/2012/05/fremium-model.jpg">for "free."</a></p>
<p>This is the best explanation as to why walled gardens enjoy such huge popularity, and why almost everyone is blind to their problems. As privacy- and freedom-conscious users, there is one best thing we can do against them: <strong>avoid them</strong>.</p>
<h2>Workarounds and alternatives</h2>
<p>The best alternative is, no doubt, using services that are not walled gardens in the first place. When content has to be hosted somewhere, opt for things that can <em>federate</em> - that is, be accessible across multiple other platforms instead of only a single site. A great example of federation is email: different platforms can send and receive emails to each other transparently.</p>
<p>And in 2020, this concept has expanded beautifully, with federated social networks, messaging platforms, audio and video calling and file sharing platforms easily available and easy to use. <strong>Federation takes the walled garden concept and flips it around, keeping freedom first.</strong> <a href="https://tilde.town/~kzimmermann/articles/dontlikeitcreateit.html">Don't like the rules of given platform? Host your own!</a></p>
<p>The choice of hardware you're using matters, however, and as a general rule, mobile devices are more limited in hosting options (because their OSes are generally limited in terms of freedom), but you can still have some choice. If you use Android, there is a federated app store: <a href="https://f-droid.org">F-Droid</a>. Currently not many repositories besides the main F-Droid one are available, but the developers make it clear: <em>anyone</em> can host an F-Droid repo and make software available for other users. Now whether or not the Protonmail devs <a href="https://f-droid.org/forums/topic/protonmail/">will actually publish there</a> is anyone's guess.</p>
<p>Still, this episode illustrates well the dangers a silent walled garden poses to a culture that embraces and breathes freedom. It is not <a href="https://www.theverge.com/2020/10/9/21492334/epic-fortnite-apple-lawsuit-restraining-order-unreal-engine">the first</a> time Apple or another "gardener" has made such moves and threats against a software developer, and it probably won't be the last. Prevention is the best remedy - and avoidance is the best prevention.</p>
<hr />
<p>Update: yikes, looks like it has happened in googleland as well - <a href="https://twitter.com/obra/status/1303442579107831809">K-9 Mail removed from Google Play Store due to 'ambiguous' description</a>.</p>
<p>Good thing I never use the Play store, and get all my software from F-Droid anyway.</p>
            </div>
        </content>
    </entry>

    <entry>
        <title>For a want of Boot, or: Secure Boot sucks, period.</title>
        <link href="https://tilde.town/~kzimmermann/articles/want_of_boot.html" />
        <updated>2022-10-12T19:07:08.099432Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>For a want of Boot, or: Secure Boot sucks, period.</h1>
<p>Recently, I decided to level up a little in my game of running Linux from USB on a computer with Windows, by buying a full-fledged external SSD instead of simple flash drives. I am no stranger to carrying an OS in my USB drive, as I'm a huge fan of live-medium distros like <a href="https://tilde.town/~kzimmermann/articles/rediscovering_puppy_linux_raspup.html">Puppy Linux</a>, and more recently decided to step things up a notch by carrying a frugal install of <a href="https://tilde.town/~kzimmermann/articles/alpine_linux_desktop.html">Alpine Linux</a> in a USB drive.</p>
<p>This time, however, I decided to go much deeper: instead of a fragile flash drive, I'd be going with a full external drive, complete with all of the bells and whistles of a frugal installation in an internal hard drive - only it'd be booted via USB. I'm talking complete persistence, full disk encryption, full-fledged session management (sleep, hibernation, etc), and everything that goes together with your standard PC usage. </p>
<p>Except for one detail: the moment the machine powers down and I unplug the USB drive, it would transparently fall back to the internal, Windows-run hard disk. Or the converse: hibernate on internal, wake up on external - transparently. Two OSes, two mediums, two worlds not intersecting save for the user.</p>
<p>This achievement, call it a <em>computing utopia</em> of sorts, turned out instead to be a real nightmare, more akin to a war of attrition. After about a month of real effort, I gave up on the utopia part of the mission and chose to go practical instead.</p>
<p>This post details the frustration of this ultimately pointless endeavor, so that hopefully in the future something gets updated here or there that solves it and we can try again. Or maybe so that others reading this account will learn <em>not</em> to try this at home.</p>
<p><strong>TL;DR:</strong> either switch off goddamn Secure Boot every time you want to run Linux from the USB port or run Fedora or Ubuntu from the external disk - but don't expect <em>any</em> support of disk session persistence in the form of hibernation.</p>
<h2>The background, or: my utopic vision</h2>
<p>The motivation to achieve this utopia was pretty simple: my work issued me a laptop with Windows 10, which I later found to come with an unlocked BIOS (thanks, Hacker mindset!) that allowed me to fiddle with the boot and security settings, and switch off things like Secure Boot.</p>
<p>My hacker instinct immediately homed in on this discovery, and I proceeded to test it out by booting my favorite live media from USB. Just as I expected, after I turned off Secure Boot, Linux booted fine from it. I was able to run live CDs of Ubuntu and Puppy Linux on it, and eventually settled for a semi-permanent solution using a frugal installation of <a href="https://tilde.town/~kzimmermann/articles/alpine_linux_desktop.html">Alpine Linux</a> directly on a USB drive.</p>
<p>For lightweight usage, just some light browsing and little session persistence, this turned out to work well. With time, though, the process of rebooting, changing BIOS settings, and choosing to boot from USB media (and the reverse when going back to Windows) every time started to become tiresome. Thus I started to look for something a little more encompassing.</p>
<p>And then the friction started.</p>
<h2>Friction, friction</h2>
<p>Given the several differences in requirements between Windows and Free Software OSes, I ran into several levels of friction in this endeavour. Among hibernation problems, hwclock overriding, session persistence, and some Windows session woes, the undisputed biggest problem was no doubt <strong>Secure Boot</strong>. But let's step through them one by one, in roughly increasing order of difficulty.</p>
<h3>Hardware clock differences</h3>
<p>The first and most obvious difference when switching between Windows and Linux on boot is the hardware (CMOS) clock. Whereas Linux traditionally sets it to UTC and calculates from there the current time in the timezone set under <code>/etc/timezone</code>, Windows always sets it to local time.</p>
<p>This means that when you switch between the two, depending on how much "knowledge" your distro has of this difference, you will likely suffer some skew between them (unless you live in the UTC+0 timezone). Making matters worse, oftentimes a daemon like <code>ntpd</code> will "correct" the damage by sourcing the appropriate time online and then "conveniently" adjusting the CMOS clock to UTC on your behalf without asking. As a result, when you end your session on Linux and go back to Windows, your time there will be skewed again and - unless you have administrator access - you can't change it! Gotta reboot, go to the BIOS and change it from there.</p>
<p>Thankfully, some noob-friendly distros like Fedora or Ubuntu recognize out of the box that the CMOS clock may already be set to local time by an existing Windows installation, and leave it as it is. Alpine Linux, however, doesn't, so you have to make some adjustments in the OpenRC startup configuration (the hardware clock service reads <code>/etc/conf.d/hwclock</code>) to explicitly state that the CMOS clock is set to local time instead of UTC.</p>
<p>This isn't too bad, though. One file edited and your clock skew problems are solved forever. The other problems, on the other hand...</p>
<h3>Who's got them keys?</h3>
<p>Enter <strong>Secure Boot</strong>, stage center: the prima donna of Microsoft's "security" concerns. Believe it or not, for the longest time I didn't actually think it was <em>that</em> much of a problem. Like, sure, <a href="https://tilde.town/~kzimmermann/updates/20210610_1302.html">it severely limits what OSes you can run</a>. Sure, the ones that work come only with systemd. But even then, the combination of Secure Boot with Linux can work decently enough for a casual computer user.</p>
<p>Enter this very project and now the whole thing is a downright disaster.</p>
<p>I don't want to carry something as large as Ubuntu or Fedora with me on an external-boot drive. I'm already limited by the bandwidth of USB and the disk space of the medium; let me use something lighter and faster, like Alpine. Actually, to hell with "lighter and faster" - let me use whatever OS I want; this is true freedom! But woe to you if you want to boot <em>some</em> OSes with Secure Boot on.</p>
<p>The big TL;DR here is that you can't boot any OS whose kernel (or bootloader) hasn't been signed with a set of Secure Boot-approved keys, and this excludes the vast majority of Free Software OSes. So, what can you do?</p>
<p>The quick (and, in hindsight, best and only) solution is to downright disable Secure Boot and then run whatever OS you want. But keeping faith in my project, I naively believed that there was still a way for the two to coexist: Windows and Linux booting and working under the same Secure Boot environment.</p>
<p>If you want to go that way, a few large, enterprise-backed Linux projects paid the piper and ended up with a package known as <code>shim</code>, which essentially allows the Linux OS to boot even under the Secure Boot environment. Unfortunately, this limits the choice to essentially three distributions: Fedora, OpenSUSE or Ubuntu (and no, Ubuntu derivatives such as Xubuntu or Mint don't use <code>shim</code>).</p>
<p>I chose Fedora, both because of the <code>shim</code> support and because I was long overdue to try it out. State-of-the-art software? Support for revolutionary initiatives like Pipewire and Wayland? Sign me up, pal.</p>
<p>But then...</p>
<h3>Can't hibernate in peace</h3>
<p>Even though Fedora was outstanding in terms of usage and software availability, there was one key thing missing: a way to persist sessions across power downs (when I would unplug the external drive and go back to booting Windows). In other words, how do we hibernate the session to disk?</p>
<p>Fedora unfortunately failed me here because, together with its bleeding-edge awesomeness, it also comes with an interesting quirk: it uses ZRAM for swap instead of disk space. Sure, ZRAM might be better in terms of performance, but the problem is that <a href="https://www.ctrl.blog/entry/fedora-hibernate.html">you can't hibernate RAM into itself!</a></p>
<p>Alas, Fedora is out, which narrows the choice down to Ubuntu only. Re-install the OS and we should be good to go. Let's check:</p>
<ul>
<li>Swap partition created (only 2GB? Hmmm)</li>
<li>Swap file can be created and activated to house remainder of the session.</li>
</ul>
<p>Guess we're good to go at last! So let's do some work, load some stuff into memory, and put the machine to hibernation.</p>
<pre><code>$ loginctl hibernate
hibernate verb not supported.
</code></pre>
<p>Ok, this is a systemd machine after all.</p>
<pre><code>$ systemctl hibernate
hibernate verb not supported.
</code></pre>
<p>What the hell? Searching around a little, I found a suggestion to use <code>journalctl</code> instead to inspect the logs. Quirks of systemd, maybe? But let's see.</p>
<p>Alas, this is what is going on in the backstage (output of <code>journalctl</code>), with some emphasis added by me:</p>
<p>Oct 07 18:47:10 zoomerboi su[4167]: pam_unix(su-l:session): session opened for user root(uid=0) by vman(uid=1000)
Oct 07 18:47:14 zoomerboi kernel: Lockdown: systemd-logind: hibernation is restricted; <strong>see man kernel_lockdown.7</strong></p>
<p>Ok, getting closer. In fact, why not follow the log's advice and RTFM? See, after finding the <a href="https://man7.org/linux/man-pages/man7/kernel_lockdown.7.html">manual page on kernel lockdown</a>, we at last reach the crux of the matter:</p>
<blockquote>
<p>On an EFI-enabled x86 or arm64 machine, lockdown will be automatically enabled if the system boots in EFI Secure Boot mode.</p>
</blockquote>
<p>And just like that, everything becomes crystal clear. In other words, <strong>when you run with Secure Boot enabled, you can't hibernate to a swap partition that hasn't been signed for it.</strong></p>
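If you want to verify this on your own machine before flipping any tables, two read-only checks tell the whole story (a sketch; <code>mokutil</code> ships alongside the <code>shim</code> stack, so it may be absent on non-shim distros):

```shell
# The active lockdown level is shown in [brackets]; "none" means no lockdown
cat /sys/kernel/security/lockdown

# Query the firmware's Secure Boot state
mokutil --sb-state
```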
<p>Ok.</p>
<p>Hold up.</p>
<p>Calm down.</p>
<p>Deep breath.</p>
<p>Count until 10.</p>
<p>(...)</p>
<p>Nah, fuck it. It's a lost cause.</p>
<p><img alt="Flipping a table out of rage" src="https://tilde.town/~kzimmermann/images/table_flip.jpg" /></p>
<h2>Conclusion</h2>
<p>Fuck Secure Boot. </p>
<p>In short, if you want to use your computer in a happy and joyful manner, do yourself and your computer a favor and disable Secure Boot, never to switch it back on again. And then use your computer however you want. Otherwise, use either Fedora or Ubuntu with this restriction - but don't expect to have any sort of hibernation persistence there.</p>
<p>No ambitious software project is worth my sanity, so for now I'll do what works for me and switch Secure Boot on and off manually depending on what OS I'll use and be happy at that.</p>
<p>Some security, huh. All for a want of boot.</p>
<p><img alt="fuck secure boot" src="https://tilde.town/~kzimmermann/images/fsecureboot.jpg" /></p>
<hr />
<p>Have you ever dual booted Linux and Windows in a machine that has Secure Boot enabled? How was the experience and the overall usability of it? Let me know in <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
<hr />
<p>This post is number #37 of my <a href="https://100daystooffload.com/">#100DaysToOffload</a> project. Follow my progress through <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
            </div>
        </content>
    </entry>

    <entry>
        <title>What has happened to basic computer knowledge?</title>
        <link href="https://tilde.town/~kzimmermann/articles/what_happened_basic_computing.html" />
        <updated>2021-10-09T17:00:42.513883Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>What has happened to basic computer knowledge?</h1>
<p>$FAMILYMEMBER wrote to me this morning asking me if I could help with a backup of their files.</p>
<p>I wrote back: sure, what's the problem with the backup or the files?</p>
<p>$FAMILYMEMBER says they don't want to lose the files that have been accumulating in their phone. What if something happens / I lose the phone / it breaks or bricks / etc you know.</p>
<p>I say sure, <a href="https://tilde.town/~kzimmermann/articles/project_128.html">that's good thinking</a>. Now, do you have an external Hard Drive anywhere in your house? I remember there was one there. So yeah, plug that in your computer, plug your phone as well and copy your files from the phone to the drive. Finito!</p>
<p>They say aren't these external HDDs unreliable, and didn't we lose data in our older ones a time or two before? They ask should they really do that, isn't there a more reliable way?</p>
<p>I say ok well, as long as you properly eject after use, do not move it while it's on and spinning, and keep it powered off when not in use and safely store it somewhere, the risk of loss is actually quite small.</p>
<p>They say anyway, my backup strategy sounds too complicated for them to do.</p>
<p>...</p>
<p>They say yeah you know, this whole plugging in / plugging out and copying things from one device to the other sounds complicated. They ask how can they be sure they won't mess it up.</p>
<p><em>WHAT.</em></p>
<blockquote>
<p><strong>Note:</strong> $FAMILYMEMBER is in their early 30s and grew up doing the same computer tasks as <a href="https://tilde.town/~kzimmermann/articles/first_starting_linux.html">me before Linux</a> for things like schoolwork, gaming and browsing. They also use computers extensively at work every day, so it's not a case of "dealing with Grandma here."</p>
</blockquote>
<p>I ask if they've never copied something to a pendrive before (knowing very well the answer should be 'yes, I've done it before').</p>
<p>They say man give me a break: it's been ages since I've had to copy things off to a USB thing. These days I just send things via whatsapp to other people or email it to myself if I need to open them in another device. </p>
<p><em>They email their personal files to themselves even at home!</em></p>
<p>I am as shocked as I am frustrated at this point, so I tell $FAMILYMEMBER that clearly emailing thousands of these files to themselves is not a practical solution. So see, I've heard there are some solutions to back up your data to the... </p>
<p><em>dry gulp</em>.</p>
<p>... cloud.</p>
<p>They are surprised: YOU MEAN TO SAY THERE'S AN <em>APP</em> FOR THAT???</p>
<p>I say, sheepishly admitting defeat: yes..?</p>
<p>They let out a huge sigh of relief: great to hear, this should be pretty easy to do then, right?</p>
<p>I end up sending them the mega.nz link.</p>
<hr />
<p>So let me get this straight: copying files between two devices via USB through your own computer is <strong>too hard and unreliable</strong>, but:</p>
<ul>
<li>Using a third party cloud service which you don't pay for;</li>
<li>Signing up to such cloud service using your personal data (email, etc) thus having yet another place where it is liable to be stolen through a hack or sold to someone else;</li>
<li>Accessing such cloud service via a proprietary app in a proprietary OS laden with insecurity and surveillance;</li>
<li>Having a situation where Internet Connectivity is required or literally you will not access anything and;</li>
<li>Taking hours (instead of minutes) to complete a backup because of the intentionally throttled connection of the provider;</li>
</ul>
<p>Is the <strong>easy and safe alternative</strong> today?</p>
<p><img alt="a frustrated man in the computer" src="https://tilde.town/~kzimmermann/images/facepalm.jpg" /></p>
<p>Boy, I knew that OS designers were trying to make things simpler for the end user. But this situation today really drove home how much dumber the average user is getting about basic computer use.</p>
<p>Yeah, I know I could go on and elaborate more on this sad reality, but it's late at night here, and I just wanted to get this rant off my chest right now.</p>
<p>But wow, how much lower can we go...</p>
<hr />
<p>This post is number #28 of my <a href="https://100daystooffload.com/">#100DaysToOffload</a> project. Follow my progress through <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
            </div>
        </content>
    </entry>

    <entry>
        <title>Fuck WhatsApp</title>
        <link href="https://tilde.town/~kzimmermann/articles/whatsapp_sucks.html" />
        <updated>2020-10-02T03:57:15.956224Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>Fuck WhatsApp</h1>
<p>WhatsApp fucking blows. It's another one of those evils that come from Facebook Inc that do nothing to contribute, but rake in more profits off unsuspecting people's backs. I could say the same thing about pretty much every other <a href="https://tilde.town/~kzimmermann/articles/messaging.html">centralized messaging platform out there</a>, but WhatsApp is the biggest and thorniest today.</p>
<p>I hate WhatsApp. Call it a personal bias, but I've associated it with bullshit and peer pressure since the first day I had to install it on my phone. Perhaps I was a little late to the smartphone game, but I had my first smartphone only in about late 2010, and even then did not touch Android 4 until as late as 2014. And when people kindly pressured me into installing that green messenger thing, I immediately hated it. </p>
<p>Next thing I know, I got invited to "work" groups that just spammed random bullshit, and family groups with members I had not spoken to in decades and wished to keep it that way.</p>
<p>"Oh, but you can text, like, for free!"</p>
<p>"Oh, but there are groups that allow you to keep in touch."</p>
<p>"Oh, but this is better than email to pass announcements to all our team at the office."</p>
<p>"Oh, but you can sign up straight away with your phone number, no need to create an account."</p>
<p>Wait, <em>those</em> are your selling points to use that app? Well, they blow too. There are countless other alternatives that you can use today (and even back then!) that will do all of those, but in a better or freer form, without arbitrary restrictions.</p>
<p>Perhaps the only thing that I hate more than WhatsApp is the fact that so many people use it, don't know any better, and therefore conclude that there's a hard dependency that can never be undone. It's like those people I talked to that cling to Facebook even though they admittedly have no use for it anymore, but "all my friends and contacts are there, so I can't quit it."</p>
<h2>Why it's evil</h2>
<p>WhatsApp breeds a dependency on our smartphones and lack of control on what platforms we'd like to use, and the data we choose to share. Even the "WhatsApp Web" interface requires a smartphone to work. There's no choice on which platform we choose to use, and perhaps most importantly, <strong>you must have a phone in order to use it</strong>. </p>
<p>Sign up with an email address and a password, like everything else on the web? Nope, authentication via an SMS code only! What, can't afford or don't want to have one? Man up, you hobo, time to buy a new one! And make sure it's brand new too, <a href="https://www.gizchina.com/2019/09/30/whatsapp-is-ending-support-for-older-smartphones-running-ios-8-and-android-gingerbread/">old smartphones just won't cut it</a>.</p>
<p>Cell phones might be convenient sometimes, but they blow in terms of privacy and security. SIM card cloning is a problem as old as the technology itself, and is much easier than you think. In fact, <a href="https://duckduckgo.com/?t=canonical&amp;q=SIM+Card+cloning&amp;ia=web">a simple search of that query</a> reveals more tutorials on how to perform cloning than information on what the problem actually is. And the number of issues arising from the ease of this is increasing, such as with <a href="https://advox.globalvoices.org/2016/05/02/is-telegram-really-safe-for-activists-under-threat-these-two-russians-arent-so-sure/">activists using phone-based services and getting their accounts hacked from a cloned SIM card</a>.</p>
<p>Setting aside the cell-phone-specific problems, there are still problems arising from the centralized structure of the messenger. The service does not federate, and you must rely on whatsapp.com servers in order to be able to use the app at all. As available as it is, though, the service does suffer from <a href="https://www.msn.com/en-us/news/technology/whatsapp-hit-by-outage-leaving-users-unable-to-send-or-receive-messages/ar-BB16Jqcy">some sort of outage</a> from time to time, either in their own infrastructure, <a href="https://solutionsreview.com/endpoint-security/massive-ddos-attack-breaks-the-internet/">when some part of their dependency chain fails</a>, or even when <a href="https://techcrunch.com/2016/07/19/whatsapp-blocked-in-brazil-again/">somebody in power</a> arbitrarily <a href="https://www.iol.co.za/sport/soccer/africa/zimbabwe-government-blocks-whatsapp-2042299">decides to censor it</a>.</p>
<p>You might think that it's possible to circumvent most of these using a VPN or proxy, but what you can't do, however, is avoid their data silo: when you use it, all your data essentially belongs to WhatsApp. Most people play the "I don't care" card at this point, and you'd think that Facebook also bets on that belief. However, the user privacy concerns must have been growing strong since in 2016 they decided to impress the world and partner up with Signal, <a href="https://signal.org/blog/whatsapp-complete/">implementing end-to-end cryptography in all chats</a>. Game over, mass surveillance! Privacy has finally won.</p>
<p>Or has it? As much as the cheerleading crowd in OpenWhisperSystems would like to assert, WhatsApp remains a black box of nonfree software, with not much but a "pinky promise" that they will not spy on the users through metadata, undo the cryptography or silently add a backdoor upstream, where client-side auditing won't catch them. Not to mention that to trust WhatsApp means having to trust the platform on which it's based, and the entire smartphone game relies on passive surveillance to remain profitable. You want to promise us privacy and security? <strong>Release the freaking code.</strong></p>
<p>Last but not least, WhatsApp breeds smartphone addiction. "Who has messaged me? Is that my cell phone that buzzed? I haven't checked my phone in the last 5 minutes, who might have got in touch with me?" If you ever caught yourself asking these kinds of questions, guess what: you have an addiction. And WhatsApp isn't doing any good to that either, especially when people are <a href="https://hbr.org/2017/08/what-one-company-learned-from-forcing-employees-to-use-their-vacation-time">pressured to use it even for work-related communication</a>. Don't have a company smartphone? Don't worry, your boss now can nag you regardless through your personal device and phone number!</p>
<h2>Alternatives</h2>
<p>First and foremost: <strong>stop using WhatsApp</strong>. Don't reduce usage, stop it. Boycott it, uninstall it, stubbornly refuse to reinstall it.</p>
<p>Not using it <em>is</em> an alternative, and probably the best one you have. Boycott it together with anything else Facebook-owned. Not only will you be making it harder for the people you care about to keep you on the damn thing, you'll also be enlightening them to the wonderful and plentiful alternatives there are today.</p>
<p>And then you can show them these <a href="https://tilde.town/~kzimmermann/articles/messaging.html">wonderful alternatives</a>:</p>
<ul>
<li>Best solution: XMPP through the Conversations app, or Gajim on the desktop. Everything else is second.</li>
<li><a href="https://matrix.org">Matrix</a> for a large community such as a work chat.</li>
<li><a href="https://jitsi.org">Jitsi</a> for voice and video communication</li>
</ul>
<p>Oh, and do <em>not</em> use Signal: <a href="https://github.com/LibreSignal/LibreSignal/issues/37">they won't open to federation</a>, and are subject to the same outage issues as WhatsApp (maybe even more, since they're much more associated with activism).</p>
<p>There is no loss of value in terms of communication compared to WhatsApp, only enormous gains in freedom: freedom of service choice through federation, control over your personal data through encryption, and freedom of platform.</p>
<p>Seriously, it's 2020. WhatsApp might have seen glory days before, but today it's no different than the rest of the technology. It's time to be truly free again.</p>
            </div>
        </content>
    </entry>

    <entry>
        <title>Wrestling with the most stubborn and locked-down machine I've ever used</title>
        <link href="https://tilde.town/~kzimmermann/articles/wrestling_with_locked_machine.html" />
        <updated>2022-03-03T04:27:18.194995Z</updated>
        <content type="xhtml">
            <div xmlns="http://www.w3.org/1999/xhtml">
                <h1>Wrestling with the most stubborn and locked-down machine I've ever used</h1>
<p>Tonight I won a month-long battle (ok, with a few pauses in between) against an incredibly locked-down machine belonging to my parents that refused to have Linux installed on it. Careful readers might remember the thread I posted about it on Mastodon. There, I ranted about it and asked for some tips on how to work around its locks. In the end, though, not-exciting-at-all brute force prevailed.</p>
<p>Here's how it went, simplified:</p>
<ul>
<li>Nice, dad finally wants to try Linux, but isn't ready to give up on Windows quite yet? That's cool, we can just dual boot. Let's burn the Linux Mint ISO to this USB.</li>
<li>Huh, this machine isn't recognizing the USB to boot. Ok, I can go to BIOS and change the boot order.</li>
<li>Damn, this BIOS is password-protected. How can I unlock it? Do I have to crack it?</li>
<li>Oh, so dad tells me there's a dummy password in there, <code>abc</code>. Sweetly enough, it's accepted. Is it really that easy?</li>
<li>Whaaaaaat?! What sort of BIOS does <em>not</em> allow me to change the boot order?! I typed the password, dude! Is this like an "admin" password versus "user" password thing?</li>
<li>Aight, screw the BIOS password, I'm going to reset this thing the hard way. Just peel off the bottom lid, take away some components and I should be able to shock this thing back to factory reset...</li>
<li>... Except not. Even after a short-circuit the damn CMOS remains intact I guess? At least that's what it looks like since <code>abc</code> is still here.</li>
<li>Alright, this has been long and hard enough and I've lost my patience. We're gonna do surgery. </li>
<li>Extract its hard drive and put it in another machine with same arch. </li>
<li>Boot that other machine containing original hard drive from Linux Mint USB. Perform installation a-la dual boot with the installer.</li>
<li>After completion, power down helper machine, extract hard drive, put it back to stubborn machine.</li>
<li>Power on and without a doubt both OSes are now bootable! Suck it, silly BIOS!</li>
</ul>
<p>Oh whew... what a downright wrestling match it has been. But the bottom line is that the machine has been liberated and my dad can now test drive it - which he did, telling me how impressed with Linux he was and how simple it was to use. Linux wins again!</p>
<p>(Though it was indeed some sort of Pyrrhic victory, at least for my time!)</p>
<hr />
<p>Have you ever had to work with a very freedom-unfriendly machine like this one? How was your experience? What was the final solution? Let me know in <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
<hr />
<p>This post is number #32 of my <a href="https://100daystooffload.com/">#100DaysToOffload</a> project. Follow my progress through <a href="https://fosstodon.org/@kzimmermann">Mastodon!</a></p>
            </div>
        </content>
    </entry>

</feed>

