Fryboyter

Install specific version of Composer at Uberspace

When I just tried to update a Wallabag installation at Uberspace.de, I got the following error message.

Your lock file does not contain a compatible set of packages. Please run composer update.

  Problem 1
    - Root composer.json requires composer < 2.3, found composer[2.3.10] but it does not match the constraint.

The recommendation to run composer update does not help in this case. The problem is that Uberspace has version 2.3.10 of Composer installed, while Wallabag currently requires Composer in a version < 2.3. So far, the cases I have had with Uberspace were actually only ones where certain packages were too stable (one could also say too old).

So, at least temporarily, a workaround is necessary. On the Uberspace in question, I ran the following commands.

wget -qO composer-setup.php https://getcomposer.org/installer
php composer-setup.php --install-dir=/home/$USER/bin --filename=composer --version=2.2.17

The first command downloads the Composer installer. The second one uses it to install version 2.2.17 of Composer in the ~/bin directory with the filename composer.

I was then able to update Wallabag without any problems. Since ~/bin is part of $PATH and Composer is not necessary for using Wallabag, I renamed the file to composer.old to avoid problems with other installed software that needs a newer version.
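For reference, the rename is a single command; composer.old is just an arbitrary name under which no other tool will look for the binary in $PATH.

mv ~/bin/composer ~/bin/composer.old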

OSBN | Linux

It is also possible to use Arch Linux if you have a life

Yesterday, an article about OS decision trees was published on gnulinux.ch. It is hopefully not meant to be completely serious. Nevertheless, I would like to add my Senft (yes, in Franconian it is Senft, not Senf) regarding Arch Linux.

There are many myths about Arch Linux. And therefore also a lot of bullshit. For example, that you only learn something if you use Arch Linux. Yes, that is bullshit. I acquired a large part of my knowledge about Linux under Mandrake / Mandriva, for example. Which was the Ubuntu of that time, so to speak. Since I’ve been using Arch, a lot has been added, of course. But not because I use Arch, but because I had to solve certain problems or fulfil certain tasks. But anyway, that’s not the point.

The point is that many claim that Arch Linux is used by people who have no real life. For example, because you have to fix something after almost every update. Bullshit!

I have been using Arch Linux since about 2010 on several computers with different configurations. Both in terms of hardware and software. And for the life of me I can’t say when an update last caused problems. I even use Arch Linux privately on servers.

I don’t install updates several times a day. On some computers I even install updates only once a week because I only use them on weekends.

Before an update, I check whether something has been published at https://archlinux.org/news/ that affects my installations. To automate this, I use Informant. If this is the case, I take it into account. Without ifs and buts.

And from time to time I synchronise my configuration files with the Pacnew files (https://wiki.archlinux.org/title/Pacman/Pacnew_and_Pacsave). I also regularly clear pacman’s cache automatically using a hook (https://wiki.archlinux.org/title/pacman#Cleaning_the_package_cache).
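Such a hook can, for example, be stored as /etc/pacman.d/hooks/clean-cache.hook and look roughly like the following sketch. It assumes that paccache from the pacman-contrib package is installed; how many package versions are kept is a matter of taste.

[Trigger]
Operation = Upgrade
Operation = Install
Operation = Remove
Type = Package
Target = *

[Action]
Description = Cleaning the package cache...
When = PostTransaction
Exec = /usr/bin/paccache -rk2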

Otherwise I just use Arch Linux. Yes, seriously. Arch Linux is basically a normal distribution. Like OpenSuse, for example. It’s not an operating system for the elite. Or for people without a real life. It’s just a distribution where some things work differently than other distributions. Like the installation, for example.

This article is not meant to encourage as many users as possible to install Arch Linux. No. Everyone should use the distribution he / she / it thinks is right. This article is only meant to demystify Arch Linux and thus inform potential users. Personally, I couldn’t care less which distributions other users use. And I therefore have no problem with the article from gnulinux.ch.

OSBN | Linux

Automatically update the Pkgbuild file and install it

Under Arch I have installed a few packages whose updates are sometimes offered with a time delay. For example, because the respective package maintainer does not have the necessary time or because he wants to wait for the first minor release. Hugo is often such a package.

Therefore I often install the current version myself. For this I created the directory ~/pkgbuilds/hugo/ and stored the PKGBUILD file in this directory to install the package. In the case of Hugo, this currently looks like this.

pkgname=hugo
pkgver=0.101.0
pkgrel=1
pkgdesc="Fast and Flexible Static Site Generator in Go"
arch=('x86_64')
url="https://gohugo.io/"
license=('Apache')
depends=('glibc')
makedepends=('go' 'git')
optdepends=('python-pygments: syntax-highlight code snippets'
            'python-docutils: reStructuredText support')
source=(${pkgname}-${pkgver}.tar.gz::https://github.com/gohugoio/${pkgname}/archive/v${pkgver}.tar.gz)
sha512sums=('541d0e04e868845119f2b488fd53b92929ea4dc08685d438a2914b41586e204588b193522013e8eed908dc0c3fbc2714aefb1afad0beae875d57d71aadc59c70')

build() {
  cd "${srcdir}"/${pkgname}-${pkgver}
  export CGO_CPPFLAGS="${CPPFLAGS}"
  export CGO_CFLAGS="${CFLAGS}"
  export CGO_CXXFLAGS="${CXXFLAGS}"
  export CGO_LDFLAGS="${LDFLAGS}"
  export GOFLAGS="-buildmode=pie -trimpath -mod=readonly -modcacherw"
  go build -tags extended

  ./hugo gen man
  ./hugo completion bash > ${pkgname}.bash-completion
  ./hugo completion fish > ${pkgname}.fish
  ./hugo completion zsh > ${pkgname}.zsh
}

package() {
  cd "${srcdir}"/${pkgname}-${pkgver}
  install -Dm755 "${pkgname}" "${pkgdir}"/usr/bin/${pkgname}
  install -Dm644 LICENSE "${pkgdir}"/usr/share/licenses/${pkgname}/LICENSE

  install -Dm644 "${srcdir}"/${pkgname}-${pkgver}/man/*.1  -t "${pkgdir}"/usr/share/man/man1/
  
  install -Dm644 ${pkgname}.bash-completion "${pkgdir}"/usr/share/bash-completion/completions/${pkgname}
  install -Dm644 ${pkgname}.fish "${pkgdir}"/usr/share/fish/vendor_completions.d/${pkgname}.fish
  install -Dm644 ${pkgname}.zsh "${pkgdir}"/usr/share/zsh/site-functions/_${pkgname}
}

If a new version was published, I enter the new version in the PKGBUILD file in the line pkgver=. Then, in the directory where the file is located, I execute the commands updpkgsums PKGBUILD, makepkg -cirs PKGBUILD --noconfirm and rm -- *.tar.*.

The first command downloads the archive file containing the source code, creates the checksum of the file, and adds it to the PKGBUILD file. The second command uses the instructions in the PKGBUILD file to build and install the package. The last command deletes both the archive file with the source code and the created package.

Because I am already quite practised at this, it takes less than a minute. However, I still wanted to automate the process. Therefore I have created a function for myself.

updpkgbuild () {
	# New version number is passed as the first argument
	new_ver="$1"
	# Replace the version in the pkgver= line of the PKGBUILD
	sed -E "s#(pkgver=).*#\1$new_ver#" -i PKGBUILD
	# Download the source archive and update the checksums
	updpkgsums PKGBUILD
	# Build and install the package
	makepkg -cirs PKGBUILD --noconfirm
	# Remove the downloaded archive and the built package
	rm -- *.tar.*
}

With this I only need to run, for example, updpkgbuild 0.102.0 in the directory of the PKGBUILD file, and version 0.102.0 of the package will be installed automatically. Of course, the whole process only works if nothing but the version and the checksum needs to be updated. But this is mostly the case.
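A typical call, based on the Hugo directory mentioned above, therefore looks like this.

cd ~/pkgbuilds/hugo
updpkgbuild 0.102.0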

I created this function for zsh. Whether it also works in other shells like bash or fish, I can’t say.

OSBN | Linux

Combine Pi-Hole with Unbound

Pi-Hole is a so-called “DNS sinkhole” for your own network with which you can filter out unwanted advertisements from websites. Yesterday I installed it on an unused Raspberry Pi.

Since Pi-Hole does not officially support Arch Linux and problems may therefore occur, I installed Raspbian without further ado. Funnily enough, there is a Pi-Hole package for Arch in the AUR, while on Raspbian you have to install it via “curl -sSL https://install.pi-hole.net | bash”. This is the official way, by the way, even though such instructions are normally warned against. The whole process is quite unspectacular and simply works.

During the installation I had the idea to combine Pi-Hole with Unbound. Unbound is to serve as a caching DNS resolver so that I don’t have to use DNS servers like 8.8.8.8 (Google).

So first install Unbound with “sudo apt install unbound”.

With the command “sudo wget -O /var/lib/unbound/root.hints https://www.internic.net/domain/named.root” we now download the list of DNS root servers.

Now it’s time to configure Unbound. We create the file /etc/unbound/unbound.conf.d/pi-hole.conf and fill it with the following content.

server:
    verbosity: 1
    port: 12345
    do-ip4: yes
    do-udp: yes
    do-tcp: yes

    # Change to yes if IPv6 is used
    do-ip6: no

    # List of the root servers
    root-hints: "/var/lib/unbound/root.hints"

    # Trust glue only if it is within the server's authority
    harden-glue: yes

    # Set to no to disable DNSSEC
    harden-dnssec-stripped: yes

    # Capital letters to make spoofing harder
    use-caps-for-id: yes

    # EDNS buffer size
    edns-buffer-size: 1472

    # TTL for the cache
    cache-min-ttl: 3600
    cache-max-ttl: 86400

    # Frequently used entries stay in the cache
    prefetch: yes

    # Number of threads (1 is enough for a small LAN)
    num-threads: 1

    # Cache memory. rrset should be twice as large as msg
    msg-cache-size: 50m
    rrset-cache-size: 100m

    # Faster UDP with multithreading (Tux only).
    so-reuseport: yes

    # Make sure the kernel buffer is large enough when traffic spikes
    so-rcvbuf: 1m

    # These IP addresses are not resolved
    private-address: 192.168.1.1/16

If you can’t make anything of my comments, you can read a more detailed description at https://www.unbound.net/documentation/unbound.conf.html. At the moment only the third and the last line are important. The third line specifies the port on which Unbound should listen. And in the last one you enter the address range of your own LAN.

With “systemctl start unbound.service” you can test whether Unbound starts without problems. With “systemctl enable unbound.service” Unbound is started automatically at boot.
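To check whether Unbound actually answers on the configured port, you can send it a test query, for example with dig (on Raspbian, dig should be available via the dnsutils package; the domain is only an example).

dig fryboyter.de @127.0.0.1 -p 12345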

Now Unbound has to be entered as DNS server in Pi-Hole. Just enter $PI-IP/admin in your browser (instead of $PI-IP use the IP under which the Raspberry Pi is reachable in the LAN). Now the dashboard of Pi-Hole should be displayed. There we click on Login on the left side and log in. Then we select Settings -> DNS on the left. Here I noticed that in the stable version of Pi-Hole you can enter the IP of your own DNS server, but no port. After some Google-Fu I found out that this only works with the current beta version. Since I have nothing to lose, I switched to the current beta version with the following commands.

echo "FTLDNS" | sudo tee /etc/pihole/ftlbranch
pihole checkout core FTLDNS 
pihole checkout web FTLDNS

Now you can enter 127.0.0.1#12345 under Settings -> DNS (instead of 12345 you use the port specified in the configuration file; the # is intentional and must be specified here).

Now we have to tell the devices in the network to use the Pi-Hole as DNS. To do this we either store the IP of the Raspberry Pi in the router or adjust the DNS settings on the respective devices (/etc/resolv.conf, netctl profiles etc.).
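On a Linux client it can, for example, be enough to point /etc/resolv.conf at the Raspberry Pi (192.168.1.2 is only a placeholder for its actual address).

# /etc/resolv.conf on a client
nameserver 192.168.1.2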

As a conclusion after about 24 hours with the combination of Pi-Hole and Unbound I can make the following statements so far:

  • Apt is damn slow compared to pacman
  • I’ll never make friends with a distribution that uses sudo for every crap, so I’ll probably activate the root account or install Arch again.
  • If Unbound doesn’t know the IP of a website yet, there will be a short delay until the page is displayed. After that everything goes very fast.
  • If you look at the statistics of Pi-Hole, you will notice how much crap is filtered. In my case, up to 30.4 percent of all queries disappear into the hole.

As far as the statistics are concerned, I noticed that they are written to the database every minute. Since I don’t need up-to-date statistics and want to avoid unnecessary write operations on the memory card, I created the file /etc/pihole/pihole-FTL.conf and entered DBINTERVAL=60.0 there. Now the write process only happens every 60 minutes. This is absolutely sufficient for me, since the filtering itself continues as normal.
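The file therefore only needs this one line; DBINTERVAL is specified in minutes.

DBINTERVAL=60.0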

But this solution has one disadvantage for me. For various things like circumventing geo-blocking I use a VPN provider. My current provider (and this probably applies to most other providers as well) unfortunately does not allow you to enter your own DNS server, in order to prevent so-called DNS leaks. So I will continue to use uBlock Origin in my browser, especially since Pi-Hole can’t filter some things like these annoying cookie notices.

Linux | OSBN