[2018-05-07] Clean pacman package cache automatically in archlinux

To clean up the pacman package cache once, you can run:

# paccache -rk2

This removes all cached packages except the two most recent versions of each one. The Arch wiki has more useful information.
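If you first want to see what would be removed without deleting anything, paccache also has a dry-run flag:

# paccache -dvk2

To run the cleanup automatically after every pacman transaction, you can create a pacman hook: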

# mkdir -p /etc/pacman.d/hooks/
# vim /etc/pacman.d/hooks/remove_old_cache.hook

and fill in the following

[Trigger]
Operation = Remove
Operation = Install
Operation = Upgrade
Type = Package
Target = *

[Action]
Description = Keep the last cache and the currently installed.
When = PostTransaction
Exec = /usr/bin/paccache -rvk2

I read about this solution in the Arch forums.

When you update your system, you can see the hook running. The output should look similar to this:

# pacman -Syu
...
( 8/13) Keep the last cache and the currently installed.
removed '/var/cache/pacman/pkg/lib32-glibc-2.26-11-x86_64.pkg.tar.xz'
removed '/var/cache/pacman/pkg/linux-headers-4.16.5-1-x86_64.pkg.tar.xz'
removed '/var/cache/pacman/pkg/glibc-2.26-11-x86_64.pkg.tar.xz'
removed '/var/cache/pacman/pkg/gcc-7.3.1+20180312-2-x86_64.pkg.tar.xz'

[2018-01-05] How to fix an expired gpg key to be able to run a system update on arch linux again

When I ran my first update in the new year, one of the gpg keys had expired and I couldn't update my system.

$ sudo pacman -Syyu
error: pkgbuilder: signature from "Chris Warrick <kwpolska@gmail.com>" is unknown trust
:: Synchronizing package databases...
 core                                                                                                 126.8 KiB  1527K/s 00:00 [#############################################################################] 100%
 extra                                                                                               1639.8 KiB  10.0M/s 00:00 [#############################################################################] 100%
 community                                                                                              4.3 MiB  18.0M/s 00:00 [#############################################################################] 100%
 multilib                                                                                             168.6 KiB  23.5M/s 00:00 [#############################################################################] 100%
 pkgbuilder                                                                                           846.0   B  0.00B/s 00:00 [#############################################################################] 100%
 pkgbuilder.sig                                                                                       310.0   B  0.00B/s 00:00 [#############################################################################] 100%
error: pkgbuilder: signature from "Chris Warrick <kwpolska@gmail.com>" is unknown trust
error: failed to update pkgbuilder (invalid or corrupted database (PGP signature))
 sublime-text                                                                                        1080.0   B  0.00B/s 00:00 [#############################################################################] 100%
 sublime-text.sig                                                                                     543.0   B  0.00B/s 00:00 [#############################################################################] 100%
error: database 'pkgbuilder' is not valid (invalid or corrupted database (PGP signature))

The solution was to refresh all keys with

$ sudo pacman-key --refresh-keys
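Refreshing all keys can take a while. If you first want to see which key is the culprit, you can list the keys in pacman's keyring; expired ones are marked as such in the output:

$ pacman-key --list-keys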

[2017-10-08] fixed netctl connection problem

For a while I couldn't connect to a wireless network via netctl. But since a manual connection like this worked:

wpa_supplicant -B -i wlp4s0 -c <(wpa_passphrase "my_ssid" "mypassword")

I didn't bother too much. Today I read through the documentation again and found a suggestion for when the connection fails. I manually added this line to /etc/netctl/my_profile_name:

ForceConnect=yes

And a sudo netctl start home was successful \o/ I removed the line again and it still works. Strange, but who cares.
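For reference, a complete wireless profile with that line could look roughly like this (interface, SSID, and passphrase are taken from the wpa_supplicant example above, so adjust them for your setup):

Description='wireless connection'
Interface=wlp4s0
Connection=wireless
Security=wpa
ESSID='my_ssid'
Key='mypassword'
IP=dhcp
ForceConnect=yes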

[2017-08-04] kill blocking or frozen ssh connection

I have googled for this key combination so many times now that I'm adding it here for future reference.

To kill a frozen ssh connection, hit these three keys one after another (~ is ssh's default escape character):

[ENTER] [~] [.]
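If that does not work for you, the escape character may have been changed in your ssh configuration; it is controlled by the EscapeChar option:

# ~/.ssh/config
Host *
    EscapeChar ~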

[2017-05-24] mount an encrypted partition in a terminal

Here is a way to mount, from the command line, an encrypted partition that was created with gnome-disks, on a system without GNOME.


$ udisksctl unlock --block-device /dev/sda3
$ udisksctl mount --block-device /dev/mapper/luks-3c6da966-4101-470f-b7e5-cb385f93fd1f
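To undo this afterwards, unmount the filesystem and lock the device again:

$ udisksctl unmount --block-device /dev/mapper/luks-3c6da966-4101-470f-b7e5-cb385f93fd1f
$ udisksctl lock --block-device /dev/sda3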

[2017-02-01] run pygame for python3 in docker

To run an Xorg application in Docker, you just need to mount the Xorg socket into the container. First, disable access control with xhost + so that the app inside the container can connect to the Xorg socket.

After that you can run the container and mount the socket:

docker run -it --rm -v /tmp/.X11-unix:/tmp/.X11-unix -v $PWD:/home -e DISPLAY=unix$DISPLAY \
--device /dev/snd --name pygame3 olafgladis/python3-pygame python3 /home/app.py

The important part is -v /tmp/.X11-unix:/tmp/.X11-unix. If you want to write your pygame application in Python 3, you don't have to compile pygame yourself; just use this Docker container. The Dockerfile for it can be found here. For a small demo, I also provided a small snake app and a bash script to start it inside the container.

Afaik this will only work on linux.
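When you are done, you can re-enable X access control again with:

$ xhost -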

[2016-09-03] A process pool in python

When you need to spawn a lot of processes, so many that you don't want to spawn them all at the same time, this PopenPool can help you.

    def run(self):
        while self.running_tasks or self.upcomming_tasks:
            while self.upcomming_tasks and len(self.running_tasks) < self.pool_size:
                self.consume_task()
            still_running_tasks = deque()
            while self.running_tasks:
                task = self.running_tasks.popleft()
                if task.poll() is None:
                    # task is not finished yet
                    still_running_tasks.append(task)
                else:
                    self.finished_tasks.append(task)
            self.running_tasks = still_running_tasks

This code spawns the maximum number of processes, and as soon as one of them finishes, the next one is spawned.

The full source can be found here
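For completeness, here is a minimal, self-contained sketch of how the surrounding class could look. Only run() is the method shown above; the constructor, consume_task, and the small sleep against busy-waiting are assumptions, not the original code.

import subprocess
import time
from collections import deque


class PopenPool:

    def __init__(self, commands, pool_size=4):
        # commands: an iterable of argv lists, e.g. [["sleep", "1"], ...]
        self.pool_size = pool_size
        self.upcomming_tasks = deque(commands)
        self.running_tasks = deque()
        self.finished_tasks = deque()

    def consume_task(self):
        # start the next command as a subprocess and track it
        self.running_tasks.append(subprocess.Popen(self.upcomming_tasks.popleft()))

    def run(self):
        while self.running_tasks or self.upcomming_tasks:
            # fill the pool up to pool_size
            while self.upcomming_tasks and len(self.running_tasks) < self.pool_size:
                self.consume_task()
            still_running_tasks = deque()
            while self.running_tasks:
                task = self.running_tasks.popleft()
                if task.poll() is None:
                    # task is not finished yet
                    still_running_tasks.append(task)
                else:
                    self.finished_tasks.append(task)
            self.running_tasks = still_running_tasks
            # avoid burning CPU while waiting for processes to finish
            time.sleep(0.1)


if __name__ == "__main__":
    PopenPool([["sleep", "1"] for _ in range(10)], pool_size=3).run()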

[2016-08-25] Access shell script variables from python

In case you want to read bash variables from Python, I have an example for you. The method is not safe: it executes the shell script, so you have to trust the input! But the nice part is that even computed variables are returned.

Here is the interesting part:

def load_config(config_filename):
    # baseline: also execute multiple statements in one line, so we get the same set of default env variables
    default_env = check_output_shell("true;set").decode('utf8')

    config_data_list = open(config_filename).read().splitlines()
    config_data_list.append("set")
    # join the lines with ';' so that BASH_EXECUTION_STRING does not contain newlines
    config_env_list = check_output_shell(";".join(config_data_list)).decode('utf8').splitlines()
    return dict(_get_dict_tuples(l) for l in config_env_list
                if l not in default_env and not l.startswith('BASH_EXECUTION_STRING'))
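load_config relies on two helpers that are only in the full source: _get_dict_tuples, which presumably splits one line of set output into a (name, value) tuple, and check_output_shell, which runs a command string through a shell. A minimal sketch of the latter, assuming bash is available at /bin/bash (bash, not plain sh, is needed for += and array syntax):

import subprocess

def check_output_shell(cmd):
    # run the command string through bash so that += and array syntax work
    return subprocess.check_output(cmd, shell=True, executable='/bin/bash')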

The key is to append a set to the shell file, execute it, and compare the output to that of an empty shell script with a set in it. Here is an example with string concatenation:

$ cat input/string_concatenation.conf 
foo=bar
foo+=' baba'
$ python read-shell-vars.py input/string_concatenation.conf 
{'foo': 'bar baba'}

And one with list concatenation:

$ cat input/list_concatenation.conf 
lista=(a b c)
listb=(c d e)
listc=("${lista[@]}" "${listb[@]}")
$ python read-shell-vars.py input/list_concatenation.conf 
{'lista': ['a', 'b', 'c'],
 'listb': ['c', 'd', 'e'],
 'listc': ['a', 'b', 'c', 'c', 'd', 'e']}

This only works for config-like shell scripts that do not print to stdout. If you need to support that, you have to redirect the output of both set commands to a file and read it from there.

[2016-08-13] Hello Lektor

I switched my blog from Jekyll to Lektor. The transition was smooth, and things like the tag cloud didn't need a hack in the first place. You can check out the sources on GitLab.

[2016-08-05] How to distribute an executable python module without pip

This will only work on Linux, FreeBSD, and probably macOS. Here is a simplified version of the file tree:

$ tree
.
├── boerewors
│   ├── command_line.py
│   ├── errors.py
│   ├── helper.py
│   ├── __init__.py
│   └── warmup.py
├── __init__.py
├── __main__.py
├── setup.py
└── tests
    ├── helpers.py
    ├── __init__.py
    └── test_warmup.py

boerewors is the Python module that we want to distribute. If we want to ship this package to a server without pip and without root privileges, we can run this little shell script:

mkdir -p dist
zip -r dist/tmp.zip boerewors -x \*.pyc
zip dist/tmp.zip __main__.py
echo "#!/usr/bin/env python" > dist/warmup
cat dist/tmp.zip >> dist/warmup
chmod +x dist/warmup
rm dist/tmp.zip

This script zips the Python module, creates a file with a Python shebang, appends the zip file to it, and marks it executable. This works because Python can execute a zip archive that has a __main__.py at its top level. The entry point of this executable is __main__.py, which contains only these lines:

from boerewors.command_line import main
main()

This file can now be copied to the server where we want to execute it. We are not bound to writing our script in a single file, and we can even have tests for development.
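As a side note: if you can rely on Python 3.5 or newer for building, the stdlib zipapp module does roughly the same thing. The copy into a staging directory (build/ here is just an assumption) only keeps setup.py, tests/, and dist/ out of the archive; the existing __main__.py is picked up as the entry point:

$ mkdir -p build dist
$ cp -r boerewors __main__.py build/
$ python3 -m zipapp build -o dist/warmup -p "/usr/bin/env python"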