lxndryng - a blog by a millennial with a job in IT

Linux on a Gigabyte Aero 14

Dec 16, 2018

If this post is useful to you, I'd greatly appreciate you giving me a tip over at PayPal or giving DigitalOcean's hosting services a try - you'll get 10USD's worth of credit for nothing

Another 'new' laptop, another set of Nvidia Optimus graphics issues to mess around with, all for the sake of a GTX 1060 in a laptop that's probably a little too heavy to be an everyday carry. This laptop adds the extra element of fun of the touchpad not working by default without a kernel option being passed at boot.

So, to resolve the touchpad issue, i8042.kbdreset=1 needs to be passed to the kernel at boot, and to resolve the issue of the switchable graphics causing a hard lock on boot, the options acpi_osi=! acpi_osi="Windows 2009" need to be passed.
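On most distributions these options go on the kernel command line via the bootloader configuration. Assuming GRUB (adjust for your bootloader; the quiet flag is just an illustrative pre-existing option), something like:

```shell
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet i8042.kbdreset=1 acpi_osi=! acpi_osi=\"Windows 2009\""
```

followed by regenerating the GRUB configuration (update-grub on Debian-based systems, or grub2-mkconfig -o /boot/grub2/grub.cfg elsewhere).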

This assumes that you're using Nouveau and not the proprietary Nvidia drivers. I have no will to use the latter, so any advice I'd give on them would be worthless.

Building a Docker Container for Taiga

Mar 11, 2017


It's no secret to people who know me that I am not the most organised person in the world when it comes to my personal life: far too often, things can wait until... well, until I forget about them. As part of a general bid to be more proactive about the things I want to get done in my free time, I had a look at the market for open-source project management software (the people who use Jira at work extensively always seem to be the most organised, but I'm not paying for this experiment into my own sloth) and came out wanting to give Taiga a try, it being a Python application that I'd be able to extend with a minimum of effort if there were some piece of obscura I wished to contribute to it. Of course, my compulsion towards self-hosting all of the web-based tools I use meant that the second half of the question would be finding a means by which I could easily deploy, upgrade and manage it.

Enter Docker. I'd initially found some Docker images on Docker Hub that worked and, in a jovial fit of inattention, proceeded to use them without quite realising how old they were. Eventually, I noticed that they had last been built nineteen months ago, for a project that has a fairly rapid release cadence. Fortunately, the creator of those images had published their Dockerfiles and configuration on GitHub; unfortunately, however, that configuration was itself out of date given recent changes in the supporting libraries for Taiga. The option of looking for other people's Docker containers, of course, did not occur to me, so I endeavoured to update and expand upon the work that had been done previously.

Taiga's architecture

Taiga consists of a frontend application written in Angular.js (I'm not a frontend person - I couldn't tell you if it was Angular 1 or Angular 2) and a backend application based on the Django framework. The database is a PostgreSQL database, nothing really fancy about it.

A half-done transformation

Looking at the code used to generate the Docker images, I noticed that there was a discrepancy between several of the paths used in building the interface between the frontend and backend applications: in the backend application, everything seemed to point towards /usr/src/app/taiga-back/, whereas in the frontend application, references were made to /taiga. This dated from the backend application being built around the python base image, before being changed to python-onbuild. The -onbuild variety of the image gives some convenience methods around running pip install -r requirements.txt without manual intervention, which I can see as a worthwhile bit of effort in terms of making future updates to the image easier. Unfortunately, it does change the path of your application: something that hadn't been fixed up to now. Fortunately, a trivial change of the frontend paths to /usr/src/app/taiga-back solved the issue.
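For context, the -onbuild variant bakes those steps in as ONBUILD triggers in the base image, so a downstream Dockerfile can be as minimal as this sketch (image tag illustrative):

```dockerfile
# The -onbuild base image's ONBUILD triggers copy the build context into
# /usr/src/app and run pip install -r requirements.txt automatically,
# which is why the application's path moves under /usr/src/app
FROM python:3-onbuild
```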

Le temps détruit tout

Some time between the last time the previous author pushed his git repository to GitHub and now, the version of Django used by Taiga changed, introducing some breaking module name changes. The Postgres database backend module changed from transaction_hooks.backends.postgresql to django.db.backends.postgresql, with the new value having to be declared in the settings file that was to be injected into the backend container.
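In the settings file injected into the backend container, that change amounts to pointing the database engine at the new module path. A sketch of the relevant section, with placeholder connection details (the credentials and host here are illustrative, not the real configuration):

```python
# Database section of the injected Django settings (sketch).
# Only the ENGINE value is the substantive change; the rest are placeholders.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",  # was transaction_hooks.backends.postgresql
        "NAME": "taiga",
        "HOST": "postgres",
        "PORT": "5432",
        "USER": "taiga",
        "PASSWORD": "changeme",
    }
}
```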

Doing something sensible about data

Taiga allows users to upload files to support the user stories and features catalogued within the tool, putting these files in a subdirectory of the backend application's working directory. Now, if we're to take our containers to be immutable and replaceable, this just won't do: the deletion of the container would result in the deletion of all data therein. Given that the Postgres container was set up to store its data on the filesystem of the host, outside of the container, it's a little odd that the backend application didn't have the same consideration taken into account. Declaring the media and static directories within the application to be VOLUMEs in the Dockerfile resolved this issue.
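In Dockerfile terms, that fix is a one-liner (paths follow the /usr/src/app/taiga-back convention mentioned earlier):

```dockerfile
# Declare the upload and static directories as volumes so their contents
# outlive any individual container
VOLUME ["/usr/src/app/taiga-back/media", "/usr/src/app/taiga-back/static"]
```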

Don't make assumptions about how this will be deployed

In the original repository, the ports and whether HTTPS was being used for communication between the frontend and backend had been hard-coded into the JSON configuration for the frontend application: it was HTTP (rather than HTTPS) on port 8000. Now, if one were to deploy this onto a device running SELinux with the default policy, setting up a reverse proxy to terminate SSL would have been impossible because of the expectation that port 8000 would only be used by soundd - with anything else trying to bind to that port being told that it can't. To remedy this, I made the port and protocol configurable from environment variables at the time of container instantiation.
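One common way to do this kind of templating is to substitute placeholder tokens in the frontend's JSON configuration at container start. This is a hypothetical sketch, not the actual entrypoint: the placeholder names and the file content are illustrative, and only the environment variable names come from the script below.

```shell
# Fall back to the original hard-coded values if the environment doesn't override them
: "${API_NAME:=localhost}"
: "${API_PORT:=8000}"
: "${API_PROTOCOL:=http}"
# Illustrative conf.json with placeholder tokens, as the entrypoint might find it
printf '{"api": "__API_PROTOCOL__://__API_NAME__:__API_PORT__/api/v1/"}\n' > conf.json
# Substitute the environment values into the configuration in place
sed -i "s|__API_NAME__|${API_NAME}|; s|__API_PORT__|${API_PORT}|; s|__API_PROTOCOL__|${API_PROTOCOL}|" conf.json
cat conf.json
```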


The repository put together previously contained, as well as the Dockerfiles for generating the images, scripts to deploy the images together and have the application work. It did not, however, give any consideration to how an upgrade could work. With that in mind, I put together a script that would pull the latest versions of the images I'd put together, tear down the existing containers, stand up new ones and run any necessary database migrations. Nothing more complex than the below:


#!/bin/bash
# Defaults match the values originally hard-coded: HTTP on port 8000
# (the API_NAME default here is illustrative)
if [[ -z "$API_NAME" ]]; then
    API_NAME=localhost
fi
if [[ -z "$API_PORT" ]]; then
    API_PORT=8000
fi
if [[ -z "$API_PROTOCOL" ]]; then
    API_PROTOCOL=http
fi

docker pull lxndryng/taiga-back
docker pull lxndryng/taiga-front
docker stop taiga-back taiga-front
docker rm taiga-back taiga-front
# Port mappings below are assumed; adjust to your own reverse proxy setup
docker run -d --name taiga-back -p 8000:8000 -e API_NAME=$API_NAME -v /data/taiga-media:/usr/src/app/taiga-back/media --link postgres:postgres lxndryng/taiga-back
docker run -d --name taiga-front -p 80:80 -e API_NAME=$API_NAME -e API_PORT=$API_PORT -e API_PROTOCOL=$API_PROTOCOL --link taiga-back:taiga-back --volumes-from taiga-back lxndryng/taiga-front
docker run -it --rm -e API_NAME=$API_NAME --link postgres:postgres lxndryng/taiga-back /bin/bash -c "cd /usr/src/app/taiga-back; python manage.py migrate --noinput; python manage.py compilemessages; python manage.py collectstatic --noinput"

GitHub repository

The Docker configuration for my spin on the Taiga Docker images can be found here.

ASUS Zenbook Pro UX501VW configuration for Linux

Feb 13, 2017


Never trust laptop OEMs if you want to run Linux on a laptop. Well, maybe the more sensible option is to buy laptops from vendors who explicitly support Linux on their hardware (the Dell XPS and Precision lines are supposed to be good for this, as well as the incomparable System76). All of this said, I own an ASUS Zenbook UX501VW and it is a good machine, just a little temperamental when it comes to running Linux, especially compared to my Lenovo Thinkpad X1 Carbon. Hopefully the following misery I went through will be of use to someone else with this laptop.

Graphics issues

Most people, upon booting any graphical live CD/USB, will be greeted with the spinning-up of their laptop's fans followed by a hard lock-up. Probably surprising to no one, this is an issue with the Nvidia switchable graphics: some ACPI nonsense occurs if the laptop is started with the Nvidia card powered down. There are two options for getting around this:

1. Disabling the Nvidia card's modesetting altogether

To do this, you need to set the kernel option of nouveau.modeset=0. The card will then not have modesetting enabled and therefore will not cause an issue once X loads.

2. Making it seem like you're running an unsupported version of Windows

This is witchcraft and I make no claims to understand how it works, but setting the kernel options acpi_osi=! acpi_osi="Windows 2009" prevents the X lock-ups that would usually occur.

Backlight keyboard keys

To enable the keyboard buttons for brightness adjustment to work (and brightness adjustment at all in some cases), the following kernel options need to be specified.

acpi_osi= acpi_backlight=native

These options aren't compatible with the second option above, so pick between being able to do CUDA development on a laptop (come on, now) and being able to change the brightness. It was an easy enough choice for me.
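Putting the backlight options together with option 1 above gives a command line that should cover both. Again assuming GRUB (quiet is just an illustrative pre-existing flag):

```shell
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet nouveau.modeset=0 acpi_osi= acpi_backlight=native"
```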

Touchpad issues

This is a matter of luck: some of the models designated UX501VW have a Synaptics touchpad, and those will work brilliantly out of the box. If you're a little less fortunate, you have a FocalTech touchpad - a touchpad that only this and a couple of other ASUS devices have. A quick way to tell is to test two-finger scrolling: if it works, you have a Synaptics touchpad - enjoy your scrolling. If it doesn't, you probably have the FocalTech.

There is, however, a DKMS driver available for this touchpad which is targeting inclusion in the mainline kernel. It might take a while to get there, but it will be supported by default soon enough. In the interim, cloning the git repository linked above, making sure you have the prerequisites installed (apt-get install build-essential dkms for Debian/Ubuntu-based systems) and running ./dkms-add.sh from within the directory should be enough to get you going.

Every time your kernel updates, you'll need to re-run ./dkms-add.sh.