Simple experiment with systemd-networkd and systemd-resolved

In my previous post, I wrote about how simple it was to create containers with systemd-nspawn.

But what if you wanted to expose a container to the outside network? The rest of the world can’t add mymachines to /etc/nsswitch.conf and expect it to work, right?

And what if you were trying to reduce the installed dependencies in an operating system using systemd?

Enter systemd-networkd and systemd-resolved

Firstly, this Fedora 25 host is a KVM guest, so I added a new network interface for this “service”, where I created the bridge (yes, with nmcli; why not learn it as well along the way?)

nmcli con add type bridge con-name Containers ifname Containers
nmcli con add type ethernet con-name br-slave-1 ifname ens8 master Containers
nmcli con up Containers

Then, in the test container, I configured a rule to use DHCP (leaving in a modicum of a template for static addresses; no, that’s not my network) and replaced /etc/resolv.conf with a symlink to the file systemd-resolved manages:

cat <<EOF > /etc/systemd/network/

# or swap the above line by the lines below:

rm /etc/resolv.conf
ln -s /run/systemd/resolve/resolv.conf /etc/resolv.conf
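A minimal sketch of such a .network file, with a DHCP rule and a commented-out static template (the filename and the [Match] interface name are my assumptions; host0 is the interface systemd-nspawn creates inside the container, and the static addresses are placeholders from the documentation range):

```ini
# /etc/systemd/network/host0.network  (hypothetical name)
[Match]
Name=host0

[Network]
DHCP=yes

# ...or a static template instead of DHCP:
#Address=192.0.2.10/24
#Gateway=192.0.2.1
```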

Finally, I enabled and started networkd and resolved:

systemctl enable systemd-networkd
systemctl enable systemd-resolved
systemctl start systemd-networkd
systemctl start systemd-resolved

A few seconds later…

-bash-4.3# ip addr list dev host0
2: host0@if29: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state LOWERLAYERDOWN group default qlen 1000
 link/ether 06:14:9c:9e:ac:ca brd ff:ff:ff:ff:ff:ff link-netnsid 0
 inet brd scope global host0
 valid_lft forever preferred_lft forever

-bash-4.3# cat /etc/resolv.conf 
# This file is managed by systemd-resolved(8). Do not edit.
# This is a dynamic resolv.conf file for connecting local clients directly to
# all known DNS servers.
# Third party programs must not access this file directly, but only through the
# symlink at /etc/resolv.conf. To manage resolv.conf(5) in a different way,
# replace this symlink by a static file or a different symlink.
# See systemd-resolved.service(8) for details about the supported modes of
# operation for /etc/resolv.conf.


Happy hacking!

Simple experiment with systemd-nspawn containers

For this test I used Fedora 25. Your mileage might vary in other operating systems, some things may be the same, some may not be.

WARNING: you’ll need to disable selinux, so to me this was merely an interesting experiment that led to increasing my knowledge, especially in relation to selinux + containers. Bad mix, no security, containers don’t contain, etc.

Many thanks to the nice people from #fedora and #selinux who graciously lent their time to help me when I was trying to use nspawn with selinux enabled. With their help, especially Grift from #selinux, we were actually able to run it, but only in a way I’m so uncomfortable with that I ultimately considered this experiment to be a #fail, as I’m definitely not going to use containers like that any time soon: there’s still a lot of work to do in order to run containers with some security. I hope the Docker infatuation leads the good engineers at Red Hat, and others involved in that work, to a universal solution for security + containers.

But it certainly was a success in terms of gaining experience beyond the quickly expiring benefit of familiarity with OpenVZ.

Enough words, here’s how simple it was…

Firstly, let’s set up a template from which we’re going to copy to new instances. As I’m using Fedora 25, I used DNF’s capability to install under a directory:

dnf --releasever=25 \
 --installroot=/var/lib/machines/template-fedora-25 \
 -y install systemd passwd dnf fedora-release \
 iproute less vi procps-ng tcpdump iputils

You’ll only need the first three lines, though; the fourth was just a few more packages I preferred to have in my template.

Secondly, you’ll probably want to do further customization to your template, so enter the container just like it was (well, is) an enhanced chroot:

cd /var/lib/machines
systemd-nspawn -D template-fedora-25

Now we have a console, and the sky is the limit for what you can set up, like for instance defining a default password for root with passwd (though you may not want to do this in a production environment).

For some weird reason, passwd constantly failed with an authentication token manipulation error, but I solved it quickly by merely reinstalling passwd (dnf -y reinstall passwd). Meh…

I also ran dnf -y clean all before exiting the container in order to clean up unnecessary space wasted with package meta data that will be expired quickly.

When you’re done customizing, exit the container by pressing Ctrl+] three times within about a second.

Finally, we’re ready to preserve the template:

cd template-fedora-25
tar --selinux --acls --xattrs -czvf \
    ../$(basename $( pwd ) )-$(date +%Y%m%d).tar.gz .
cd ..

We’re now ready to create a test container and launch it in the background:

mkdir test
cd test
tar --selinux --acls --xattrs -xzvf \
    ../template-fedora-25-*.tar.gz
cd ..
machinectl start test

This container will probably not be able to run services exposed outside without help, but you can log into its console with machinectl login test.

You’ll also have automagic name resolution from your host computer to the containers it runs if you change the hosts entry in /etc/nsswitch.conf, placing mymachines between files and dns (or as you see fit if your setup differs):

hosts: files mymachines dns myhostname

If you had enabled ssh in your container, you’d be able to ssh test from the host machine. Or access a web server you installed in it. Who knows.
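Name resolution follows the order of that hosts line: the first source that answers wins, which is why mymachines must come before dns. A trivial pure-shell illustration of that ordering (the line below is the example from this post, not read from your system):

```shell
# The example hosts line from above; on a real host you'd read it
# from /etc/nsswitch.conf instead.
hosts_line="hosts: files mymachines dns myhostname"

# NSS walks the sources left to right and stops at the first answer,
# so print them in lookup order:
for source in ${hosts_line#hosts:}; do
  echo "would try: $source"
done
```

For the container named test, mymachines answers before the query ever reaches dns, so the host resolves it without any DNS record existing.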

As you saw, despite all the words spent explaining every step of the way, it’s excruciatingly simple.

The next article (Simple experiment with systemd-networkd and systemd-resolved) expands this example with a bridge in the host machine in order to allow your containers to talk directly with the external world.

Happy hacking!

Zabbix Postfix (and Postgrey) templates

Today’s Zabbix templates are for Postfix and Postgrey (but separated in case you don’t use both).

Since I run a moderate volume set of email servers, I could probably have Zabbix request the data and parse the logs all the time, but why not do it in a way that scales better? (Yes, I know I have 3 greps that could be replaced by a single awk call; I just noticed it and will improve it in the future.)

I took as base a few other examples and improved a bit upon them resulting in the following:

  1. A cron job selects the last entries of /var/log/maillog since the previous run (uses logtail from package logcheck in EPEL)
  2. Then pflogsumm is run on it as well as other queries gathering info not collected by pflogsumm (in my case, postgrey activity, rbl blocks, size of mail queue)
  3. Then zabbix_sender is used to send the data to the monitoring server

The cron job takes the delta-t you want the logs parsed for; in my case it’s -1, as I’m doing it per minute. It’s passed as an argument to find … -mmin, and you’d place it like this:

* * * * * /usr/local/bin/ -1
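The script itself isn’t reproduced here, so here’s a rough sketch of the three steps above as one script (every name in it, from the offset file to the item keys and the Zabbix server, is my assumption, not the original script):

```shell
#!/bin/sh
# Build the stdin lines for one zabbix_sender batch; the leading "-"
# means "use the host given with -s". Pure shell, runnable anywhere.
make_batch() {
  while [ "$#" -ge 2 ]; do
    printf '%s %s %s\n' - "$1" "$2"
    shift 2
  done
}

# The actual cron job (not run here: logtail, pflogsumm and
# zabbix_sender only exist on the mail and monitoring hosts):
run_job() {
  new=$(mktemp)
  # 1. only the maillog lines appended since the previous run
  logtail -f /var/log/maillog -o /var/lib/zabbix/maillog.offset > "$new"
  # 2. pflogsumm plus extra queries (postgrey, RBLs, queue size, ...)
  received=$(pflogsumm "$new" | awk '/received/ {print $1; exit}')
  greylisted=$(grep -c 'postgrey.*greylisted' "$new")
  # 3. one zabbix_sender call for the whole batch
  make_batch postfix.received "$received" postgrey.greylisted "$greylisted" |
    zabbix_sender -z zabbix.example.com -s "$(hostname -f)" -i -
  rm -f "$new"
}

make_batch postfix.received 42 postgrey.greylisted 7
```

Batching everything into a single zabbix_sender call is what makes this scale better than having Zabbix poll each item separately.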

This setup will very likely require some adaptation to your particular environment, but I hope it’s useful to you.

Then you can make a screen combining the graphs from both templates, as in the following example:

Zabbix Keepalived template

I’m cleaning up some templates I’ve done for Zabbix and publishing them over here. The first one is Keepalived as a load balancer.

This template…

  • requires no scripts placed on the server
  • creates an application, Keepalived
  • collects from the agent:
    • if it is a master
    • if it is an IPv4 router
    • the number of keepalived processes
  • reports on
    • state changes (from master to backup or the reverse) as WARNING
    • backup server that’s neither a router nor has keepalived routing as HIGH (your redundancy is impacted)
    • master server that’s neither a router nor has keepalived routing as DISASTER (your service will be impacted if there’s an availability issue in one real server as nothing else will automatically let IPVS know of a different table)

I still haven’t found a good way to report on the cluster other than creating triggers on hosts, though. Any ideas?

Up next is Postfix and, hopefully, IPVS Instance (not sure it can be done without scripts or writing an agent plugin, though. I haven’t done it yet).

#Infarmed has a rare opportunity with #SoftwareLivre

Good afternoon,

My name is Rui Seabra and I chair the board of the Associação Nacional para o Software Livre, so I would like to draw Infarmed’s attention to this matter.

I read on Tugaleaks that Infarmed paid for the creation of software meant to run on iPhones and iPads and on Android devices (phones and tablets), but it is accused there of having some serious problems, perhaps even of being outside the law.

It is important to note that the requirements document I read includes the legal obligation to comply with the Open Standards Law, and states that «all code produced within the scope of this project is the property of Infarmed». A small aside: this is not industrial property law, as there is no invention here; what applies is the law of copyright and related rights.

Infarmed has a rare opportunity here to remedy the problems it is accused of on Tugaleaks by transferring all this code, under, for example, the GNU GPL v3 or later, to SVN.GOV.PT, a project of the Agência para a Modernização Administrativa, and treating the money paid as an investment in the creation of a free program that did not yet exist.

These ten thousand euros could then have been invested with the aim of publishing the source code of this suite of applications and allowing civil society to take part in future development, improving the application in future versions, perhaps even without much further investment from Infarmed (naturally, someone should do some QA of the future versions Infarmed chooses to use).

The first version may not be adequate enough, but it can perfectly well be improved, especially if we could all help.

Will Infarmed accept such a challenge? Does it want our help?

Best regards,
Rui Seabra

Musings on #Heartbleed

Several thoughts have been on my mind about #heartbleed. You may have heard similar thoughts about it, but I’d like to add my own.

Ah… nothing like checking the news in the morning, feels like… ah… a bug in OpenSSL, let’s check it out… OMFG… By 10:00 I was already applying patches to vulnerable (and exposed) servers all around, processes be damned!

Is Free Software security tarnished?

Absolutely not!

Let me start with the first thing you should keep in mind: you’re better off than with proprietary software, and this bug proves it. Few could have said it as well as Sam Tuke of FSF Europe did; there are also a few words from Simon Phipps and Eric Raymond.

In a gist, there are several instances of bugs just as serious, and many much more serious, in proprietary software, even in the field of network security. And those are just the tip of the iceberg: the ones that were guessed at, not found.

Patches for this bug were available to those affected within a few hours of it being published.

Several documented flaw-finding studies have been made; guess who turned out better, on average, in every single one of them? Yes, Free Software. Proprietary software has consistently been found to have, on average, more bugs, more security bugs, more delayed patch releases, etc.

Update (2014/04/13): Also, an event such as this one prompted an independent audit review from the OpenBSD people; here’s another bug in OpenSSL that has been fixed there, proving once again how Free Software works to make software more secure:

  1. you can do independent and public audit reviews
  2. you can push fixes for what you found publicly on the Internet
  3. anyone can take advantage of those changes thus maximizing the effect

Now imagine such a bug happened in Microsoft’s crypto…

  1. you can’t do independent audit reviews
  2. you can’t push fixes for what you found publicly on the Internet
  3. nobody but Microsoft can fix Microsoft’s crypto library

Replace Microsoft by whomever you prefer above, they’re just an easy target. 😉


Here’s the most detailed timeline of public information on the bug that I found.

Yes, the code was there for about two years, but the exposure was not that big. It was big, about a fifth of the “secure” web. Unfortunately, lots of very popular websites were exposed, so the general recommendation is: don’t assume they’re safe, change your passwords everywhere.

Why wasn’t it bigger? Because not everyone runs the latest releases; lots of GNU/Linux distributions have more conservative approaches to running recent software. Case in point: Red Hat Enterprise Linux and its derivative distributions.

Only since the 6.5 release, published in late November last year, did updated Red Hat (and derivative) installations become exposed. CentOS followed about two weeks later.

Ironically, this bug affected the most efficient system administrators who had kept their systems updated 🙂

But many run their services in, for instance, Red Hat Enterprise Linux 5 (and derivatives) which is completely unaffected by this bug. Same for other software.

Even those who ran the 6.5 release could be totally unaffected if they used NSS instead of OpenSSL with Apache, for instance.

In short: it was big, but not catastrophically big.

Also affects proprietary software!

What? How could this be? Isn’t OpenSSL Free Software? Well, yes, yes it is, but it is licensed in such a way that permits proprietary derivative versions.

They should be safer, right? Hi Cisco and Juniper… I’m sure there are others. I wonder if they’ll at least be honest with us… I urge people to check their ultra-expensive and highly proprietary Web Application Firewalls, Load Balancers, Proxies… etc.

All your keys are belong to US!

9 out of 10 SSL certificates are under indirect control of the US Government. Think the Patriot Act, NDAA, National Security Letters, secret courts with secret interpretations, people and companies coerced under threat of being formally accused of treason if they don’t cooperate or if they talk about it.

Whether or not #heartbleed can really lead vulnerable software to leak the private keys, you should renew your certificates under a non-American CA.

Really, don’t make it easy for them, they don’t deserve that, your customers don’t deserve that, your friends and family don’t deserve that.

Change management be damned!

If you ever have an axe to grind with ISO 20000, ITIL or similar brain-dead efficiency killers, especially when implemented by complete and utter idiots, now you can have some revenge.

It is a bug of such seriousness that I recommend screwing the change management process. Update now if you are affected, or change your career, because either you are managed by complete and utter idiots or you don’t take it seriously enough.

Places that have enough good sense will allow you to run the Emergency Change process by your ECAB after the fact for such serious situations.

Take advantage of that, this is such a case.

Conspiracy theories

Unlike what some suggested, it appears to have been an honest mistake that neither the developer nor his reviewers spotted, and they felt quite embarrassed:

The author of the bug, Robin Seggelmann, stated that he “missed validating a variable containing a length” and denied any intention to submit a flawed implementation.

Theo de Raadt, OpenBSD’s founder, said «OpenSSL is not developed by a responsible team», but I doubt they’ll bother implementing a new SSL library. I wonder what they’ll do, though… they are likely making an independent review.

Prophecy come true!

Poul-Henning Kamp’s hilarious ending keynote of FOSDEM 2014, pretending to be an NSA agent speaking of Operation Orchestra, called OpenSSL a crown jewel:

  • Crown jewel: OpenSSL
  • Go-to library for crypto services
  • API is a nightmare
  • Documentation is deficient and misleading
  • Defaults are deceptive

We need to ask him where he found the spice… he certainly seemed like he had blue eyes, and #Heartbleed was truly a Crown Jewel for…

…The NSA

No agency had a stronger duty than the National Security Agency to find a serious bug like this one and responsibly proceed to get it fixed ASAP, as it was affecting its own nation.

There are innuendos that the NSA knew about heartbleed for a long time. They certainly have the expertise and the budget to have found it, but they denied any knowledge or exploitation of it for years; in fact, they claim they didn’t know about it before April.

Of course, no one can trust the NSA anymore because they have been proven invested into breaking security for everyone, so they could be lying in order to cover their asses after such a monumental fail in protecting their own country’s security.

Or they could have just been doing it for less than two years, like one year and 364 days, not yet years (plural), right?

One never can tell, and that’s symptomatic of a very botched organization.

Is Microsoft involved?

I don’t know. It’s certainly fishy that:

  • The publication date coincides with the death of Windows XP. It could be called a distraction maneuver, so that people get scared of moving away from Windows XP to GNU/Linux… it has certainly been effective at crying wolf in big media outlets
  • Codenomicon is run by a Microsoftie, well, an ex Chief Security Officer of Microsoft, but those kinds of people tend to carry strong lobbying and partnership relations with the big mothership into their next ventures
  • It is documented that Microsoft has been a faithful collaborator of the NSA for many years, even to the point of maybe having a dedicated backdoor

Maybe it’s just coincidence. Maybe.

OpenSSL is grossly unappreciated

The few OpenSSL developers are highly dedicated people who don’t exactly live well off of it. In fact, the importance of OpenSSL is disproportionately unappreciated, especially in any financially rewarding form.

Fortunately, the developers do it more for other rewarding factors, like responsibility and pride.


  • It’s one of the most serious security bugs in the history of the Internet
  • Use any of the available mitigations if you’re affected (upgrade, recompile disabling the feature, downgrade, change software)
  • More people (especially corporate companies making money with OpenSSL) should donate money to the OpenSSL Software Foundation
  • I don’t remember writing such a long post, it’s probably very flawed, I accept patches, comment below 😉

Free Software and Security under the #NSA

Anyone claiming Free Software “does not magically make things more secure – never has, never will” without explaining how you’re so much better off at securing yourself is using truths to lie to you.

Here’s an example:

Explicit truth: it doesn’t “magically make things more secure”.
Hidden truth: it technically and scientifically does, by exposure to peer review and the scientific method; the end results have definitely been proven more secure on average than the proprietary “alternatives”.
Hidden lie: “never has, never will”. This just piggybacks on the explicit truth in order to hide (using a true statement) that on average it does, and that you’re better off.

So, if someone lies to you so straight-faced, how can you trust that person when he claims badBIOS is a myth?

The fact is, it is possible: it’s installed code running on chips, and it can be updated. Didn’t he himself just say that all software has security bugs when he said that being Free Software doesn’t “magically make things more secure”?

So why couldn’t these computers be compromised in such ways? In fact, the NSA backdoor catalogue explicitly details BIOS-level security compromises and implants! Go read that list, especially the BIOS-level attacks, then think for yourself about badBIOS rather than trusting people who tell you “no, that’s not it” or “just conspiracy theories”.

Those people are lying to you, and they have hired a lot of security people under their wing, so of course they’d use these hired high-tech spooks to try to discredit you…

So go watch Jacob Appelbaum’s talk at 30C3, To Protect and Infect, part 2, rather than believing someone calling him a conspiracy theorist.

He’s publishing these findings at a respectable newspaper (Der Spiegel), the other guy is just name calling.

Which one deserves more credit? You decide.

Me, I’ll be trusting Free Software security. If anything, these NSA scandals have proven me right. Sure, they could try to insert backdoors in Free Software, but tell me: how easily can you put a backdoor where anyone can see it?

Not. Easily. Not at all.

What about when everyone is blind except the builders?


Here’s an example from Jacob’s talk: Jake tells of those little USB dongles that randomly move your mouse in order to prevent the screensaver from launching… you know what systemd now does when it finds one? It automatically locks the screen. What do Windows or MacOS do?

Riiight… you guessed it, move the mouse and prevent the screensaver from launching.

I’ll be using Free Software and so should you, but you’re your own boss.

You can choose a greater likelihood of being infected.

Fixing the FirefoxOS default search provider

WTF? FirefoxOS uses Bing as default search provider?

Yuck, I must fix that. Finally, after almost a week, I’ve had some time for myself, and that’s the first thing I must fix. Thanks to a very helpful comment from Mathieu on this blog post, I wrote a very small shell script that replaces the search provider with your favorite one, defaulting to Google.

It uses sudo for the privileged commands but doesn’t need to run completely as root. It’s your choice: run ./ or sudo ./, taking as optional arguments (in this order) the title, the URL, and its favicon URL. Miss one and the Google default will be used 🙂 e.g.:

./ \
  "Duck Duck Go"

First, it sets the options, then starts the adb server and fetches the browser application (note that you need the android-tools package in Fedora, or the Android SDK):



sudo adb start-server

sudo adb pull \
sudo chown $USER:$GROUP

mkdir application
cd application/
unzip ../

Then, it does the magic with the help of Perl:

perl -pi -e " \
    s|(DEFAULT_SEARCH_PROVIDER_URL: ?)'.*?'|\$1'$URL'|; \
    s|(DEFAULT_SEARCH_PROVIDER_ICON: ?)'.*?'|\$1'$ICON'|; \
    " js/browser.js gaia_build_defer_index.js

Finally, it rebuilds the browser application, pushes it back to the phone, and reboots it (it shouldn’t need a reboot; maybe there’s a way to avoid this?):

zip -fr ../ .
cd ..

sudo adb remount
sudo adb push \
sudo adb reboot

So, here it is, finally fixed 🙂 Well, temporarily (it will be better fixed following the results of and or something similar) at least.

Screenshot of FirefoxOS browser googling for 'test'
Googling for ‘test’

To the extent possible under law, Rui Miguel Silva Seabra has waived all copyright and related or neighboring rights to Fix FirefoxOS Default Search Provider. This work is published from Portugal.

Why I’m not buying the Raspberry Pi

Some people will claim it’s “pragmatic” to accept proprietary firmware in order to get your fabulous 3D and 2D acceleration out of your graphics card.

So what has been the result of accepting that so called “pragmatism”?

Now you can’t even boot a computer because they put the 1st stage bootloader in the GPU firmware…

Is there a GPU binary?
Yes. The GPU binary also contains the first stage bootloader.

This is fishy, and it may be a first attempt at trying to circumvent software freedom; after all, if you accepted the firmware for the graphics card, surely you’ll accept the firmware to boot.

Sure, you can probably boot any OS from the SD card, but it may very well be running under a hypervisor… you may be in your own little VM, and someday that might bite you when you least expect it.

Maybe this doomsday scenario is not happening with the Raspberry Pi, but it seems, to me, like the proverbial slippery slope in action. You ceded, now you have to cede a bit more, and by the time you notice, you can’t go back up again.

This is why I won’t buy a Raspberry Pi.

This so called “pragmatism” has resulted in an even worse situation than before and I won’t support that, no matter how cheap the device is.

Twitter is wrong: should not drop httpS basic auth

As some of you might know, I write a µ-blogging tool called elmdentica. It is a client-side application developed with Elementary, an EFL library oriented towards small touchscreen interfaces. I only recently learned that Twitter is dropping Basic Authentication support this coming June 30th. They claim it’s insecure because:

  1. with http, credentials go in the clear (no problem here)
  2. with https, some people may think it’s too expensive (only complete idiots)
  3. applications have to store user credentials locally

As an alternative, they are making oauth mandatory for APIs that need authentication. While their reasoning may make sense in the context of massively concentrated web applications (think Twitpic and similar), it is absurd for client applications like those running on your cell phones or computers.

Let’s take a look at the problem…

oauth gives you a consumer key and a consumer secret that authenticate your application. They don’t authenticate the user; they prove to Twitter that you’re a legitimate and registered application.

If both key and secret became public, anyone could make an application pretending to be yours. While someone making a clone of your program isn’t a real problem, if someone writes a trojan horse… then there could be a problem, no?

Well, with oauth, both the key and the secret need to be known by the application at run time. So at any given moment, the computer running your application holds these two important assets, either because they are embedded in your code or because you download them live from a site. The fact remains: for all practical purposes, they are no longer secrets.

In web applications, no user accesses the only running copy of the software holding both key and secret, so oauth works there.

What about xauth?

I haven’t read much about xauth, but after reading this page explaining what xauth is, I’m absolutely convinced the problem remains and wasn’t even tackled. The only issue that was solved (requesting a user’s login and password only once, without the need for local storage or visiting a web page) was a usability issue for client applications.

The real problem is still there, so Twitter is wrong and should not drop Basic Authentication from the https interface.

If they do, elmdentica will very likely not work on Twitter anymore. I don’t care much about that, but the users of elmdentica may care. That pisses me off.

What now?

Fortunately, if you value software freedom, there is a better alternative to Twitter. More than just using it, you can have your own “Twitter” by installing the Free Software that runs it, which is StatusNet.

At least they have no plans to drop Basic Authentication. Hurrah!