A View on FOSDEM 2020

Another year, another FOSDEM edition. As always, since this conference has grown so big (fun fact: if you tried to watch all the videos in a row, it would take you about 9 weeks!), chances are every review you read of the conference will contain something different, and therefore complementary.

This is what I was able to experience. Let’s take a look.

A recurring theme at FOSDEM seems to be the high concurrency. There were lots of people attending, which made it difficult to get into some dev-rooms, as they were overcrowded. In addition, some very popular dev-rooms were assigned regular-sized rooms where not enough people could fit (for example the PostgreSQL one, unlike last year). Because of this, I missed quite a few opportunities.

However, another trait of the conference is not only the high concurrency, but also the high quality of the talks. Therefore, falling back to any other talk ended up with me learning about some cool topic, with the added element of surprise.

The Talks

On Saturday, I started the morning in the free Java dev-room, and the first talk I watched was Tornado VM: A Java VM for heterogeneous hardware. It introduced the idea of having a VM that takes advantage of different hardware (not just CPUs, but also GPUs and FPGAs). Though it was Java-focused, it did mention that the concepts are applicable to other languages as well.

Then followed a talk about ByteBuffers. A really nice presentation of the new memory management API (coming in Java 14). It presented the rationale, common performance issues, the goals of accessing memory on and off the heap, and so on.
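To make the on/off-heap distinction concrete, here’s a minimal sketch using the existing java.nio.ByteBuffer API (not the new API from the talk, which I haven’t tried yet); the buffer sizes and values are arbitrary.

```java
import java.nio.ByteBuffer;

public class BufferDemo {
    public static void main(String[] args) {
        // Backed by a regular byte[] on the Java heap.
        ByteBuffer onHeap = ByteBuffer.allocate(1024);

        // Backed by native memory outside the heap, not moved by the GC,
        // which is why it is popular for I/O but trickier to manage.
        ByteBuffer offHeap = ByteBuffer.allocateDirect(1024);

        onHeap.putInt(42);
        offHeap.putInt(42);

        System.out.println(onHeap.isDirect());   // false
        System.out.println(offHeap.isDirect());  // true
    }
}
```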

Afterwards, I went to The Hidden Early History of Unix.

One of the highlights of the conference was Fixing the Kubernetes clusterfuck. An amazing talk (I highly recommend you watch the video), with a live demonstration of how to hack (and detect) a Kubernetes cluster. It started with a very good introduction to the Falco project (how it’s built, how it works, how it integrates with other tools, and its capabilities). It’s a project with interesting features (for instance, the fact that it uses eBPF keeps its overhead minimal).

The next three talks continued the security theme. The first of them was also about containers: Using SELinux with container runtimes; then came The hairy issue of e2e encryption in instant messaging, and What you most likely did not know about sudo.

And that closed up the first day.

On Sunday, I started by attending two talks about monitoring and observability. In Distributed tracing for beginners we saw a live demo of applying tracing to a Java application, from the ground up, and visualizing the results with Jaeger. Then came a talk about Grafana: successfully correlate metrics, logs, and traces, which was a very good continuation. It was also interesting to learn about upcoming Grafana features (such as linking to traces directly from the metrics graphs, and more integrations).
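To give an idea of what that kind of instrumentation looks like, here’s a minimal sketch using the OpenTracing API with the Jaeger Java client (io.jaegertracing:jaeger-client); the service and span names are made up, and the talk’s demo may well have used a different setup.

```java
import io.jaegertracing.Configuration;
import io.opentracing.Span;
import io.opentracing.Tracer;

public class TracingDemo {
    public static void main(String[] args) {
        // Reads the JAEGER_* environment variables (agent host, sampler, etc.).
        Tracer tracer = Configuration.fromEnv("demo-service").getTracer();

        // A parent span for the whole operation, with a nested child span.
        Span parent = tracer.buildSpan("handle-request").start();
        try {
            Span child = tracer.buildSpan("query-database")
                               .asChildOf(parent)
                               .start();
            // ... do the actual work here ...
            child.finish();
        } finally {
            parent.finish();
        }
    }
}
```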

Afterwards, I attended another talk about SWIM - Protocol to build a cluster, and in the same room came the talk about Implementing protections against Speculative Execution side channel: a really technical and well-presented talk explaining the low-level security implications of side channel attacks, and giving recommendations on how to mitigate some of those issues. The talk introduced the MDS / TAA threat models, and their implications. There were also really good questions at the end, which provided very interesting food for thought.

In the evening, I was finally able to make it into the PostgreSQL dev-room, and it was really worth it. The first talk was The state of full-text search on PostgreSQL 12. It explained some of the internals involved when we use this feature, and some caveats to avoid. It had a really nice introduction to information retrieval, and to how it’s implemented in PostgreSQL.
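To show what the feature looks like from the application side, here’s a small JDBC sketch of a full-text query; the articles table, its columns, and the connection details are hypothetical, not taken from the talk.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class FullTextSearchDemo {
    public static void main(String[] args) throws Exception {
        // Requires the PostgreSQL JDBC driver on the classpath.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/demo", "demo", "demo")) {
            // to_tsvector normalizes the document, to_tsquery parses the query,
            // and @@ checks whether the query matches the vector.
            String sql = "SELECT id, body FROM articles "
                       + "WHERE to_tsvector('english', body) @@ to_tsquery('english', ?)";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setString(1, "postgres & search");
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getInt("id") + ": " + rs.getString("body"));
                    }
                }
            }
        }
    }
}
```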

Finally, RTFM (don’t be misled by the title, as I was) presented four case studies in which things went south, and why. The learnings in all cases provided valuable insights into how to make better use of our relational database.

Then came the closing talk, celebrating 20 years of the conference.

All in all, another good edition of the European conference for open source. There’s still lots of material that I would like to go over in more detail, and some missed talks that I have to catch up on, but it was a good experience.