A summary of the first days (day 0, Sunday, through day 3, Wednesday) of EuroPython 2017.
It looks like there are a lot of opinions and assumptions about unit tests and code coverage, most of them confusing or biased in several ways. For example, I’ve heard or read things like “this is fine, it has X% coverage”, “checking for coverage on pull requests doesn’t help”, or “dropping the coverage level is not an issue”, and many more of the like.
This article aims to shed some light on the issues of unit testing and code coverage. Hopefully, by the end of it, we’ll have an idea of which of the previous statements are right and which are wrong (spoiler alert: they’re all wrong, but for different reasons).
Summary of the second day of PyCon CZ 2017.
The day started at 9:00 with check-in and the like. After that, I solved a riddle by kiwi.com and earned a discount on flights, which was a nice way to start the conference.
Then, after breakfast and some networking while going through the sponsors’ booths, it was time for the first talk of the day: “When bugs bite - why neglecting edge cases can kill”.
It was a great talk, and a case for software engineering in general (it’s not Python-specific, which is what makes the topic more interesting). What I liked best were the remarks, and the idea of shifting our mindset when it comes to development: we should do some “negative thinking”, asking what can go wrong with this? How can this fail? This is critical in order to build robust software. Most software bugs I have seen were related to optimistic thinking, and even worse, optimistic unit testing, in the sense that developers only test happy-path scenarios, without considering all the things that can go wrong.
Afterwards came the talk about parallel processing (“poor person’s parallel processing”), which was fine. Then I spent some time tackling some of the challenges the sponsors had available, so I did some coding and a recap of the events so far.
Then I listened to a talk about wolfcrypt, which is a tool for crypto in Python. The talk introduced some crypto concepts (symmetric crypto, public key, etc.), which was good. Most of the questions revolved around comparisons with other tools in Python (default libraries, etc.).
Then it was time for lunch and some more coding, and the next talk I attended was called “should I mock or should I not?”, which I liked very much and which gave me some food for thought.
Then came one more talk, after which it was time for mine, so I presented clean code in Python. There were some interesting questions, and the entire presentation went by relatively quickly.
Once the talk sessions were over, there was one last track of lightning talks, which are always super entertaining.
After the first day finished, I attended the speakers’ dinner, which was a great opportunity to network with the community.
Looking forward to some more interesting talks tomorrow, and to see the results of the challenges.
Descriptors generally have to interact with attributes of the managed object, and this is done by inspecting __dict__ on that object (or calling getattr/setattr, but the problem is the same) and looking for the key under a specific name. For this reason, the descriptor has to know the name of the key to look for, which is related to the name of the attribute it is managing.
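To illustrate the problem, here is a minimal sketch of a descriptor that has to receive its own attribute name explicitly (the names `Attribute` and `Traveller` are illustrative, not from the original article):

```python
class Attribute:
    """A data descriptor that stores its value in the instance's __dict__.

    Before Python 3.6, the key had to be passed in by hand, so the
    attribute name ends up duplicated at the class definition.
    """

    def __init__(self, name):
        self._name = name  # must match the name used in the owner class

    def __get__(self, instance, owner):
        if instance is None:
            return self
        # look up the value stored under the configured key
        return instance.__dict__[self._name]

    def __set__(self, instance, value):
        instance.__dict__[self._name] = value


class Traveller:
    # note the repetition: "city" appears on both sides
    city = Attribute("city")
```

If the string passed to `Attribute` and the class attribute name ever get out of sync, the descriptor silently reads or writes the wrong key, which is exactly why this duplication is error-prone.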
In previous versions of Python this had to be done explicitly, and if we wanted to work around it, there were some more advanced ways to do so. Luckily, with PEP-487 (added in Python 3.6), there are some enhancements to class creation, which also affect descriptors.
Let’s review the problem, the previous approaches to tackle it, and the modern way of solving it.
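As a preview of the modern approach, here is a sketch using the `__set_name__` hook that PEP-487 added in Python 3.6 (again, `Attribute` and `Traveller` are illustrative names):

```python
class Attribute:
    """Same data descriptor as before, but the name is now captured
    automatically at class creation time via __set_name__ (PEP-487)."""

    def __set_name__(self, owner, name):
        # called once when the owner class body is executed
        self._name = name

    def __get__(self, instance, owner):
        if instance is None:
            return self
        return instance.__dict__[self._name]

    def __set__(self, instance, value):
        instance.__dict__[self._name] = value


class Traveller:
    city = Attribute()  # no need to repeat "city"
```

The duplication is gone: Python itself tells the descriptor the name under which it was assigned.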
Descriptors are an amazing tool to have in our toolbox, as they come in handy on many occasions.
Probably the best thing about descriptors is that they can improve other solutions. Let’s see how we can write better decorators by using descriptors.
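One common instance of this idea (a sketch, not necessarily the exact example the article develops): a decorator written as a class breaks when applied to methods, because the resulting object is not bound to the instance. Implementing `__get__` — the descriptor protocol — fixes that. The names `log_calls` and `Greeter` are illustrative:

```python
from functools import wraps
from types import MethodType


class log_calls:
    """A class-based decorator. Without __get__, applying it to a method
    would fail, since the instance would never be passed as the first
    argument. Implementing the descriptor protocol re-binds the wrapper
    to the instance, just like a regular function would be bound."""

    def __init__(self, func):
        wraps(func)(self)
        self.func = func

    def __call__(self, *args, **kwargs):
        print(f"calling {self.func.__name__}")
        return self.func(*args, **kwargs)

    def __get__(self, instance, owner):
        if instance is None:
            return self
        # bind the decorated callable to the instance, like a method
        return MethodType(self, instance)


class Greeter:
    @log_calls
    def greet(self, name):
        return f"Hello, {name}"
```

Here `Greeter().greet("world")` works as expected because attribute access on the instance goes through `__get__`, which returns a bound callable; the same decorator also keeps working on plain functions.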