WebCamp 2017

Posted by Ivan Ačkar Friday, Oct 27, 2017
WebCamp 2017 crowd

This year's WebCamp arrived in the same format as before, but with a twist – the opening day was set on Friday, a workday, so attending took some extra effort, which actually resulted in a crowd! The organisers clearly went out of their way to make everything top-notch, right down to the mandarins offered next to a cornucopia of coffee and other beverages. However, a weird, pervasive lazy-Sunday feeling hung over the next day (Saturday, yes!), most probably due to lower attendance and sunny October skies. That, of course, did not stop any talks from being held, so those were two quite interesting days, with quite a spectrum of topics discussed. I’ve decided to reflect on some of the quality talks; there were many of them, but there’s only so much that can be covered.

2nd hardest thing in computer science

An interesting talk held by Paweł Lewtak was about properly naming things in code. Or at least that's at the root of it; in reality, as well as in this talk, it goes well beyond the mere idea of having something named properly (sounds somewhat redundant and self-explanatory, doesn't it?). Yet, given the variety of tools and programmers involved, this topic won't cease to exist any time soon.

As an intro to the entire talk, Paweł reiterated something that seems, again, somewhat obvious – you code for people. You code for your future self. The only entities reading your code will be fellow humans, i.e. programmers, not computers or other machines (at least not yet), so the idea is to make your code as clear as possible and reduce future confusion simply by having it decently written. Even if you wrote it yourself, once you have moved to another project and into a completely different ecosystem of code, you will not have all the details and complexities of the former project laid out before your eyes. Writing code in a clear and concise way is therefore not optional but mandatory.

Paweł’s talk was filled with examples, and he started with the most trivial one, nonetheless still relevant and often found in the wild: what is the most common variable name? “data”. What is the second most common? “data2”. One might think there are situations where this is justified, especially at the time of writing (“well of course it’s incoming data, what else would it be?!”), but even if that’s the case, and even if it really is contextually decipherable, this is still code. Code is rarely static; it is modified, extended, copied and pasted, so the information about the context gets lost rather soon – just when you hit line 80 of your method and decide to name the next variable data2 because, seriously, you have some data again. How unexpected. And what about the colleague who checked the code out and decided to reuse it in another project?
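The data/data2 antipattern can be sketched in a few lines – this is a hypothetical illustration (the talk's examples and language were not specified; Python and the names process and active_users are my own):

```python
# Hypothetical sketch of the naming antipattern from the talk: generic
# names shed their context as soon as the code grows or gets copied.
def process(data):
    data2 = [d for d in data if d.get("active")]
    return data2

# The same logic, with names that carry the context along.
def active_users(users):
    """Return only the users whose account is marked active."""
    return [user for user in users if user.get("active")]
```

Both functions do the same thing, but only the second one still explains itself after being pasted into some unrelated project.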

The discussion doesn’t stop at variable names; it is pertinent to practically all names used throughout the code, and it goes beyond the code itself, finally extending into the realm of terms such as self-documenting code, domain-driven design, and design in a broader sense.

When writing code and laying out the architecture of a project, or a subproject, the first thing to pay attention to is indeed the naming scheme for variables and data types. Variable instances should have clear names from which you can infer meaning and function without relying on the context alone – so retrievedNames instead of data. Classes should be named in a meaningful manner and be of limited scope where possible, so you can use handler/processor/analyser/migrator etc. instead of the dreaded manager. Method/function names should likewise describe what they do without the reader needing to refer to a comment (and praying that there is one), and a method should do only what its name states. There shouldn’t be any complex calculation in a getter method; it should be plain retrieval of data. If a method potentially creates a (member) object or alters the state of the object, then it is not a decent getter and shouldn’t be named getXyz. If a method states that its sole task is to search through an array, don’t implement anything else within it – use a different method for that.
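These class and getter guidelines can be sketched as follows – a minimal, hypothetical example (the class ReportMigrator and its methods are my own invention, in Python, not from the talk):

```python
class ReportMigrator:
    """Hypothetical example: a narrowly scoped 'migrator' instead of the
    dreaded ReportManager, with honestly named methods."""

    def __init__(self, entries):
        self.entries = entries  # e.g. [{"id": 1, "total": 9.99}, ...]

    # A getter is plain retrieval of data – no side effects, no computation.
    def get_entries(self):
        return self.entries

    # Anything that actually computes says so in its name instead of
    # hiding behind getXyz.
    def calculate_grand_total(self):
        return sum(entry["total"] for entry in self.entries)
```

The point is that a caller can predict the cost and side effects of each method from its name alone, without opening the implementation.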

Be consistent with singular and plural forms – don’t return multiple objects if the name states you’re returning a single one. Don’t meticulously comment every section of the code, needed or not; instead, break your code into smaller, modular, self-explanatory pieces, but do comment the more complex parts at a higher level. Everybody is capable of reading loops and conditional statements, but it is extremely helpful if the intention of a section, method or class is stated up front. Have code reviews!
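The singular/plural rule can be shown in a short sketch – again a hypothetical Python example with invented names (find_user, find_users):

```python
# Hypothetical sketch: keep singular/plural names consistent with what is
# actually returned – find_user yields one object, find_users a list.
def find_user(users, user_id):
    """Return the single user with the given id, or None."""
    for user in users:
        if user["id"] == user_id:
            return user
    return None

def find_users(users, city):
    """Return every user from the given city (possibly an empty list)."""
    return [user for user in users if user["city"] == city]
```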

Finally, help your colleagues and yourself by writing meaningful commit messages; we know that you know what transpired in “CR PROJECT-21782: Implemented unit tests”, but that is neither helpful to anyone else nor will you yourself remember what it was all about in just a month and a half. These may all be obvious practices, but they’re still not as common as one would want and expect. Implementing them is not problematic at all; it does, however, require a degree of conscientiousness and vigilance. It will always be easier to name the variable data, so only constant attention can help these principles become a standard.

Software is making us stupid

Source: official WebCamp Facebook profile

A talk held by Goran Peuc pondered the old idea that technology is slowly but surely destroying our civilisation as we know it. Goran posits that the human brain develops through constantly dealing with everyday problems, as well as specific ones. Handling problems and devising solutions results in certain skills, and the more you deal with a certain problem utilising some skill, thus bettering it (and in turn yourself), the more that influences your general behaviour. A set of certain traits and behaviours makes a personality, so effectively you are your problems and the way you deal with them. (Scary, I know.)

How does software fit into all of this? Well, humans have been developing and evolving different technologies for quite a while now, and with every such leap in technological advancement there was a trade-off involved – humans would lose a certain ability but would usually gain the space to deal with another set of problems, thus developing completely new skills and behaviours.

Developing agriculture allowed humans to stop moving constantly; writing systems allowed stories to be written down and knowledge to persist into future generations; the steam engine changed the way we travel; the washing machine freed up time otherwise spent on the frequent cleaning of clothes; agricultural machinery made producing food easier, reducing the time and exhaustion spent on merely feeding oneself – and that is without even touching the 20th century. We may no longer have the skill to ride a horse, but we can use the time in transit to listen to a book or think about the paint for that room in our new flat.

However, the majority of these technologies and inventions directly tried to ease some of the mundane, often even hard tasks of everyday life, freeing the person to deal with tasks that relate to life itself, not just maintaining the basis of one. Most of these technologies paved the way for new skills, new jobs, new planes of existence. Enter the software revolution. With computational power rose software complexity, as well as the features it offered, up to the point where computers became powerful, shiny and able to fulfil almost any task – there was a program, or at least several, to solve your problem at hand. At this point software still wasn’t an end in itself, and the software industry was still making applications and inventions that actually helped humanity in everyday life, as well as developing new skills, jobs and entire fields of science.

What happened somewhere near the beginning of the millennium was the invention of the smartphone, which launched an entirely new gold rush, except this time people started developing useful and, more often, useless applications. The smartphone artefact was shiny and everybody wanted one, so everybody soon had at least one, and the demand for developers rose sky-high. Social networks emerged around the same time, and the two in combination resulted in a huge number of people being constantly connected, online, hooked to their apps and their social networks.

What started as a simple idea is now influencing almost every waking second of most people’s lives. Goran here states something that is already known, but about which little can be done – these apps and networks are no longer freeing people from their mundane and difficult tasks; they are essentially robbing them of their time and influencing the way they talk, think and read. Reducing attention spans. Creating a need for instant yet constant gratification. Notifications everywhere. Applications designed in an extremely simplified manner, removing the need to actually explore how a tool works, while the marketing and software industries work hand in hand on ever more deliberate and addictive ways to keep the user hooked. Software has become its own purpose (just look at the number of idiotic startups emerging from the Valley), and human well-being has fallen behind.

How to resist the momentum? People in general should be aware of the dangers their technology might bring, as well as the positives that go along with it; only an educated person can make an educated decision about what is best for them. Application defaults should be minimally invasive of the user’s privacy and time, so designers and programmers should work from the very beginning not on keeping the user constantly hooked and notified about everything, but on actually allowing them to decide on the app’s best use. Managers and marketers should devise different approaches to application business models, and not just crave the raw numbers that can be victoriously delivered to stakeholders. All of us in this chain of jobs and events bear a certain responsibility, and all of us should be conscientious about our app’s impact on the user.
