WebCamp 2016, Another Take

Posted by Ivan Ačkar Wednesday, Nov 16, 2016
source: WebCamp Zagreb 2016 official web page (screenshot)

This year, a lot of our developers and designers again attended the annual WebCamp Zagreb conference. The conference sold out the day before the opening and, unlike in previous years, attendance was exceptional. Luckily, the organization was superb, so coffee, tea, and even fruit were always just around the corner, while at any moment a sponsor might persuade you to make a (dramatic?) shift in your career. All in all, I’ve compiled a few opinions on some of the talks, which were quite diverse in topic but less focused on front-end web technologies; WebCamp might even rename itself some day!

Se7en deadly deployment sins

One of the more interesting and fun talks was held by Philip Krenn from Elastic. Philip enumerated the seven deadly sins – transposed in his talk into se7en deployment sins, which may or may not manifest in a deadly fashion – and shared his experiences with software development, deployment, and maintenance in general. Pride, envy, wrath, gluttony, lust, sloth, greed – refer to your nearest copy of the Bible for advice on prevention and potential salvation – exist in the development world as well, manifesting as crashed applications, untested software, unlogged errors, unproven (i.e. hipster) software that promises much but doesn’t really perform when faced with real-world usage, and so on. Truly, all of these deployment sins, and the related advice, can be summarized simply as best practices for every stage of the modern software development life cycle.

For example, gluttony in the software world means wasting resources in general; you might have plenty of resources, but you cannot know every place and instance where your software will be deployed. Therefore – thou shalt not waste memory, thou shalt not waste disk space, thou shalt not be negligent with dependencies! Your app might need an SSD and 3 GB of RAM just to run decently in this space-time continuum, but there’s a fair chance that many of your users don’t have them – think about your architecture there, and your Java performance. Do you really need microservices? CQRS? Distributed computing? Containers? Or how about that application server that requires an animal sacrifice every time you want to redeploy? Instead, are you paying enough attention to logging your application’s behavior consistently? Can you easily determine a problem, and its source? Can you reconfigure your application without writing new code or a tedious redeployment? Do you pay enough attention to automated testing? Continuous integration? Deployment, anyone? Really, you should. Again, a great set of warnings and best practices.
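To make the "reconfigure without redeploying" advice concrete, here is a minimal sketch (not from the talk) of reading configuration from the environment and logging consistently; the variable names and defaults are my own assumptions:

```python
import logging
import os

# Read tunables from the environment so the application can be
# reconfigured without a rebuild or redeployment (names are hypothetical).
LOG_LEVEL = os.environ.get("APP_LOG_LEVEL", "INFO")
CACHE_SIZE_MB = int(os.environ.get("APP_CACHE_SIZE_MB", "64"))

# One consistent log format makes it easier to find a problem's source.
logging.basicConfig(
    level=getattr(logging, LOG_LEVEL, logging.INFO),
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
log = logging.getLogger("app")

def start():
    # Log startup configuration so behavior is reconstructable later.
    log.info("starting with cache_size_mb=%d", CACHE_SIZE_MB)

start()
```

Flipping `APP_LOG_LEVEL` to `DEBUG` in the deployment environment then changes behavior with a restart rather than a new release.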

When a bigger boat is not an option!

A talk held by Mario Kostelac, an emissary of one of WebCamp’s main sponsors, Intercom, was a bit confusing. It started out promisingly, depicting the huge data volumes they need to handle regularly and with ease – everything their users crunch through the application is stored, and kept for extremely long periods – so, of course, handling such data eventually becomes a bottleneck, especially with a (poorly designed) relational database. The thing is, tables in their database (MySQL in this case, though that’s not really important) have grown over time to more than two billion records, and a fair assumption is that they will only continue to grow, given the constraints (and requirements). Of course, whatever the design of your (relational) database, at such a data volume your working set probably won’t fit into the available RAM anymore, and various performance degradations arise. However, at that scale your database (and system) design actually matters – some designs will allow an application to survive, and some will gladly kill it. Schemas matter!

However, things took a turn here. While investigating possible solutions – and a naive listener might suspect an extraordinary design was at hand because, you know, a bigger boat is not an option – the company took a bigger ship to compensate. Clever indeed. Their ship is called Aurora, and while Aurora is an awesome product in itself – it handles the (MySQL) database for you, allowing larger throughput and thus better scaling – isn’t this just delaying the inevitable consequences of poor system design, whose gravity will ultimately crush the application under the weight of so much data? Mario stated that what they really want to do is write code instead of dealing with databases, but I find it hard to imagine that you can decouple the design and handling of the entire system from the coding itself. Indeed, you may not need a DBA wizard who practices dark SQL magic, but extraordinary needs and circumstances require extraordinary measures: if you’re anticipating billions of rows, you will most certainly have to pay attention to horizontal and vertical partitioning and database sharding, you will probably have to combine several database technologies for smart caching and data separation – even the dreaded NoSQL ones – and you will most certainly have to spend time tuning your queries and indexes. Considering that they often run ALTER TABLE statements that, as it was said, last for ten days and force them to plan deployments well ahead, I’m left wondering about the design of the whole thing. If the schema needs such frequent changes, is there not a way to externalize those needs onto a different component and handle the data differently? Their users, and thus their data, seem to grow constantly, so there is probably a limit on the size of the ship one can buy.
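The horizontal partitioning mentioned above can be sketched as simple hash-based shard routing; this is a generic illustration, not Intercom’s actual scheme, and the shard count and key are hypothetical:

```python
import hashlib

SHARD_COUNT = 8  # hypothetical number of database shards

def shard_for(user_id: str) -> int:
    """Route a user's rows to a shard by hashing a stable key.

    Splitting one two-billion-row table across shards keeps each
    shard's working set small enough to fit in a single node's RAM.
    """
    digest = hashlib.md5(user_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % SHARD_COUNT

# All rows for one user always land on the same shard, so per-user
# queries touch only one database and schema changes can be rolled
# out shard by shard instead of as one ten-day ALTER TABLE.
shard = shard_for("user-42")
```

The trade-off is that cross-user queries must now fan out across shards, which is why such designs usually pair sharding with separate caching or analytics stores.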

Remain productive and happy in an age of digital distraction

Among the more interesting talks – and, dare I say, relevant to a generation so forthcoming when it comes to technology – was the one held by Anastasia Dedyukhina, founder of Consciously Digital. Anastasia was tied to her phone, checking and handling e-mails all day long in a job where business hours meant nothing, so in order to remain sane she decided to – ditch the smartphone! A problematic and, though she didn’t know it at the time, very lengthy lifestyle adjustment. Accepting new technologies, moreover adopting them into our lives, basically means externalizing a part of our needs to that piece of technology – replacing horses with cars, paper mail with electronic mail, hand washing of clothes with machine washing – technology was always there to support people and remove difficult and mundane tasks from their lives. However, that also means that humanity, generally speaking, gradually lost its ability to perform those tasks. Not a problem, indeed, but what happens when you externalize basic needs and human traits to technology?

Anastasia realized she had a problem when she started feeling her phone vibrating in her pocket even when it wasn’t there. I myself have experienced hearing my phone receive messages when in reality there were none. Both phenomena are known as phantom ringing, or ringxiety. Anastasia – and research – tell us that merely seeing notifications (e.g. from Facebook or Twitter) spikes dopamine levels, reinforcing behavior that constantly hungers for more notifications. The unpredictability of these notifications – who liked my post, omg! – drives the dopamine system, and since you first have to check whether you have notifications, simply having them at all triggers it. This kind of conditioning was famously described by Pavlov, who discovered it by observing his dog. After a while the constant stimulation proves exhausting, and you end up always craving more, never at ease and satisfied. Other important side effects are a shorter, fragmented attention span, which results in poorer memory and lower overall efficiency. In essence, a constant influx of e-mails, especially coupled with notifications, actually reduces a worker’s efficiency.

People are generally bad at multitasking (especially those who claim that superpower), so constantly interrupting them at work only results in poorer output that takes more time to complete and leaves the worker more exhausted. Indeed, Anastasia’s full-time job from that point on was to educate companies and individuals about the perils of constant availability. Why do you need to be constantly available? To whom? Is it really important to see it immediately? Is it more important to check it now and break your immersion in whatever you were doing? To conclude: Anastasia and many others advocate that everyone assume responsibility – coders should not build intrusive behavior that can’t be turned off (like Facebook’s Messenger), workers should assume responsibility for intruding on others with e-mails and requests, businesses should respect business hours and shouldn’t expect others to constantly check their work mail, and so on. In essence, we should not abandon technology; we should use it wisely and have it serve us, not the other way around. Curb your techno-enthusiasm.

