TechNight: Code Quality

  • 05/03/2019
  • 11 minute read

On February 20th 2019, Jarra Schirris gave an inspiring but abstract presentation at developers.nl on how to improve Code Quality. In this blog I will discuss Jarra’s ideas in more detail, as an hour was a bit short to cover all of the topics in depth.

Additionally, Marc Dekker, one of our Java team members, will give a practically oriented talk by the end of this year that will include a live coding session to show how our ideas on Code Quality can be implemented. I hope you will all be able to attend this meeting.

Reasons for bad code and a new approach to improve it

At the start of the talk, Jarra argues that the foremost reason we produce bad code is haste. We as developers are always under pressure to deliver changes as soon as possible, within time frames that are often set too short, preferably handling both new features and any last-minute bug fixes at the same time. This time pressure often forces developers to deviate from standard practices that take time to implement. If the standard practices for enforcing code quality, so that our fellow developers can maintain and build on our code, are not sufficient to guarantee quality in times of stress, what can we do to ensure understandable code?

Jarra proposes to go back to the fundamentals of language as a means of information transfer. That is, how to use the written word to best communicate your intentions, and how to adequately convey meaning, purpose and goals in the code itself and its structure. IMHO, a completely different perspective from yet another list of “best practices” that only adds time and stress.

Chunking theory, mental models and their relation to understanding complex systems

Next in the presentation was a quick test of how many words the viewers could remember over a short period of time. Its purpose was to show the limited capacity of our short-term (working) memory; that is, how many chunks of information we can retain simultaneously. For most people this number lies between five and nine (7 ± 2; Miller, 1956)[1]. Interestingly, several viewers indicated that they had written down more than nine of the words. An exceptional performance, if correct! Chunking theory in psychology states that chunks are coherent elements that can be grouped together based on an association. For instance, Chase and Simon (1973)[2] and later Gobet, de Voogt and Retschitzki (2004)[3] showed that chunking could explain several phenomena linked to expertise in chess. Following a brief exposure to pieces on a chess board, skilled chess players were able to encode and recall much larger chunks than novice players. However, this effect is mediated by specific knowledge of the rules of chess: when pieces were distributed randomly (including configurations that would not occur, or even be allowed, in real games), the difference in chunk size between skilled and novice players was significantly reduced. Removing the common ground between the elements, that is, their association through the rules, removed the possibility of using chunking to retain more information.

This capacity limit of our working memory makes it difficult and slow to understand highly complex systems, because they more often than not consist of more than seven cohesive pieces of information that can be interrelated in different ways. An example of a system that proved too difficult to maintain and debug because of its size and complexity is the software in modern Toyota cars. These cars started accelerating without the gas pedal being pressed, causing accidents. Various experts looked at the code, but it was such a mess that they could not explain why this happened. More importantly, they could not rule out that the incidents were caused by the software either. There was too much badly documented, unreadable, complex code to oversee all of its objects and their relations, and so there was no way to predict the resulting effect on the cars’ driving behaviour.

Over time, such large and complex code systems start to resemble the dynamics of natural systems rather than mathematical ones (Samuel Arbesman, 2016)[4]. That is, extensions and added dependencies will not only be improvements on the current code, but also adaptations to a changing input context. The more interdependencies are added, the less transparent and predictable the output of a system will be for a given input, because we can only oversee a limited number of relationships between the system’s elements.

The way information is presented influences our representation of it

Jarra proceeds by discussing an Indian ritual that was last performed in the 1970s. Historically, priests were constantly rehearsing and perfecting their rituals, as a ritual’s success depended on the experience and expertise of the priest performing it. When the first rituals were put onto paper, our relationship to God, and to the other participants and observers, changed. Writing down the ritual disconnects the delivered content from the body language and identity of the performer, and thus removes a significant amount of information from the message. Script as a medium influences the perceived reliability and validity of its content. Since the content is less detailed and its source unclear, people tend to perceive written texts as more accurate and more official. It enables the belief in singular, disembodied, authoritative forms of being such as God and the psyche (Karman). It also causes us to interpret cause and effect mainly in a linear way.

These days, information is distributed across networks, presented using multimedia, and with a less distinct boundary between personal and public information. This modern way of storing, presenting and experiencing information will again produce a shift in how people perceive and process reality. Receiving information and learning is once again more immersive, more similar to the verbally performed rituals than to the written word. As a consequence, we can be expected to create a less linear, more distributed and more detailed construct of reality, but also to change our attitude: we accept that we cannot possibly process all information available on a topic, and find more adequate ways to deal with this. In a first attempt, Brian Rotman (2008)[5] argues that we should treat today’s vast and complex computer systems, such as that of Toyota’s cars, more like an anthropologist would: studying the system as if it were a black box and describing its dynamics simply by which input results in what output, preferably using statistics and mathematical models.

As an illustration of successfully treating a complex and buggy system as a black box in order to adequately explain and predict its behaviour, Jarra moves on to the story of a Soviet army officer, Lt. Col. Stanislav Petrov. On September 26, 1983, at the height of the Cold War, this Soviet officer detected five missiles on his radar screen, all incoming from the U.S.A. Immediately reporting their detection to higher command would, at that time, most likely have resulted in a global nuclear war with casualties in the hundreds of millions. The Soviet protocol left no room for double-checking the system, nor for negotiations with the US. More importantly, the staff had experienced false missile alarms in the months before, showing that the system might not be 100% reliable. This is why Petrov hesitated and waited to respond until he had more information. That is, he and the rest of the staff decided to wait and see whether more information would pop up on the screens or be relayed to them by phone. Nothing happened in three minutes’ time, and the staff concluded that the missiles on the screen were indeed glitches: false alarms. After investigation, the system was shown to have mistaken the sun’s reflection off clouds for missiles. For preventing these deaths, Petrov was honoured at the United Nations and received the Dresden Peace Prize. This example not only shows why it is important for a system to be both accurate and reliable, but also how information about these qualities is necessary to interpret the system’s behaviour and decide what actions to take. Improving code quality thus not only encompasses the code itself, but also taking proper care of documentation about its behaviour and communicating it to end users.

However, to be able to do so properly, the creator of the system needs to fully understand the problem that gave rise to it and to hold the full structure of its solution in code in his or her head (Paul Graham, 2009)[6]. For complex systems, this is more doable if the source code intuitively explains itself and the human capacity to oversee a complex system is taken into account. How do we achieve this?

Applying logic to this question suggests two possible perspectives. Either we make the system simpler by forgetting parts of it, that is, abstracting the gist of the system and leaving out unimportant details; or we become cleverer, so that the things that first seemed incomprehensible become clear to us (Eugenia Cheng, 2018)[7]. Jarra’s approach, discussed below, which combines OOP, functional programming concepts and chunking theory from psychology, implements both.

Creating intuitive systems that are easy to comprehend and oversee

Composition

Composition breaks down a vast, complex system into its meaningful components so that it becomes easier to understand and oversee. Each component has a single purpose, and their combined interaction produces the wanted behaviour. Composition is very intuitive: breaking information down into parts, processing these separately and later combining the results is the way the brain itself deals with the information coming in from the different senses when constructing a percept of the external world, whether simply to perceive it or to also interact with it. Composition is also in line with the way information is currently distributed over the internet, and with the way we search, find, interact with and interpret this information when problem-solving. For instance, when looking for the best way to code a solution, we consult different web sources, combine their suggested solutions into one, and test whether or not this works for our situation.
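
As a minimal sketch of this idea in Java (the function names are my own illustration, not code from the talk), small single-purpose functions can be composed into one pipeline whose combined behaviour is easy to follow:

```java
import java.util.function.Function;

public class CompositionSketch {
    public static void main(String[] args) {
        // Each component has one purpose...
        Function<String, String> trim = String::trim;
        Function<String, String> lowercase = String::toLowerCase;
        Function<String, Integer> length = String::length;

        // ...and their composition produces the wanted behaviour.
        Function<String, Integer> normalizedLength =
                trim.andThen(lowercase).andThen(length);

        System.out.println(normalizedLength.apply("  Code Quality  ")); // prints 12
    }
}
```

Each piece can be understood, tested and replaced in isolation, while the pipeline as a whole reads as a single chunk.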

Object Oriented Programming

One way to implement composition is, of course, Object Oriented Programming, first proposed by Alan Kay in 1966, in which the system is represented by groups of instantiated classes, or nodes, that each have a single identity and purpose, and that communicate with one another to create the information flow representing the business process for which the system was designed. The nodes can inherit each other’s characteristics and actions, forming categories within a cluster. Alan Kay has a background in biology, and the way he designed OOP closely resembles the way neural networks and neurons process information.

In OOP the content of each node is shielded, so that you do not have to worry about what is inside the object (Alan Kay). The overall functionality of OOP systems is often defined more by the messaging between its objects than by the elements themselves. The objects should be easily substitutable (low coupling), but changing the messaging will result in big changes in the overall functioning of the system (as a result of high cohesion; Donella Meadows, 2008)[8]. Large systems lacking (proper) cohesion will have a high information flow complexity, as they are likely to have more connections due to a higher number of components performing similar but slightly different operations (Sallie Henry and Dennis Kafura, 1981)[9]. As a result, such systems will be harder to understand and maintain because of the sheer number of information flows (Eric Elliott, 2019). This is problematic because the number of flows often exceeds our mental capacity to process chunks simultaneously (7 ± 2; George Miller, 1956)[1].

Jarra argues that we should strictly adhere to OOP principles when implementing a system: create applications with high cohesion but low coupling, with each instantiation serving only one single purpose. This becomes more important as the size of the system increases, because information flow complexity will have a bigger impact on our understanding once we near the limits of our capacity to chunk information while building a mental model of the system.
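
A small sketch of what low coupling through messaging can look like in Java (the Notifier and OrderService names are hypothetical, my own illustration):

```java
// Callers depend only on the interface, so implementations stay substitutable.
interface Notifier {
    void send(String message);
}

class EmailNotifier implements Notifier {
    @Override
    public void send(String message) {
        System.out.println("Email: " + message);
    }
}

class OrderService {
    private final Notifier notifier; // low coupling: any Notifier will do

    OrderService(Notifier notifier) {
        this.notifier = notifier;
    }

    void placeOrder(String item) {
        // high cohesion: this class only coordinates order placement
        notifier.send("Order placed for " + item);
    }
}
```

Because OrderService only knows the Notifier interface, any implementation can be swapped in without touching the service, while changing the message itself would ripple through the whole system.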

Pure functions in monads

A way to reduce both the information flow complexity and the sheer size of a system is to use pure functions when designing and implementing it. A concept borrowed from functional programming, pure functions are methods that are 100% predictable: they always give the same output for a given input, whatever the context may be, and they produce no side effects. Pure functions produce no unexpected behaviour when called, and a call can be substituted by its result (referential transparency). Their output depends only on the value of their input, not on any external state. Pure functions reduce information flow complexity by minimizing the output flows and thus make the system easier to comprehend and maintain.
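
A minimal illustration in Java (my own example, not from the talk) of the difference between a pure and an impure function:

```java
import java.util.List;

public class PurityExample {
    // Pure: same output for the same input, no side effects.
    static int sum(List<Integer> numbers) {
        return numbers.stream().mapToInt(Integer::intValue).sum();
    }

    // Impure: depends on hidden state and mutates it.
    static int counter = 0;

    static int impureSum(List<Integer> numbers) {
        counter++; // side effect: observable change outside the function
        return sum(numbers) + counter; // result varies between calls
    }
}
```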

Pure functions can be implemented by using monads. A monad is a design pattern that allows you to structure a program more generically by automating away the boilerplate code needed by the program logic. It achieves this by accepting any type of input (including null, thus reducing the number of methods needed in boilerplate code), wrapping the input into a single data type of its own, resulting in a monadic value, and processing this monadic value using a procedure that composes functions that work on the data within the wrapper but again output monadic values. So instead of implementing boilerplate code in different objects, a category object is created that handles any possible input and shields the internal boilerplate code from the rest of the system, effectively chunking the information so that the system’s size reduces and it becomes easier to understand.
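
Java’s Optional behaves much like such a monad and can serve as a sketch (the findUser and findEmail functions are hypothetical): any input, including null, is wrapped into a monadic value, and flatMap composes functions that return monadic values, hiding the null-handling boilerplate inside the wrapper.

```java
import java.util.Optional;

public class MonadSketch {
    // Functions that accept plain values but return monadic (wrapped) values.
    static Optional<String> findUser(String id) {
        return id == null ? Optional.empty() : Optional.of("user-" + id);
    }

    static Optional<String> findEmail(String user) {
        return Optional.of(user + "@example.com");
    }

    public static void main(String[] args) {
        // ofNullable wraps any input (including null) into the monadic type;
        // flatMap composes the functions without null checks in between.
        String email = Optional.ofNullable("42")
                .flatMap(MonadSketch::findUser)
                .flatMap(MonadSketch::findEmail)
                .orElse("no email found");
        System.out.println(email); // user-42@example.com
    }
}
```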

Monads not only accomplish with pure functions what is normally done with side effects in imperative programming, but they also do it with a high degree of control and type safety (Eugenio Moggi, 1989). When using pure functions in monads is not possible, side effects should be isolated from the functions that are 100% predictable, to increase transparency.
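
A sketch of such isolation (again my own example): the calculation stays pure and predictable, while the side effects live at the edge of the system.

```java
public class SideEffectIsolation {
    // Pure core: 100% predictable and trivial to test.
    static double applyDiscount(double price, double rate) {
        return price * (1.0 - rate);
    }

    // Impure shell: the I/O side effect is isolated at the system's edge.
    public static void main(String[] args) {
        double discounted = applyDiscount(100.0, 0.25);
        System.out.println("New price: " + discounted); // the only side effect
    }
}
```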

Test Driven Design: States and Messaging by using Mock Objects

Finally, as the number of objects and the dependencies between objects within the system decrease because similar functionality is encapsulated and processed by pure functions in monads, it becomes more important to test whether the monads and their connections really perform as expected in all situations. For this, not only does unit testing of the involved classes need to be done, to check the different states of the objects, but their interdependencies should also be mocked and tested for correctness and reliability of the interactions and the information flow between the objects (Freeman & Pryce, 2009)[10].
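
Reusing the hypothetical OrderService sketch from above, a JUnit/Mockito test of the messaging, rather than the state, could look like this:

```java
import static org.mockito.Mockito.*;

import org.junit.jupiter.api.Test;

class OrderServiceTest {
    @Test
    void placingAnOrderSendsExactlyOneNotification() {
        // Mock the collaborator so we can test the messaging, not the object.
        Notifier notifier = mock(Notifier.class);
        OrderService service = new OrderService(notifier);

        service.placeOrder("coffee");

        // Verify the interaction: one message, with the expected content.
        verify(notifier, times(1)).send("Order placed for coffee");
    }
}
```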

Conclusion

Jarra finishes the talk by discussing a 3D fractal image, explaining that it symbolizes his ideal architecture, in which all possible chunks (lines, classes, dependencies, modules, applications) are capped at seven. This ensures that both the higher-level and the lower-level order within the system take the limits of human processing capacity into account. A nice idea.

Afterword

I think we can all benefit from more intuitive code and structure, as the time we can spend on documentation and knowledge transfer is severely limited. Most complex systems that I have worked on suffer from evolution through a changing context, and as a large part of our work consists of maintenance, reduced complexity in large systems will make my life a lot easier, especially when integration issues between systems arise.

I hope to have given Jarra’s abstract concepts more body. Thank you for taking the time to read this; if you have any questions, don’t hesitate to contact me.

  1. Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63(2), 81–97. doi:10.1037/h0043158. PMID 13310704.
  2. Chase, W. G., & Simon, H. A. (1973). Perception in chess. Cognitive Psychology, 4, 55–81.
  3. Gobet, F., de Voogt, A. J., & Retschitzki, J. (2004). Moves in Mind: The Psychology of Board Games. Hove, UK: Psychology Press.
  4. Arbesman, S. (2016). Overcomplicated. Portfolio.
  5. Rotman, B. (2008). Becoming Beside Ourselves: The Alphabet, Ghosts, and Distributed Human Being. London: Duke University Press.
  6. Graham, P. (2009). Hackers and Painters: Big Ideas from the Computer Age. O’Reilly Media.
  7. Cheng, E. (2018). The Art of Logic: How to Make Sense in a World That Doesn’t. Profile Books.
  8. Meadows, D. H. (2008). Thinking in Systems. Chelsea Green Publishing.
  9. Henry, S., & Kafura, D. (1981). Software structure metrics based on information flow. IEEE Transactions on Software Engineering, SE-7(5), 510–518.
  10. Freeman, S., & Pryce, N. (2009). Growing Object-Oriented Software, Guided by Tests. Addison-Wesley Professional.