GSoC Chronicles — Best Kept Code

Pulkit Mishra
Jun 27, 2020

The purpose of this blog is to document the work I have done on the Poor Man’s Rekognition project at CCExtractor in the first two weeks of Google Summer of Code. (TL;DR at the end.)

The work that has been done in these two weeks primarily focuses on improving the quality of the existing codebase and decreasing the technical debt.

Technical debt is, in my opinion, a beautifully crafted metaphor for all the software design choices that turn out to be sub-optimal in the long run and thus impose a huge cost on future development. Worth highlighting is the fact that it’s not just about the code: the architecture, documentation, and tests can all cause technical debt. There are interesting articles [1][2][3] for further reading on this issue.

I find it extremely sad and hilarious (yes, both at the same time) that many people do not realize the importance of having a proper process, of having a test framework and writing tests, and of building in quality by maintaining style and design consistency, both in code and in documentation. Skipping any of these just results in technical debt that keeps piling up and may eventually bring the entire project down.

(Remember, kids: the shortcuts taken today will slow you down later.)

With the intent all set for these two weeks, here’s how I put that existing technical debt to rest by paying it all off, once and (hopefully) for all. :)

The first step towards reducing the debt is to bring style consistency to the codebase. The go-to for this is obviously PEP-8. For those of you who don’t know, Python Enhancement Proposals, or PEPs, are essentially a set of documents that provide information to the Python community about new features for Python, its processes, or its environment. They are also the primary mechanism for collecting community input on issues and documenting Python design decisions. Among these, PEP-8, the document that everyone worships (though maybe they shouldn’t; more on that later), and PEP-257 lay out the conventions for code and docstrings respectively.

So should we just adopt PEP-8 word for word? Well, yes, but actually no! Why, you may wonder? Because PEP-8 itself says so.

A style guide is about consistency. Consistency with this style guide is important. Consistency within a project is more important. Consistency within one module or function is the most important.

However, know when to be inconsistent — sometimes style guide recommendations just aren’t applicable. When in doubt, use your best judgment. Look at other examples and decide what looks best. And don’t hesitate to ask!

There are a few things in it, such as the restriction on line length, which no longer make sense (at least to me). Thus, for Poor Man’s Rekognition we will follow PEP-8, but only as a guide.

Now, how do we go about codifying these conventions, and what tool do we use? You, my dear reader, ask the right questions at the right time. While reading about Python style checking I came across three major tools that are used for this purpose in various open-source organizations: pycodestyle (earlier known as pep8; read this if you are interested in some tech drama), flake8, and pylint.

Since flake8 is built on top of pycodestyle, we get pycodestyle anyway if we use flake8, which narrows the choice down to flake8 and pylint. Pylint is clearly more powerful and gives much finer-grained control than flake8. That’s great, but therein also lies the problem: Pylint is overkill. If all we want is to catch errors like undefined variables and ensure that the code is idiomatic Python, flake8 is the way to go. Beyond this, flake8 also supports plugins for a plethora of things, like docstring style and type checking, which Pylint can’t match.

So we have chosen flake8 to enforce style consistency across the project. In addition to the default lint checks that come with it, which include the pycodestyle, PyFlakes, and McCabe complexity checks, I have decided to also use the pep8-naming extension in order to check naming conventions.

The entire codebase now follows a strict code style, and the same will be enforced for every pull request that is made. Checking style conventions in the project is now as simple as:

flake8 --enable-extensions=pep8-naming

Other than this, two more design-related decisions have been taken. Poor Man’s Rekognition has a constants.py file inside the corelib module, and all global constants have been moved to it. Also, all API calls are supposed to return a dictionary with the necessary key-value pairs, and if an exception occurs while making the API call, they should return a dictionary with error as the key and the exact description of the error as the value.
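As a minimal sketch of this convention (the function name, fields, and model step here are hypothetical, not PMR’s actual code):

def detect_faces(image_path):
    """Hypothetical API method illustrating the return convention."""
    try:
        with open(image_path, "rb") as f:
            image_bytes = f.read()
        # ... imagine the face-recognition model running here ...
        return {"faces": [], "bytes_read": len(image_bytes)}
    except OSError as exc:
        # On failure: a dict with "error" as the key and the exact
        # description of the error as the value.
        return {"error": str(exc)}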

The second thing is setting up a logger. Logging information during the execution of any application can be extremely useful for understanding what led to a bug or a crash. Not only does it provide far more information than a simple stack trace, it also helps in reproducing bugs that occur on the user’s end simply by looking at the log files. Further, many issues relate to the specifics of a system’s hardware or software, so logging information like what kind of CPU or GPU the user has, or whether they are on Windows or some Linux distribution, can prove essential when debugging problems that only show up on a specific configuration.

Logging is to software what black boxes are to aeroplanes.

I know, you are probably thinking of using print statements to debug. And yes, I use them too. And they work great. BUT! (and as you can see, that’s a big but) they only work well as long as your code is small. Moreover, the ability to control how much to log and where to log it makes logging a much better choice than print statements.

Now that we have answered the what and why of implementing logging, it’s time to answer the how.

For this purpose, we decided to go with Python’s built-in logging library. Glog and Eliot are amazing in their own right and can surely be explored later; for now, Python’s inbuilt logger does the job well. To facilitate better debugging across the entire project, a separate logging module has been created in the repository with a custom logger class that inherits from Python’s built-in logger and overrides methods like debug, info, warn, and error. Objects of this class are instantiated in every Python file, and the extra customization that inheriting from the logging module provides lets us display custom messages and write them to a file as needed.
It is worth noting that info is used for function calls, warn for warnings that should be looked into but can be ignored for now, and error for exceptions that need to be looked into immediately. Debug is reserved for cases where the developer needs to hunt down errors.

The default format for output on both file and terminal is

format="%(asctime)s | %(levelname)s | %(message)s"

and it can be changed while initializing the object in each Python file. This is a good start for anyone who wishes to explore the more advanced concepts of logging in Python. Also, the decision to extend the existing logging module was taken after much consideration. I acknowledge that this design decision is controversial in the community, but for one, the creator of the logging module himself encourages logger inheritance, and I believe a lot can be achieved in the long run by extending the module this way. Attached are blogs and StackOverflow answers that were a great motivation for this part. [4][5][6]
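Here is a minimal sketch of what such a subclass can look like; the class name, handler choices, and override behaviour are illustrative, not PMR’s actual implementation:

import logging

class PMRLogger(logging.Logger):
    """Hypothetical logger inheriting from Python's built-in Logger."""

    def __init__(self, name, log_file="pmr.log",
                 fmt="%(asctime)s | %(levelname)s | %(message)s"):
        super().__init__(name)
        self.setLevel(logging.DEBUG)
        formatter = logging.Formatter(fmt)
        # Send every record to both the terminal and a log file.
        for handler in (logging.StreamHandler(), logging.FileHandler(log_file)):
            handler.setFormatter(formatter)
            self.addHandler(handler)

    def error(self, msg, *args, **kwargs):
        # An override can decorate the message before delegating.
        super().error("NEEDS ATTENTION: %s" % msg, *args, **kwargs)

logger = PMRLogger(__name__)
logger.info("similar image search called")
logger.error("request to the model server failed")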

The third thing was writing tests for the entire Django app. It is important to find bugs at the early stages of development and to write unit tests for them so that we don’t waste time later on. Moreover, tests also help verify that a new pull request or commit that is supposed to introduce a feature doesn’t accidentally break the existing build. On top of this, code coverage can be measured. This article is an all-encompassing source of knowledge for anyone new to writing tests for Django apps.

Now, as far as writing tests for models and forms is concerned, that is pretty basic, well known, and well documented. Django has an extensive test framework of its own that builds on Python’s unittest framework, and the Django documentation for testing is thorough and easy to follow. The basic rule is to create a file whose name begins with test, and the convention is to append the name of the module the file contains tests for. In this file, we create classes whose names begin with Test, followed by the name of the class they are testing. Each of these classes has a setUp method that, as the name says, establishes the initial state of the database. Further methods then run some commands and compare the final state of the database and the values returned against the expected state and values. It is worth noting that the test database created for these tests is automatically destroyed after they have been executed.
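A minimal sketch of that pattern, assuming a hypothetical model (the model name and fields are illustrative, not PMR’s actual schema):

from django.test import TestCase
from .models import Feedback  # hypothetical model

class TestFeedback(TestCase):
    def setUp(self):
        # Runs before each test method; Django creates the test
        # database for us and destroys it after the run.
        Feedback.objects.create(name="alice", is_correct=True)

    def test_feedback_is_stored(self):
        entry = Feedback.objects.get(name="alice")
        self.assertTrue(entry.is_correct)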

However, writing test cases for the REST API, especially for functions that call a deep learning model, is definitely trickier and needs the REST framework’s test API. This repository served as a good example of how it should be done, and it would not be wrong to say that it has been a guiding light when it came to writing tests for the views and main_api of PMR. It is worth highlighting that, due to the structure of Poor Man’s Rekognition, the tests for views are very closely knit with the tests for main_api, as the views module essentially calls methods in the main_api module internally. APIClient from the REST framework’s test module has been used: it replicates an API call and returns the response code and result, which are then evaluated to determine whether a test passes or fails. There were some hiccups while writing tests for the similar image search API, as it expects different file names than, say, the NSFW classifier or emotion recognition, but that has been taken care of.
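A minimal sketch of an APIClient-based test; the route and the exact failure behaviour are hypothetical, not PMR’s actual endpoints:

from django.test import TestCase
from rest_framework.test import APIClient

class TestNSFWClassifierAPI(TestCase):
    def setUp(self):
        self.client = APIClient()

    def test_post_without_image_returns_error(self):
        # APIClient replicates a real API call and hands back the
        # response code and body for assertions.
        response = self.client.post("/api/nsfw/", {}, format="multipart")
        # Per the project convention, a failed call should return a
        # dict with "error" as the key.
        self.assertIn("error", response.json())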

An important thing to note here is that all tests have been placed in a tests module, and all future tests are supposed to go right there too, following the rules and conventions stated above. Testing the most important parts of the codebase (views, forms, main_api, and models) is as simple as running this command:

python manage.py test tests

Lastly, it was important to make calls to the deep learning models independent of Django so that the REST API code can be shipped as a Python package via pip. This was done by separating the closely tied (as explained above) main_api and views modules. Initially, the views module, on finding a legitimate request, used to pass it directly to the relevant main_api method; now only the necessary information is extracted and sent on. Besides clearing the pathway for turning the repository into a pip package, this also makes testing the code in main_api and views much easier. The sketch below shows the idea.
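A hypothetical sketch of the split (the function names, field names, and response shape are illustrative, not PMR’s actual code):

# main_api.py: no Django imports, so it can ship as a standalone package
def classify_nsfw(image_bytes):
    # ... imagine the model inference here ...
    return {"nsfw_score": 0.0}

# views.py: extracts plain data from the request before delegating
from django.http import JsonResponse

def nsfw_view(request):
    image_bytes = request.FILES["image"].read()
    return JsonResponse(classify_nsfw(image_bytes))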

Other than this, several bugs were fixed, such as one in the feedback feature where multiple names were being returned. There was a logger warning in place, but nothing was done to handle the case. Now, when multiple names are returned, the first one is treated as the correct suggestion and the rest are ignored.

Further, the error handling in other places has been made more specific, especially when making requests to TensorFlow Serving and saving results to models. Whereas earlier only the general exception was caught, now HTTPError, ConnectionError, Timeout, TooManyRedirects, and RequestException are all caught and handled while making API requests, and IntegrityError and DatabaseError are handled when dealing with changes to the database. A sketch of the request-side handling is below.
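A minimal sketch of the request-side handling, assuming the requests library is used to talk to TensorFlow Serving (the URL and model name are illustrative, not PMR’s actual configuration):

import requests
from requests.exceptions import (HTTPError, ConnectionError, Timeout,
                                 TooManyRedirects, RequestException)

def query_model_server(payload):
    try:
        resp = requests.post(
            "http://localhost:8501/v1/models/pmr:predict",  # hypothetical endpoint
            json=payload, timeout=10)
        resp.raise_for_status()
        return resp.json()
    except (HTTPError, ConnectionError, Timeout, TooManyRedirects) as exc:
        # Each specific failure produces the error dict described above.
        return {"error": str(exc)}
    except RequestException as exc:
        # Fallback for any other requests-level failure.
        return {"error": str(exc)}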

A lot of work has been done, and almost everything that was mentioned in the proposal for these two weeks has been achieved. One small thing remains: pushing the newly written tests to Travis. Beyond that, improving the documentation, writing better integration tests, and writing better exception-handling statements is an everlasting process, and I will keep at it whenever I find time in the coming weeks on this journey towards maintaining the Best Kept Code.

TL;DR

Work Done

  1. Making existing code PEP8 compliant (Linked PR: 76)
  2. Setting up logger (Linked PR: 79 and 81)
  3. Writing Test Cases for the existing code (Linked PR: 84)
  4. Making calls to deep learning models independent of Django (Linked PR: 86)
  5. Several Bug fixes, making error handling statements more specific and fixing documentation (Spread over several PRs)

Work Remaining/Continuous Work

  1. Tests Need to be pushed to Travis (Issue: 95)
  2. Better Exception Handling (Issue: 96)
  3. Documentation (Issue: 64)
