Python Import Standards

Detailed documentation (PEP 328) on Python's importing standards is provided here.

In this post, we will go over the following concepts:

  1. Multi-line import
  2. Absolute import
  3. Relative import

Multi-line import

The first rule is that multi-line imports can be wrapped in parentheses.

from Tkinter import (Tk, Frame, Button, Entry, Canvas, Text,
    LEFT, DISABLED, NORMAL, RIDGE, END)

Absolute import

If you have a private module named 'string', when you do,

import string

you would not know whether Python's standard 'string' library is being imported or your private module is being imported. Absolute imports are recommended to avoid this ambiguity.
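One way to diagnose this kind of shadowing (a small sketch of mine using Python 3's standard importlib machinery, not something from PEP 328) is to ask the import system where a module would be loaded from:

```python
import importlib.util

# Ask the import system where 'import string' would load the module from.
# If a local string.py shadows the standard library, the origin path reveals it.
spec = importlib.util.find_spec("string")
print(spec.origin)  # filesystem path of the module that would be imported
```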

In Python 3, absolute imports are the default. In Python 2, however, you need to enable them manually.

from __future__ import absolute_import

In the following package structure,

package/
    __init__.py
    subpackage1/
        __init__.py
        moduleX.py
        moduleY.py
    subpackage2/
        __init__.py
        moduleZ.py
    moduleA.py

when you want to import a function defined in moduleX from another module, you would use,

from package.subpackage1.moduleX import a_function

Even though this import statement resides deep inside the package, Python will search upward through the directory structure until it finds a package named 'package', and then traverse down.

This absolute import syntax is more explicit and readable, and it has been suggested to prefer absolute imports over the traditional implicit relative imports.

Relative import

When you use relative imports, the import statement no longer requires the actual name of the package. Instead, it relies on the physical structure of the package.

In the following package structure,

package/
    __init__.py
    subpackage1/
        __init__.py
        moduleX.py
        moduleY.py
    subpackage2/
        __init__.py
        moduleZ.py
    moduleA.py

Assuming that the current file is either moduleX.py or subpackage1/__init__.py, the following are correct usages of relative imports:

from .moduleY import spam
from .moduleY import spam as ham
from . import moduleY
from ..subpackage1 import moduleY
from ..subpackage2.moduleZ import eggs
from ..moduleA import foo
from ...package import bar

There are some cases where a relative import has benefits over an absolute import. The most important is being able to rearrange the structure of large packages without having to edit the sub-packages. This is highly encouraged when building a Django web application, since it lets you reuse your Django apps in a modular manner.
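To illustrate the rearranging benefit, here is a small self-contained sketch (package and module names are hypothetical) that builds a package whose submodule uses a relative import, renames the package directory, and imports it again without editing the submodule:

```python
import os
import sys
import shutil
import tempfile
import importlib

# Build a tiny package on disk whose submodule uses a relative import.
root = tempfile.mkdtemp()

def make_pkg(name):
    pkg = os.path.join(root, name)
    os.makedirs(pkg)
    open(os.path.join(pkg, "__init__.py"), "w").close()
    with open(os.path.join(pkg, "helpers.py"), "w") as f:
        f.write("def greet():\n    return 'hello'\n")
    with open(os.path.join(pkg, "core.py"), "w") as f:
        # relative import: the package name is not hard-coded anywhere
        f.write("from .helpers import greet\n")

make_pkg("oldname")
sys.path.insert(0, root)
core = importlib.import_module("oldname.core")
print(core.greet())  # prints 'hello'

# Rename the whole package; core.py needs no edits because it used '.'
shutil.move(os.path.join(root, "oldname"), os.path.join(root, "newname"))
importlib.invalidate_caches()
core2 = importlib.import_module("newname.core")
print(core2.greet())  # still prints 'hello'
```

With an absolute import (`from oldname.helpers import greet`), the rename would have broken core.py.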



Django + Gunicorn + NGINX Deployment

Updated from my previous post

Pain in Django deployment

Django is a Python-based web framework known as a batteries-included framework. Development with Django is straightforward once you understand its MTV style of web framework design. However, the biggest struggle I experienced was the deployment step. In this blog post, I would like to note down the key concepts of a Django deployment and why Gunicorn and NGINX are used together.

Why use Gunicorn and NGINX together

First, all Python-based web applications use WSGI (Web Server Gateway Interface) to connect incoming requests with your Python web application. When you run the Django development server, it is actually a WSGI server implementation. Gunicorn is one of many implementations of a WSGI web server. When the Gunicorn web server is running, its worker processes handle all the incoming and outgoing requests. If Gunicorn can handle all the requests, why do we even need NGINX?
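For reference, the WSGI interface Gunicorn's workers call into is just a callable. This is a minimal illustrative sketch of the shape of the `application` object that Django's generated wsgi.py exposes (not Django's actual implementation):

```python
# A minimal WSGI application: Gunicorn worker processes invoke this callable
# for every request, exactly as they invoke the one in Django's wsgi.py.
def application(environ, start_response):
    body = b"Hello from WSGI"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]
```

You could serve a file containing just this with `gunicorn module_name:application`, which is the same pattern the Django command later in this post uses with the generated wsgi.py.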

Before we discuss why NGINX is necessary in a Django deployment, let's go through what NGINX is. NGINX is a well-known, robust web server which can efficiently handle many concurrent requests, SSL encryption, load balancing, reverse proxying, serving static assets, and much more. NGINX can be thought of as the first gatekeeper of your server. In a Django deployment, NGINX is used for all of the aforementioned features. Since Gunicorn's sole purpose is to serve the application, static files such as CSS and JavaScript must be served through another medium such as NGINX.

What Django + Gunicorn + NGINX looks like

As I pointed out earlier, NGINX is the first gatekeeper: it intercepts all incoming requests. By configuring a reverse proxy, all requests arriving on port 80 are forwarded to the Gunicorn web server. When the forwarded requests reach Gunicorn, its worker processes trigger the actual Python logic in our Django application.

When the Django application sends a response back to the client, the client browser may then ask the backend for static assets. These static asset requests' URLs start with the /static/ prefix, which triggers NGINX to serve the requested static assets directly from the configured directory.

Django deployment setting

When you are running the Django development server, all static asset serving is done automatically. However, at the deployment stage, you want all static assets to be handled by another web server.

The key concepts in Django deployment are setting up how your static assets' URLs will be formed, and setting up where all the static content will be collected for the deployment stage.

In your settings.py,

DEBUG = False

# Static files (CSS, JavaScript, Images)
STATICFILES_DIRS = (os.path.join(BASE_DIR, 'static/'),)
STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'staticroot/')
  • DEBUG: You never want to leak stack-level debug messages to clients, for security purposes.
  • STATICFILES_DIRS: This is the directory you will be using for static files in the development stage.
  • STATIC_URL: This is the URL prefix which will be used for all static contents. NGINX needs to know what this URL would look like, in order to redirect client requests to the static assets directly.
  • STATIC_ROOT: This is the directory where you will tell Django to collect all of the application specific static files in one place. This directory will be configured on NGINX to serve the static files directly from NGINX.

After updating your settings.py, run

python manage.py collectstatic

This command collects all of your Django application's static assets, including the Django admin static files, into the directory you provided in STATIC_ROOT.

Now that Django is good to go, how do we set up Gunicorn to be our WSGI web server for Django?

Gunicorn setup

Install Gunicorn through pip. I would recommend installing it inside your virtual environment.

pip install gunicorn

Locate your wsgi.py file in the Django application directory, and start a Gunicorn server.

gunicorn wsgi:application&

This will run your web server listening at http://localhost:8000. What this command does is: Gunicorn finds our Django application instance inside the wsgi.py file and runs it, and the trailing & puts the process in the background.

At this moment, on your local machine, you can start accessing your Django application at http://localhost:8000. Since it is listening on a private IP, no other hosts on the same subnet can access it yet. Also, static assets will not load yet.

This is where NGINX comes in.

NGINX setup

When you are configuring NGINX, you need to do two things:

  • Forward all HTTP requests (port 80) to where the Gunicorn server is listening
  • Direct all URLs with the /static/ prefix straight to the static content

You can install NGINX through apt-get.

sudo apt-get install nginx

Go into /etc/nginx/sites-available and modify the default file using

sudo vim default

Modify the file.

server {
  listen 80;
  server_name localhost;

  location / {
    proxy_pass http://localhost:8000;
  }

  location /static/ {
    alias /home/pi/Projects/home_automation_project/home_automation/staticroot/;
  }
}
What this configuration says is that NGINX will listen on port 80 (the HTTP request port) and pass all the traffic through to the address Gunicorn is listening on.

The directory set in the /static/ alias should be the absolute path of the directory you configured in Django's STATIC_ROOT variable.

After saving, run

sudo service nginx restart


For further robust deployment

This blog post only covered how Django can be served to the client and how its static content can be served at the deployment stage. SSL encryption would be the next thing to consider, to hide the content of requests and responses as they travel between the front-end and the back-end. Another improvement would be automatically starting NGINX and Gunicorn after a reboot.

Sunrise LED Strip Simulation Using Raspberry Pi

What to buy

  1. Raspberry Pi
  2. Adafruit digital RGB LED strip
  3. 5V power block
  4. power adapter
  5. breadboard and GPIO cobbler

When you are looking for LED strips, you will find two types: digital RGB LED strips and typical RGB LED strips. The difference is that a digital RGB strip needs two inputs for light control, whereas a typical RGB strip needs three PWM-controlled inputs. Adafruit's digital RGB LED strip provides PWM control embedded in each LED chipset, so you do not have to worry about PWM output control from your Raspberry Pi.

When powering the LED strip, you need separate power sources for the Raspberry Pi and the LED strip. The LED strip must be powered by a separate 5V power block. Since the Raspberry Pi and the LED strip have to share a common ground, a power adapter which separates the power and ground cables allows you to connect the grounds with ease.

The circuit diagram is as follows.


Raspberry Pi and LED strip connections

To control the LED strip from the Raspberry Pi, you connect to the SPI pins on the GPIO header. MOSI on the Raspberry Pi connects to DI on the LED strip, and SCLK connects to CI. Make sure you are not connecting to DO/CO on the LED strip, which is the output end.

A more detailed explanation is provided here.

Python Dependency

The BiblioPixel library provides a simple implementation of LED strip control. For SPI control from the Raspberry Pi, you first need to enable hardware SPI by running raspi-config from the command line. All pip dependencies are as follows.


Code Implementation

There were two parts to the code implementation.

  1. Turning the LED lights on gradually from red to a bright white color.
  2. Listening for a button press to turn off the light.

My initial attempt used the threading module to create two threads for the aforementioned tasks. Threading worked, but I had to change my implementation for two reasons.

  1. Threading does not give you precise timing control over the LED updates, since thread execution depends on the scheduler.
  2. An exception thrown by the button listener on a button-click event could not be caught from the main process. A child thread gets its own stack, and exception handling between the main thread and a child thread was not simple enough.

Therefore, I simulated the threading procedure by continuously polling for a button press after each LED light update. This seemed to work, but a long time.sleep() made the button listener unresponsive while Python was sleeping. I then broke the long time.sleep() into small frames of time.sleep() plus button polling, so the button listener keeps functioning even during the waiting stage.
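The sleep-slicing trick described above can be sketched as a small helper (the names and the 50 ms polling step are my own arbitrary choices for illustration):

```python
import time

def interruptible_sleep(total_seconds, should_stop, step=0.05):
    """Sleep in small slices, polling should_stop() between slices,
    so a button press is noticed even during a long wait."""
    waited = 0.0
    while waited < total_seconds:
        if should_stop():
            return True   # interrupted early
        time.sleep(step)
        waited += step
    return False          # slept the full duration
```

In the alarm code, `should_stop` would be the button-polling function, and each LED update would be followed by `interruptible_sleep(delay, button_is_clicked)` instead of one long `time.sleep(delay)`.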

My final attempt implements the LED lighting with multiprocessing.Process() and constantly polls for the button press in a while loop in the main process. This way, multiprocessing allows a constant rate of change in the LED lighting, lets me kill the process on a button press, and turns off all the lights at the end.

from __future__ import division
import time
import sys
import multiprocessing

import RPi.GPIO as GPIO

from bibliopixel.led import *
from bibliopixel.drivers.LPD8806 import *


NUMBER_OF_LED = 32      # adjust to the number of LEDs on your strip
STEP_DELAY = 1          # seconds between brightness steps


class SunriseSimulator(multiprocessing.Process):
	"""This class interacts with an LPD8806 digital RGB strip to simulate a sunrise effect"""
	def __init__(self, name):
		multiprocessing.Process.__init__(self)
		self.name = name
		self.driver = DriverLPD8806(NUMBER_OF_LED, c_order=ChannelOrder.GRB)
		self.led = LEDStrip(self.driver)

	def run(self):
		"""turns on all led bulbs gradually from dim red to bright white light"""
		led = self.led
		#RGB pixel status
		r, g, b = 0, 0, 0
		#gradual red light increase
		for red in range(256):
			r = red
			led.fillRGB(r, g, b)
			led.update()
			time.sleep(STEP_DELAY)
		#green and blue light intensity increase
		for n in range(256):
			g = n
			b = n
			led.fillRGB(r, g, b)
			led.update()
			time.sleep(STEP_DELAY)
		#keep the light on until the process is terminated
		while True:
			time.sleep(1)

	def turn_off_led(self):
		"""turns off all the led lights"""
		self.led.all_off()
		self.led.update()


class ButtonListener():
	def __init__(self):
		GPIO.setmode(GPIO.BCM)

	def activate_button(self):
		GPIO.setup(18, GPIO.IN, pull_up_down=GPIO.PUD_UP)

	def button_is_clicked(self):
		input_state = GPIO.input(18)
		if input_state == False:
			print 'Button is clicked'
			return True
		return False


if __name__ == "__main__":
	try:
		#initialize led controller and button listener
		sunrise_instance = SunriseSimulator("LED_Controller")
		button_listener = ButtonListener()
		button_listener.activate_button()
		#start turning on led with a separate process
		sunrise_instance.start()
		#listen for the button click event
		while not button_listener.button_is_clicked():
			time.sleep(0.1)
		#terminate the process and turn off all the lights
		sunrise_instance.terminate()
		sunrise_instance.turn_off_led()
	except KeyboardInterrupt:
		sunrise_instance.terminate()
		sunrise_instance.turn_off_led()

The following video shows the LED strip being triggered by an HTTP request through the Django web framework and turned off with a button press.

Scheduled Task

When a registered user submits their alarm schedule through an HTML form, the form data is saved to a Django model which is then associated with the user model. After linking the schedule to the user, the python-crontab module is used to modify the Linux crontab. The current implementation removes all previously scheduled cron jobs and writes a newly scheduled job which executes the LED control Python code.

from crontab import CronTab

def update_crontab(minute, hour, day, command):
    day_map = { "1" : "MON",
                "2" : "TUE",
                "3" : "WED",
                "4" : "THU",
                "5" : "FRI",
                "6" : "SAT",
                "7" : "SUN"}
    users_cron = CronTab(user='pi')
    # remove previously scheduled jobs before writing the new one
    users_cron.remove_all(comment='LED lighting schedule')
    job = users_cron.new(command=command, comment='LED lighting schedule')
    job.minute.on(minute)
    job.hour.on(hour)
    job.dow.on(day_map[day])
    users_cron.write()

Alarm Clock Project Summary

Why I started this project

In November 2015, I started an IoT project which simulates a sunrise at a scheduled time. I really wanted a practical project that I could use in my daily life to improve its quality.

While researching viable solutions, I found the Philips Hue smart LED bulb, the LIFX LED bulb, and various other choices. I did not choose any of them, for a few reasons. Philips smart bulbs require a central hub that you connect to in order to control all the bulbs, while LIFX has a Wi-Fi module built into the bulb itself. Both options were too expensive to start with, and I didn't want something that already worked out of the box. I wanted more hands-on action that would give me full control of my soon-to-be sunrise alarm.


After a few weeks of research, I decided to use a Raspberry Pi as my web server, controlling a connected RGB LED light strip. The following technologies were used for this project:

  • Raspberry Pi: ARM-based Linux computer
  • Adafruit RGB LED strip: controllable via GPIO on Raspberry Pi
  • NGINX: web server
  • Gunicorn: WSGI proxy web server
  • Django: alarm web application
  • MySQL: record keeper
  • Bootstrap: UI design

The GitHub repository for this code is provided here.


This YouTube clip shows a sped-up version of the sunrise simulation for demo purposes.

A simple authentication page is present to limit other users from triggering or scheduling my alarm.


Currently, it shows past scheduling history for development purposes.


A simple alarm configuration page is displayed.



To be added

Even though this IoT application is already usable, I would like to add a few more features:

  • Allow the user to configure multiple alarms.
  • Display only active alarms on the status page.
  • Allow the user to delete or deactivate a selected alarm.
  • Auto-start NGINX and Gunicorn in the event of a crash.
  • Scrape a news feed and read it out over a speaker when the alarm triggers.

REST Architecture Best Practice

Why Use REST Architecture:

For a new project I will be working on, the REST architecture will be used mainly to decouple the front-end and the back-end. By doing this, the front-end may be expanded from a web application to a mobile application in the future.

The Six REST Architecture Constraints:

  • Uniform interface
    • Decouples the front-end and the back-end.
    • Resources are exposed as representations (HTML, XML or JSON), and can be manipulated by the clients.
    • HATEOAS (Hypermedia as the Engine of Application State)!!!
  • Stateless
    • Server does not maintain a session.
    • The client must include all information for the server to fulfill a request. (resending state as necessary if that state must span multiple requests)
  • Cacheable
    • Responses must implicitly or explicitly define themselves as cacheable. This eliminates some client-server interactions, further improving scalability and performance.
  • Client-Server
    • Client is not concerned with data storage.
    • Server is not concerned with the user interface or user state.
    • As long as the interface is not altered, servers and clients may also be replaced and developed independently.
  • Layered System
    • A client cannot tell whether it is connected directly to the end-system or not.
    • Intermediary servers may improve system scalability.
  • Code on Demand (optional)
    • Servers may transfer logic such as Java applets or JavaScript code to the client to temporarily extend or customize functionality.

If any of the non-optional constraints are broken, the service cannot strictly be referred to as RESTful.

REST Tips:

  • Use HTTP verbs (GET, POST, PUT and DELETE) to mean something.
  • Resource names should be nouns.
  • Create fine-grained resources. Start by providing CRUD functionality for small resources, then move on to aggregated functions.
  • Consider connectedness. Provide hypermedia links in the response. !!!
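As a rough illustration of the verb-to-CRUD mapping in the tips above, here is a framework-free sketch (the `alarms` resource and in-memory storage are hypothetical):

```python
# Dispatch HTTP verbs to CRUD operations on a hypothetical 'alarms' resource.
alarms = {}
next_id = 1

def handle(method, path, body=None):
    """Return an (http_status, payload) pair for a verb + resource path."""
    global next_id
    parts = [p for p in path.strip("/").split("/") if p]
    if not parts or parts[0] != "alarms":
        return 404, None
    if method == "POST" and len(parts) == 1:        # create
        alarm_id = next_id
        next_id += 1
        alarms[alarm_id] = body
        return 201, alarm_id
    if len(parts) == 2:
        alarm_id = int(parts[1])
        if alarm_id not in alarms:
            return 404, None
        if method == "GET":                         # read
            return 200, alarms[alarm_id]
        if method == "PUT":                         # update (full replace)
            alarms[alarm_id] = body
            return 200, body
        if method == "DELETE":                      # delete
            del alarms[alarm_id]
            return 204, None
    return 405, None
```

The point is that the verb carries the intent (create/read/update/delete) while the noun-only path names the resource.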

Blogging migration from Jekyll to WordPress

Why I started Blogging

Over a year ago, I decided to start a personal blog to keep track of my development studies. I needed a blogging platform that would let me easily create new posts without the hassle of maintaining a blog. After a few days of research, GitHub Pages' Jekyll and WordPress.com's WordPress came into consideration.

Jekyll vs WordPress

Jekyll Pros

  • GitHub Pages provides free hosting of one Jekyll project.
  • You can easily clone another person's GitHub repository and start using a pre-made website as your template.
  • A Jekyll project is very straightforward in terms of project structure.

Jekyll Cons

  • Posts are written in markdown text format. It is very versatile, yet writing a post is troublesome while you have not yet memorized the markdown syntax.
  • Enabling tag categorization requires you to create a Liquid template page, which can be annoying when you simply want to write a post.
  • A proper development setup requires working with command-line tools such as git, gem, jekyll and bundler. If you are not familiar with command-line tools, Jekyll can have a steep learning curve.

WordPress Pros

  • It is a true CMS (content management system). Once you have your desired theme and menu appearance set up, you only need to start writing your blog posts from then on.
  • WordPress.com allows free hosting of one website. This eliminates the deployment step of a typical web development cycle.
  • You can search, manage and modify free themes from the GUI.
  • Dockerized WordPress makes it very easy to spin up your first WordPress website.

WordPress Cons

  • Unless you are hosting your own WordPress website on a personal server, modification of your free theme is very limited.
  • When you are hosting it on your own server, SSL should be set up before deploying the website. TLS encryption is required to keep your login password hidden from the world.

Why I decided to migrate from Jekyll to WordPress after one year

After comparing the two, I was really attracted to GitHub Pages' ease of deployment. All I wanted back then was to simply write. I cloned an existing template from another person's GitHub repository, and my new blog was ready to go right away.

About a year has passed, and my blogging requirements have changed a bit. I want to categorize and manage posts with more ease. Since I was planning on using Amazon AWS, WordPress seemed like a nice self-manageable blogging platform.

Future Plan

In the near future, I will deploy a self-maintained WordPress website on AWS. I have not yet set up proper SSL on my AWS EC2 instance, so I will be using WordPress.com's free hosting service as a temporary solution.

Meanwhile, in my local environment, I used Docker to spin up my WordPress development environment. The Dockerfile used for my blog can be found in my GitHub repository. Docker inherently keeps the development environment and the deployed environment consistent, which means I can deploy my blog right away once the initial configuration is finished in my development environment.