Fast static website creation and deployment
Over the past few years, I’ve been building static websites from starter CSS frameworks such as Bootstrap and Foundation,
Get this custom version of iTunes from https://support.apple.com/en-gb/HT208079
For further reading, look at this article from 9to5mac.com.
In this case, the mobile apps call into the backend service without using Django's cookie or session authentication; instead, we use Django REST Knox for token authentication (Django REST Framework's built-in token authentication should not be used because it is a single, unencrypted token).
Nginx has to be configured to allow all of the GET, POST, PUT, PATCH, and DELETE methods from a non-web client. The configuration is inspired by Nginx configurations other developers have posted on GitHub, and we've extended it here:
https://gist.github.com/rhfung/12fced0c159572f5207a2da5b6ecdab1
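For a quick sanity check of that setup from a non-web client, something like the following curl session can exercise each method. The endpoint paths (/api/auth/login/ and /api/notes/1/), the credentials, and the JSON field extraction are placeholders for illustration, and Knox's default "Token" Authorization prefix is assumed; adjust all of it to your own project:
# Obtain a Knox token with basic credentials (the login view returns JSON containing "token").
TOKEN=$(curl -s -u alice:secret -X POST https://example.com/api/auth/login/ \
  | python -c 'import sys, json; print(json.load(sys.stdin)["token"])')
# Exercise the methods the Nginx configuration must let through from a non-web client.
curl -X GET    -H "Authorization: Token $TOKEN" https://example.com/api/notes/1/
curl -X PATCH  -H "Authorization: Token $TOKEN" -H "Content-Type: application/json" \
     -d '{"title": "updated"}' https://example.com/api/notes/1/
curl -X DELETE -H "Authorization: Token $TOKEN" https://example.com/api/notes/1/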
I have an existing Angular 1.x web application that I bundle using Webpack 1. Since Webpack 2.2 entered release candidates on December 14, 2016, with no further features left to add, it's the right time to upgrade my web application. To upgrade from Webpack 1 to Webpack 2, I followed the migration notes:
https://webpack.js.org/guides/migrating/
And I referred to the configuration docs for Webpack 2:
https://webpack.js.org/configuration/
The upgrade took less than two hours to complete for my project. In addition to upgrading Webpack, I had to update a few dependencies (sass-loader, babel-loader) and get some directly from the GitHub repository (extract-text-webpack-plugin), since a Webpack 2.2-compatible release hadn't launched yet.
Initially I ran into a problem where Webpack didn't run properly, but removing the node_modules directory and running npm install solved it.
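For reference, the whole dependency shuffle boils down to a few npm commands along these lines. The package versions and the GitHub path for extract-text-webpack-plugin are illustrative rather than the exact ones I pinned:
# Clear out the old dependency tree first (this fixed the broken Webpack run).
rm -rf node_modules
# Pull in the Webpack 2 release candidate and updated loaders.
npm install --save-dev webpack@2.2.0-rc.3 sass-loader babel-loader
# extract-text-webpack-plugin had no Webpack 2.2-compatible release yet,
# so install it straight from its GitHub repository (path shown is illustrative).
npm install --save-dev webpack/extract-text-webpack-plugin
# Reinstall everything else.
npm install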
Overall the consistency of the configuration file is better than in Webpack 1.
To solve the problem, we need to configure npm settings in our Dockerfile:
FROM ubuntu:16.04

# Install curl and refresh the CA certificates before fetching the NodeSource setup script,
# then install Node.js (which provides npm).
RUN apt-get update && \
    apt-get install --reinstall -y ca-certificates curl && \
    update-ca-certificates && \
    curl -sL https://deb.nodesource.com/setup_4.x | bash - && \
    apt-get install -y nodejs

# Point npm at the plain-HTTP registry and disable strict SSL checking.
RUN npm config set registry http://registry.npmjs.org/ && \
    npm config set strict-ssl false && \
    npm config set maxsockets 8 && \
    npm install --unsafe-perm --allow-root --ignore-scripts -d
# ...
Make sure to remove old installations of Python previously installed on your system, except for the one that ships with Mac OS X. For example, if you previously installed Python from Homebrew, the uninstall commands are:
brew uninstall virtualenv
brew uninstall virtualenvwrapper
brew uninstall python
If you installed these tools via the OS's Python (for example, the one that comes with the Mac), then:
sudo pip uninstall virtualenv virtualenvwrapper
A side effect of removing previous Python installations is that project-specific virtualenvs are removed as well. If you have created requirements.txt files for these environments, rebuilding them later should be straightforward.
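If you haven't captured those environments yet, it's worth doing before uninstalling anything. A minimal sketch, with a placeholder environment name and project path:
# Before removing the old Python installs: record each virtualenv's packages.
workon myproject
pip freeze > ~/projects/myproject/requirements.txt
deactivate
# Later, once pyenv and virtualenvwrapper are reinstalled (steps below):
mkvirtualenv myproject
pip install -r ~/projects/myproject/requirements.txt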
Since many versions of Python have shipped, it's wise to choose a recent release of 2.7 and 3.5. We'll use an environment tool, pyenv, to properly build and install the right version of Python. We don't use Homebrew because it doesn't list all available versions.
1. Install Homebrew for Mac if you don't have it.
/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
2. Install pyenv using Homebrew:
brew update
brew install pyenv
brew install pyenv-virtualenv
brew install pyenv-virtualenvwrapper
3. Let's configure pyenv correctly in your terminal's ~/.bash_profile (~/.bashrc on Ubuntu). Add the following lines if they are not already in the file:
# Python from pyenv
if which pyenv > /dev/null; then eval "$(pyenv init -)"; fi
if which pyenv-virtualenv-init > /dev/null; then eval "$(pyenv virtualenv-init -)"; fi
pyenv virtualenvwrapper
4. After installing the prerequisites, we'll use pyenv to install Python. Choose one of the following sets of steps, depending on which Python you wish to install. Also, if you are using certain packages such as bcrypt, use the alternative install steps.
Regular install steps for Python 2.7.
# Python 2
pyenv install 2.7.10
pyenv global 2.7.10
Steps for installing Python with bcrypt package (see bug report).
# Python 2
PYTHON_CONFIGURE_OPTS="--enable-unicode=ucs2" pyenv install 2.7.10
pyenv global 2.7.10
Regular install steps for Python 3.5.
# Python 3
pyenv install 3.5.1
pyenv global 3.5.1
5. To configure virtual environments for project-specific Python dependencies:
mkvirtualenv <your env name>
workon <your env name>
# ... do your work ...
deactivate
Gotcha: we have to use the pyenv versions of virtualenv and virtualenvwrapper. The non-pyenv versions won't work: they will stop your terminal from opening (see bug report).
Programmers conduct code reviews to ensure best practices are followed. Code reviews are useful primarily for enforcing good coding patterns: defensive code to handle invalid input, separation of logic, meaningful variable names, and use of proper libraries. Most importantly, code reviews allow a programmer to check the assumptions made by other programmers. Contrary to popular belief, code reviews do not reliably find software defects.
Most software defects are found by testing. Testing runs the software in a controlled environment: different inputs are provided to the software and checked against expected outputs. The software should also be checked against invalid input to ensure it fails gracefully. These inputs and outputs can be checked by manual or automated testing.
Manual testing only verifies that the software works at a specific point in time; after the next code change is made, the tests have to be performed again. Given that software changes several times a month, if not every day, manual testing requires significant time and effort.
Automated testing ensures that the software continues to work after any code change. Automated testing can be performed with unit tests or integration tests: unit tests check the smallest chunks of code, and integration tests check entire end-to-end systems. Although writing automated tests takes significant time up front, they save time in the long run by making new feature development more stable without breaking existing code.
A specific type of test is a canary test, which gives a higher degree of confidence than ordinary automated testing. The software is tested against a copy of real-world inputs; an engineer's hand-picked test inputs might have missed some assumptions that real-world inputs expose, and this type of testing catches them. For example, canary testing can exercise algorithm speed and memory usage. The limitations of canary testing are computational resources and company approval to access customer data.
Unit and integration tests are well known in the software engineering community. Unit tests run subsystems in isolation from other running subsystems, usually in a non-production environment. They ensure that the subsystem behaves according to predefined expectations through assertion statements (e.g., assertTrue, assertFalse, assertEqual). Integration tests verify that several subsystems composed together work according to predefined expectations, also in a non-production environment.
When deploying production software systems, data inconsistencies can be introduced into a database by bad user input or buggy code. Data inconsistencies aren't usually covered by unit and integration testing, because most developers assume clean data in non-production environments, and it's hard to imagine every sort of data inconsistency that can arise. To ensure a software system is running correctly in production, live data audits should run on a production server. Live data audits verify expectations about the data to ensure data integrity; examples are internal-facing dashboard web pages and batch reports run on a production server.
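As a sketch of what one such audit might look like, the following cron-driven batch report flags orders whose line items no longer add up to the stored total. The table names, query, and email address are hypothetical, and it assumes a PostgreSQL database and a working mail command on the production host:
#!/bin/sh
# Hypothetical nightly data audit: find orders whose line items do not sum to the order total.
psql "$DATABASE_URL" --no-align --tuples-only -c "
    SELECT o.id
    FROM orders o
    JOIN order_items i ON i.order_id = o.id
    GROUP BY o.id, o.total
    HAVING SUM(i.amount) <> o.total;" > /tmp/order_audit.txt
# Only send the report when the audit finds inconsistent rows.
if [ -s /tmp/order_audit.txt ]; then
    mail -s "Order total audit: inconsistencies found" dev-team@example.com < /tmp/order_audit.txt
fi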
Another approach to diagnosing problems on production systems is to ensure debuggability. Unlike a local environment, production servers cannot be interrupted during execution with breakpoints and stepped through. To figure out what the code is thinking, developers have to write debugging statements (e.g., print statements, log commands) that are captured in log files. These log files are made accessible to developers so they can trace unexpected branches of execution.
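In practice that often amounts to nothing fancier than searching the logs. A hypothetical example, with a made-up log path and request identifier:
# Trace a single request through a production log file without attaching a debugger.
grep "request_id=4f2a" /var/log/myapp/app.log | tail -n 50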
In conclusion, building software systems requires more than unit and integration testing to ensure code quality. Live data audits and debuggability are just as important to ensure code is working.
I have different strategies for backing up source code and supporting resources (e.g., PSDs, photos, and documents) on a daily basis. Backing up source code is easy with GitHub and Bitbucket, two cloud-based Git hosting services. GitHub works better for open-source projects, since it doesn't offer many private repositories in its free tier. Bitbucket, on the other hand, lets a developer create several private repositories for oneself.
Supporting resources should be backed up outside of git. Git is primarily designed to diff source code files and doesn’t store binary files efficiently. Cloud-based solutions from Dropbox and Google Drive are better suited for supporting resources. Dropbox is suitable for unshared resources. Google Drive is ideal when collaborating with others.
I also create archival backups, both on a periodic basis and for large files. Periodic archives take a snapshot of my work at a given point in time. Large files aren't suitable for cloud-based storage because of slow Internet transfer rates and limited free-tier capacity.
I make the archival backups using an external hard drive that I connect only when making backups. I format the external hard drive for both Windows and Mac using the exFAT format. exFAT was built for high-capacity SD cards and works fine for large-file backups; it is patented by Microsoft and licensed for the Mac. exFAT is better than FAT/FAT32, which can't store large files; Windows NTFS, which the Mac cannot write (using native drivers only); and Mac HFS, which Windows cannot read (using native drivers only).
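On the Mac, formatting the drive as exFAT is a one-line diskutil command. The disk identifier below is a placeholder; check diskutil list first, because this erases the whole disk:
# Identify the external drive, then erase and format it as exFAT with the volume name "Backup".
diskutil list
diskutil eraseDisk ExFAT Backup /dev/disk2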
To create my archived copies, I use 7-Zip (Keka on Mac) to compress entire directories containing git-cloned repositories, Dropbox, and Google Drive. I name the 7-Zip archives with the date I make the archive, and save them directly to the external hard drive. This way, if I accidentally delete a file and it isn't in the most recent archive, I can still find it in an earlier archive.
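The archival step itself reduces to a couple of commands, assuming the p7zip command-line tool is installed and the external drive mounts at /Volumes/Backup (the source directories are placeholders for your own layout):
# Install the 7-Zip command-line tool (Keka offers a GUI for the same format).
brew install p7zip
# Compress the working directories into a date-stamped archive on the external drive.
7z a "/Volumes/Backup/projects-$(date +%Y-%m-%d).7z" ~/code ~/Dropbox ~/Google\ Drive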
By now you may be wondering why I go to all this trouble when other backup solutions already exist with less effort; the Mac, for example, has Time Machine. With my approach, I know exactly what I have intentionally backed up and what I've excluded. And although cloud-based solutions are useful for daily backups, they are not archival copies: uninvited users can gain entry to your account and delete data without your consent (e.g., when Dropbox disabled password authentication). Archives should be hard to access, and an external hard drive that stays unplugged is sufficiently hard to access.