
Step 4: Test Driven Development Start

My plan for this project is to use a version of Test Driven Development known as Given-When-Then. Basically, this requires identifying the state of the test world before a test (the Given part), then identifying the action to be taken (the When part). Finally, the expected results are defined (the Then part). All of these steps are set up before any code is even written. The idea is to make sure you really know what you are about to create and how to verify that your creation works as desired.
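The three parts above map directly onto the structure of a test function. Here is a minimal sketch (the account scenario is purely illustrative, not part of this project):

```python
# A hypothetical Given-When-Then test sketch; the scenario is illustrative.
def test_deposit_increases_balance():
    # Given: an account with a starting balance
    balance = 100

    # When: a deposit is made
    balance += 25

    # Then: the balance reflects the deposit
    assert balance == 125
```

Each section is marked with a comment, which keeps the intent of the test readable even as the code under test grows.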

Of course, this approach leaves open the entire issue of deciding what to add to your project at any point in the development process.

Features

A primary focus in software development is defining a specific set of “features” to be added to the project. We do not just blast out code hoping something useful will result. Good software development involves much more thinking than coding. We can identify specific components we know our project will need, or look at the project from the user’s point of view and start identifying actions the user should be able to take. The order of development is up to the development team. Some do “top-down” development, refining an application as they go. Others take a “bottom-up” approach where a collection of component parts is created, then integrated into the project, providing more and more functionality. There is no one “right” approach.

Since I am the “team” at present, I will do a mix of both approaches. This diary will show the reasoning behind each step I take.

Versioning

It is common in software development to assign specific version numbers to the project at points in the development. These versions mark milestones in the development, such as a substantial change to the interface. The most common format for version numbers is semantic versioning:

  • Major - the user interface has changed substantially, breaking the old interface.

  • Minor - new functionality has been added without changing old features.

  • Patch - bug fixes that do not change functionality.
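The bump rules above can be sketched as a small helper function (this helper is illustrative only; it is not part of the project code):

```python
# Illustrative helper showing how semantic version parts are bumped.
def bump(version: str, part: str) -> str:
    """Return a new semver string with the given part incremented."""
    major, minor, patch = (int(n) for n in version.split("."))
    if part == "major":
        # Breaking change: reset minor and patch
        return f"{major + 1}.0.0"
    if part == "minor":
        # New feature: reset patch
        return f"{major}.{minor + 1}.0"
    # Bug fix
    return f"{major}.{minor}.{patch + 1}"
```

Note that bumping a more significant part resets the less significant ones: for example, a minor bump of 1.2.3 gives 1.3.0, not 1.3.3.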

Development Approach

As much as possible, I will work this way:

  • Identify a new feature I want the software to provide

  • Write one or more tests that will confirm that the new feature works as desired.

  • Write code that implements this new feature, committing and incrementing the Patch level as each test passes.

  • When all the tests pass, commit the new code and increment the Minor level.

Deciding when enough functionality has been added to let the code go public will be the point in time where the Major level will be incremented.

First Code

Having worked through the development approach I will take, it is time to start real coding.

The application to be provided with this project will be named mmdesigner. It will be made available on PyPI so any Python developer can install it easily using pip. We will deal with publishing on PyPI later.

For now, we start off by creating the Python package structure:

$ cd ~/_dev/math-magik
$ mkdir mmdesigner
$ touch mmdesigner/__init__.py

At this point, the package can be imported into other Python code. That is not very useful with no real code available yet, so where do we start?

Feature 1: Check Version

Our first feature will simply report the current project version number. We will be using the Python Click package to manage the command-line interface, so we do not need to write much code to get this part working.

Note

Add click to the requirements.txt file and make sure it is installed.

Here is the test code:

tests/test_version.py
import mmdesigner
from mmdesigner import __version__


def test_version():
    """Return current application version string"""
    mm = mmdesigner
    assert mm.version() == __version__

Obviously, this is pretty simple, but it will work for now.

However, we are not done with this step. I also want the user to be able to check the version from the command line. That means we need to set up a basic application that takes a parameter asking for the version. By convention, this parameter will be --version.

Setting up the application involves adding two more files to the project.

mmdesigner/__main__.py
from .cli import cli

# python -m mmdesigner
if __name__ == '__main__':
    cli()
mmdesigner/cli.py
import os
import click
from mmdesigner import __version__


class Environment:
    """Context class holding state for cli commands"""
    def __init__(self):
        self.cwd = os.getcwd()
        self.part_count = 0
        self.assembly_count = 0
        self.model_path = "tests/test_data"
        self.model_name = "model"
        self.debug = False


pass_environment = click.make_pass_decorator(Environment, ensure=True)
cmd_folder = os.path.abspath(os.path.join(os.path.dirname(__file__), "commands"))


class CLI(click.MultiCommand):
    """Modular CLI class"""
    def list_commands(self, ctx):
        """Scan the command directory and list all cli commands"""
        rv = []
        for filename in os.listdir(cmd_folder):
            if filename.endswith(".py") and filename.startswith("cmd_"):
                rv.append(filename[4:-3])
        rv.sort()
        return rv

    def get_command(self, ctx, name):
        """Import a cli command file on demand"""
        try:
            mod = __import__(f"mmdesigner.commands.cmd_{name}", None, None, ["cli"])
        except ImportError:  # pragma: no cover
            return
        return mod.cli


@click.command(cls=CLI)
@click.version_option(__version__, "-v", "--version")
@click.option("--debug", is_flag=True, help="Enable debug output")
@click.option("--model_path", help="Path to model directory")
@click.option("--model_name", help="Name of model within model_path")
@pass_environment
def cli(ctx, debug, model_path, model_name):
    """primary CLI interface"""
    ctx.debug = debug
    if model_path is not None:
        ctx.model_path = model_path
    if model_name is not None:
        ctx.model_name = model_name
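Click ships a CliRunner helper that lets us exercise --version without spawning a subprocess. Here is a self-contained sketch using a stand-in command rather than the real cli (the version string is an assumption):

```python
import click
from click.testing import CliRunner

__version__ = "0.1.0"  # stand-in for mmdesigner.__version__


@click.command()
@click.version_option(__version__, "-v", "--version")
def cli():
    """Stand-in command mirroring the real CLI's --version handling."""


# Invoke the command the same way a shell user would
runner = CliRunner()
result = runner.invoke(cli, ["--version"])
print(result.output)
```

The runner captures the output and exit code, so a test can simply assert that the version string appears and the command exited cleanly.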

Test Coverage

With some real code in place, it is time to set up test coverage, which checks that every line of your code is exercised by at least one test. There is an old saying among programmers:

If you have not tested it, it does not work!

Adding coverage is fairly simple. We add three new packages to our requirements.txt file:

pytest-cov
coverage
coveralls

Next, we create a pytest configuration file that will add the parameters needed to activate coverage reporting.

pytest.ini
[pytest]
testpaths =
    tests
norecursedirs =
    .git
    venv
    build
    dist
    docs
    rst
    mmdesigner/mmdesigner.egg-info

addopts =
    -r a
    -v
    --cov
    --cov-config .coveragerc
    --cov-report term-missing

We also need a configuration file for the coverage tests:

.coveragerc
[run]
include = mmdesigner/*
omit = *test_*
branch = True

[report]
exclude_lines =
# Have to re-enable the standard pragma
    pragma: no cover
    if __name__ == .__main__.:
    pass

Now when you run tests, you will see a report on the lines covered and missed. Many projects work to cover 100% of their code, but doing so can be difficult. A high coverage percentage is a good indicator of high-quality testing.

We can generate a badge for the README file by using another free service: Coveralls.

Generating the badge will be done by Travis-CI by adding an after-success block to the .travis.yml file. All we need to do to set this up is log in on Coveralls and set up the project, then tell GitHub to allow Coveralls to access our project.

Todo

Add full description on activating Continuous Integration testing and badge generation in the appendix