Lambert Labs Renew AWS Select Partner Status


Lambert Labs has attained AWS Select Partner status for the second year running. As part of a strict selection process, Lambert Labs satisfied criteria including having a minimum of eight AWS-accredited individuals on our team and generating $1,500 of new monthly recurring revenue for AWS over the course of the year.

To find out more about our cloud computing expertise, read about our AWS consulting services. You can also view our AWS Partner Solutions Finder profile.

Lambert Labs named as one of Top B2B Firms for software development in Britain by Clutch


We’ve been recognised by Clutch as one of the Top B2B Firms for software development in Britain!

Clutch is an independent B2B review and ratings platform. Various past and present clients of Lambert Labs have been kind enough to give us outstanding reviews during 2020 and we’re pleased that this has led to us receiving this award.

To find out more about our previous projects and reviews, check out our Clutch profile. And if you need help with your software development, get in touch with us.

Explaining FastAPI scopes


At Lambert Labs we are always interested in trying out new Python frameworks that are well built and well maintained. We recently started using a framework called FastAPI. FastAPI is a server-side Python framework used to build APIs for consumption by a variety of clients. As the name suggests, FastAPI is high-performance – it is regarded as one of the fastest Python frameworks available. In fact, in independent benchmark tests, FastAPI outperforms other popular Python web frameworks including Django and Flask!

FastAPI ranks 8th overall in these benchmark tests

In this particular test, all the frameworks were put through their paces fetching data from a database they had no prior knowledge of. They had to read, modify and sort the fetched data as quickly as possible: the faster a framework does this, the higher its performance.

What is FastAPI?

As stated above, FastAPI is a framework used to build API services, which will be consumed by users. It requires Python 3.6+. According to FastAPI’s creator, the framework was designed to take advantage of modern Python features (type hints, for example) and to be detailed and easy to use, making the developer experience smooth. And that is not to mention the performance which, as the test above shows, is excellent.

Explaining FastAPI

Many of the frameworks that _appear_ faster than FastAPI aren’t directly comparable because of differences in the features they offer. Frameworks like Sanic and Starlette (on which FastAPI is based) do not have the full data validation or JSON serialization features that FastAPI offers, and adding them manually would introduce the same (or greater) overhead. So when you narrow the comparison to other fully featured web frameworks like Django and Flask, you can see why FastAPI is growing in popularity.

Data validation

As alluded to above, one of FastAPI’s great features is data validation. This comes from Pydantic, the second dependency that FastAPI is built on top of (in addition to Starlette). Pydantic enforces data types at run time: when a consumer of an API endpoint sends data of the wrong type to the server, the server can respond with helpful error messages instead of attempting to map the data to the database and potentially causing an I/O operation failure.
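As a minimal sketch of this behaviour (the `Item` model here is hypothetical, not part of the walkthrough below), Pydantic rejects a payload whose fields cannot be coerced to the declared types:

```python
from pydantic import BaseModel, ValidationError

class Item(BaseModel):
    name: str
    price: float

# A well-typed payload is accepted ("9.99" is coerced to a float)
item = Item(name="Widget", price="9.99")

# A payload with the wrong type raises a ValidationError naming the bad field
try:
    Item(name="Widget", price="not a number")
except ValidationError as exc:
    print(exc)
```

In a FastAPI endpoint this check happens automatically, and the framework turns the ValidationError into a structured error response for the client.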

Built on standards

FastAPI is robust because of the standards it adheres to and uses; namely OpenAPI and JSONSchema. OpenAPI is a widely adopted specification for defining a language-agnostic standard for creating APIs. This includes standards defining path operations, security dependencies, query parameters and more. Adopting a common standard that is widely known by other developers allows FastAPI applications to be more scalable and makes the development experience a more consistent one. 

JSONSchema validates JSON data and describes the appropriate format to use in endpoint requests and responses. Combined with OpenAPI, FastAPI leverages these standards to create automatic API documentation so that developers can consume the APIs in a web interface: Swagger UI or Redoc. Having Swagger UI or Redoc available in a developer’s toolbox is essential for performing quick sanity checks on a particular endpoint – it helps to replicate the frontend application experience.
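For illustration, a hand-written JSONSchema fragment describing the token response used later in this post might look like the sketch below; FastAPI generates equivalent schemas automatically from your Pydantic models, so you never need to write this by hand.

```json
{
  "title": "Token",
  "type": "object",
  "properties": {
    "access_token": {"type": "string"},
    "token_type": {"type": "string"}
  },
  "required": ["access_token", "token_type"]
}
```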

Walkthrough: Authorization scopes

The best way to demonstrate FastAPI is to walk through an implementation of commonly used features. I have chosen an advanced feature from the FastAPI documentation, one which we at Lambert Labs have recently adapted and are using in current projects. In particular, I am going to demonstrate how to add authorization ‘scopes’ to an endpoint in FastAPI.

Authorization scopes are specific, granular permissions which are given to users of an application to ensure that they aren’t given privileged access to features of the application that they shouldn’t have access to. They can also be part of role policies which a human or machine can assume temporarily depending on their use case. Either way, scopes help to distinguish levels of access and ensure that the right people have the appropriate permissions at all times. Take Instagram as an example. There are a variety of different permissions that can be given to consumers of the API. For instance, there are scopes for getting and for editing a user’s profile picture, and these scopes are kept separate so that a user can view other users’ profile pictures but can only edit their own.


Let’s walk through the example implementation mentioned in the FastAPI documentation.

from datetime import datetime, timedelta
from typing import List, Optional
from fastapi import Depends, FastAPI, HTTPException, Security, status
from import (OAuth2PasswordBearer,
                              OAuth2PasswordRequestForm,
                              SecurityScopes)
from jose import JWTError, jwt
from passlib.context import CryptContext
from pydantic import BaseModel, ValidationError

# to get a string like this run:
# openssl rand -hex 32
SECRET_KEY = "09d25e094faa6ca2556c818166b7a9563b93f7099f6f0f4caa6cf63b88e8d3e7"
ALGORITHM = "HS256"
ACCESS_TOKEN_EXPIRE_MINUTES = 30

fake_users_db = {
    "johndoe": {
        "username": "johndoe",
        "full_name": "John Doe",
        "email": "",
        "hashed_password": "$2b$12$EixZaYVK1fsbw1ZfbX3OXePaWxn96p36WQoeG6Lruj3vjPGga31lW",
    },
    "alice": {
        "username": "alice",
        "full_name": "Alice Chains",
        "email": "",
        "hashed_password": "$2b$12$gSvqqUPvlXP2tfVFaWK1Be7DlH.PKZbv5H8KnzzVgXXbVxpva.pFm",
    },
}

class Token(BaseModel):
    access_token: str
    token_type: str

class TokenData(BaseModel):
    username: Optional[str] = None
    scopes: List[str] = []

class User(BaseModel):
    username: str
    email: Optional[str] = None
    full_name: Optional[str] = None

class UserInDB(User):
    hashed_password: str

pwd_context = CryptContext(schemes=["bcrypt"], deprecated="auto")

oauth2_scheme = OAuth2PasswordBearer(
    tokenUrl="token",
    scopes={"me": "Read information about the current user.",
            "items": "Read items."},
)

app = FastAPI()

def verify_password(plain_password, hashed_password):
    return pwd_context.verify(plain_password, hashed_password)

def get_password_hash(password):
    return pwd_context.hash(password)

def get_user(db, username: str):
    if username in db:
        user_dict = db[username]
        return UserInDB(**user_dict)

def authenticate_user(fake_db, username: str, password: str):
    user = get_user(fake_db, username)
    if not user:
        return False
    if not verify_password(password, user.hashed_password):
        return False
    return user

def create_access_token(data: dict, expires_delta: Optional[timedelta] = None):
    to_encode = data.copy()
    if expires_delta:
        expire = datetime.utcnow() + expires_delta
    else:
        expire = datetime.utcnow() + timedelta(minutes=15)
    to_encode.update({"exp": expire})
    encoded_jwt = jwt.encode(to_encode, SECRET_KEY, algorithm=ALGORITHM)
    return encoded_jwt

async def get_current_user(security_scopes: SecurityScopes, token: str = Depends(oauth2_scheme)):
    if security_scopes.scopes:
        authenticate_value = f'Bearer scope="{security_scopes.scope_str}"'
    else:
        authenticate_value = "Bearer"

    credentials_exception = HTTPException(
        status_code=status.HTTP_401_UNAUTHORIZED,
        detail="Could not validate credentials",
        headers={"WWW-Authenticate": authenticate_value},
    )
    try:
        payload = jwt.decode(token, SECRET_KEY, algorithms=[ALGORITHM])
        username: str = payload.get("sub")
        if username is None:
            raise credentials_exception
        token_scopes = payload.get("scopes", [])
        token_data = TokenData(scopes=token_scopes, username=username)
    except (JWTError, ValidationError):
        raise credentials_exception

    user = get_user(fake_users_db, username=token_data.username)
    if user is None:
        raise credentials_exception
    for scope in security_scopes.scopes:
        if scope not in token_data.scopes:
            raise HTTPException(
                status_code=status.HTTP_401_UNAUTHORIZED,
                detail="Not enough permissions",
                headers={"WWW-Authenticate": authenticate_value},
            )
    return user"/token", response_model=Token)
async def login_for_access_token(form_data: OAuth2PasswordRequestForm = Depends()):
    user = authenticate_user(fake_users_db, form_data.username, form_data.password)
    if not user:
        raise HTTPException(status_code=400,
                            detail="Incorrect username or password")
    access_token_expires = timedelta(minutes=ACCESS_TOKEN_EXPIRE_MINUTES)
    access_token = create_access_token(
        data={"sub": user.username, "scopes": form_data.scopes},
        expires_delta=access_token_expires,
    )
    return {"access_token": access_token, "token_type": "bearer"}

@app.get("/users/me/items/")
async def read_own_items(current_user: User = Security(get_current_user, scopes=["items"])):
    return [{"item_id": "Foo", "owner": current_user.username}]

The first few lines of the code import the dependencies used in the application. There is also a small example database called fake_users_db, which is just a dictionary implementation of a database used for demonstration purposes.

The class schemas Token, TokenData, User and UserInDB are Python classes that validate data when it is sent to the API via HTTP or when it is about to be returned to the consumer of the API via HTTP. Notice that `UserInDB` has the same attributes as the ‘column’ data in fake_users_db (i.e. username, hashed_password, etc). This class would be used to validate any request data that describes a user, checking that it has the same attributes and data types as those in the user database.

pwd_context is a helper for hashing passwords and verifying password hashes stored in the database. oauth2_scheme is a very simple security dependency: all it does is check that the Authorization header in a request contains a JWT token (explained more below). Everything else defined below is what might be considered the ‘actual’ application workflow.


Whilst the focus of this blog is authorization scopes, the application code contains a lot of other security ‘logic’ too: it would not make sense to present authorization scopes in isolation, without the other security that should go along with them. So alongside the scopes, one can see a typical FastAPI implementation of other security features as well. Refer to the flowchart below for a simplified view of how authentication fits into an API application.

As stated above, the code implements a full authentication workflow: checking a user’s credentials against a database, assigning them a temporary access token that they use to consume endpoints, and decoding and validating that token when a consumer of an endpoint submits it as part of a request. On top of that, it also has a basic implementation of authorization scopes. All things considered, it is rather simple.

Granting access token

Let’s examine a user requesting a temporary access token and how that is handled:"/token", response_model=Token)
async def login_for_access_token(form_data: OAuth2PasswordRequestForm = Depends()):
    user = authenticate_user(fake_users_db, form_data.username, form_data.password)
    if not user:
        raise HTTPException(status_code=400,
                            detail="Incorrect username or password")
    access_token_expires = timedelta(minutes=ACCESS_TOKEN_EXPIRE_MINUTES)
    access_token = create_access_token(
        data={"sub": user.username, "scopes": form_data.scopes},
        expires_delta=access_token_expires,
    )
    return {"access_token": access_token, "token_type": "bearer"}

From the first line, we can see that any request made to this endpoint must have a body which conforms to a special type – OAuth2PasswordRequestForm. This class is provided by FastAPI to save you time writing your own; it has commonly used attributes such as ‘username’, ‘password’ and ‘scope’. After checking in the database that the user exists, an access token is created for the user. The access token consists of data describing the user, their access time limits and the scope permissions assigned to them, encoded into a compact string-type object – the token itself. A popular access token format is JWT, a standard for signing and encoding JSON information used in authentication/authorization. In this example, the users themselves define which permission scopes they want when they request the access token. In production, this would be determined from the database, based on the user’s role in the application (i.e. user, admin, developer, etc).
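To demystify what jwt.encode produces, here is a rough stdlib-only sketch of HS256 JWT signing – a simplification of what the python-jose library does for us (the payload values and secret below are illustrative):

```python
import base64
import hashlib
import hmac
import json

def b64url(raw: bytes) -> bytes:
    # JWTs use unpadded base64url encoding
    return base64.urlsafe_b64encode(raw).rstrip(b"=")

def sign_jwt(payload: dict, secret: str) -> str:
    header = {"alg": "HS256", "typ": "JWT"}
    # The signing input is "<base64url header>.<base64url payload>"
    signing_input = (b64url(json.dumps(header).encode())
                     + b"."
                     + b64url(json.dumps(payload).encode()))
    # The signature is an HMAC-SHA256 over the signing input
    signature = hmac.new(secret.encode(), signing_input, hashlib.sha256).digest()
    return (signing_input + b"." + b64url(signature)).decode()

token = sign_jwt({"sub": "johndoe", "scopes": ["items"]}, "example-secret")
print(token)  # three base64url segments separated by dots
```

The payload is only encoded, not encrypted – anyone can read it – but the signature means the server can detect any tampering with the claims.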

Scopes in action

The interesting stuff happens when the user attempts to consume endpoints with the access token they just received.

@app.get("/users/me/items/")
async def read_own_items(current_user: User = Security(get_current_user, scopes=["items"])):
    return [{"item_id": "Foo", "owner": current_user.username}]

In a fully authenticated example, the end response from this endpoint is the user’s username and a key item_id with value Foo. In the path operation, one can see that there is a Security dependency called get_current_user. This dependency is evaluated against the endpoint-specific scope, items. The idea is that only users who have been granted the items scope can consume this endpoint and get the desired response.

async def get_current_user(security_scopes: SecurityScopes, token: str = Depends(oauth2_scheme)):
    if security_scopes.scopes:
        authenticate_value = f'Bearer scope="{security_scopes.scope_str}"'
    else:
        authenticate_value = "Bearer"

    credentials_exception = HTTPException(
        status_code=status.HTTP_401_UNAUTHORIZED,
        detail="Could not validate credentials",
        headers={"WWW-Authenticate": authenticate_value},
    )
    try:
        payload = jwt.decode(token, SECRET_KEY, algorithms=[ALGORITHM])
        username: str = payload.get("sub")
        if username is None:
            raise credentials_exception
        token_scopes = payload.get("scopes", [])
        token_data = TokenData(scopes=token_scopes, username=username)
    except (JWTError, ValidationError):
        raise credentials_exception

    user = get_user(fake_users_db, username=token_data.username)
    if user is None:
        raise credentials_exception
    for scope in security_scopes.scopes:
        if scope not in token_data.scopes:
            raise HTTPException(
                status_code=status.HTTP_401_UNAUTHORIZED,
                detail="Not enough permissions",
                headers={"WWW-Authenticate": authenticate_value},
            )
    return user

Let’s examine the dependencies. The first dependency is SecurityScopes, which contains the endpoint’s required permission scopes. As a reminder, there’s only one scope for this endpoint: items. Essentially, this function decodes the user’s JWT back into an object containing the user’s name and the scopes that they were granted when they first received the access token. After decoding the JWT, the function does three checks on the decoded data: (1) check the user has a username; (2) check that this username exists in the database and (3) check that the required scopes are at least a subset of the scopes granted to the user (in our case, does the user have the permission scope ‘items’?). If (1) or (2) fails, the endpoint returns an error response with status code 401 (Unauthorized), and if (3) fails, the endpoint returns a 401 with the detail ‘Not enough permissions’.
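Check (3) boils down to a simple subset test. Stripped of the FastAPI machinery, it is just this (the scope values here are illustrative):

```python
# The endpoint's required scopes vs. the scopes encoded in the user's token
required_scopes = ["items"]
token_scopes = ["me", "items"]

# The user passes only if every required scope was granted in the token
missing = [scope for scope in required_scopes if scope not in token_scopes]
has_permission = not missing

print(has_permission)  # True: "items" is among the granted scopes
```

If required_scopes also contained, say, "admin", the check would fail and the endpoint would raise the ‘Not enough permissions’ error.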

If the checks do not throw any errors, the endpoint response data is returned and the request is a success. It is important to note that the scope items was unique to this endpoint and it is totally possible to define unique scopes or combinations of scopes for different endpoints. The only things that would require changing are the dependencies attached to the path operation function and for the user to change the scopes that they ask for when they sign in. As stated previously, in reality one would first verify a user’s role in a database and assign the scopes that they are permitted to have.

Want to know more about how Lambert Labs use FastAPI in production-grade applications? Check out our FastAPI page.

Lambert Labs becomes an AWS Select partner

We are pleased to announce that Lambert Labs is now an AWS Partner Network Select Consulting Partner. We have been using Amazon Web Services for over 3 years and now manage more than 15 AWS accounts for clients across a broad range of industries.

We have met stringent requirements in order to become AWS Select partners. Our AWS expertise includes holding more than 10 AWS Accreditations, made up of AWS Business Professional and AWS Technical Professional qualifications, and we have an ever-increasing number of AWS Associate Certification-holders among our ranks.

Our experience across the AWS offering is broad. It includes, but is not limited to:

  • Deploying web applications using AWS Elastic Beanstalk and AWS EC2
  • Managing databases/data warehouses with AWS RDS, Amazon DocumentDB, AWS DynamoDB and Amazon Redshift
  • Implementing serverless applications with AWS Lambda and AWS API Gateway
  • Using AWS CloudFormation to automate the creation and management of technology stacks on AWS
  • Using Amazon Elastic Container Registry and Elastic Container Service to deploy and manage container-based applications

You can view Lambert Labs’ profile on the AWS Partner Solutions Finder.

Lambert Labs selected to take part in the London & Partners Business Growth Programme

We are thrilled to announce that Lambert Labs has been chosen to take part in the London & Partners Business Growth Programme. During the programme we will have the opportunity to network (virtually!) with large corporates headquartered in London and will have access to business growth advisers that will help Lambert Labs continue its upwards trajectory.


There are five different work streams in the Business Growth Programme:

  • advancing business plans
  • prioritising and engaging audiences
  • raising funds and finance
  • accelerating sales
  • developing people strategies

We are particularly looking forward to focusing our time on the programme in the ‘prioritising and engaging audiences’ and ‘accelerating sales’ work streams.

Writing tests for Android apps using Python and Linux


If you’ve ever developed a mobile app, or any other piece of software, you have very likely encountered a bug at some point. Sometimes you get a bug in your code which stops the app from compiling. This sounds nasty, but in fact, the nastiest bugs are those which you can’t see straight away. You have to summon them by following a very specific series of steps in the app.

So how do you find these hidden bugs? Of course, you can look for them manually yourself (or ask somebody else to look for you). But it is very time-consuming, and will get more time-consuming as the app grows! So, to save time and effort, couldn’t we perhaps get a program to look for these bugs? Couldn’t we automate this process somehow?

Thanks to Appium, we can. Appium is a library designed specifically for testing apps. For example, let’s imagine you’re developing an app where a user can register for an account. Appium allows you to write a test script which will register a dummy user through the UI and check that all of the buttons and menus are working properly.

Appium works on Android, iOS and Windows apps. What’s more, Appium was designed to run with a variety of programming languages, including Python, Java, and Ruby, so you likely don’t have to learn a new language to use it.

This tutorial will show you how to set up automated mobile testing on Linux. We will be using Python to write a test for a free Android chess app. This tutorial was designed to work with Ubuntu 16.04+, but should also work with any similar Linux distributions such as Debian.

How does Appium work?

NOTE: If you just want to start testing, skip to the “Setting up the environment” section below.

Before diving in to some actual testing, let’s first try and understand what Appium does.

Linking Python to the Android UI

Let’s say we want Python code to control the UI of an Android app. What do we need to achieve this?

  1. We need two machines: one machine to run the Python code, and the other to run the Android app. For our purposes, we will run the Python code on our local machine, and the app will run on the Android Emulator provided by Android Studio.
  2. We need some sort of interface to program the app UI. To do this, Android provides us with the UI Automator API. This is a collection of Java classes designed specifically for interacting with and testing the UI of apps running on Android.
  3. Finally, we need some way to convert Python code into commands understood by the UI Automator API.

The last step is where Appium steps in.

The WebDriver Protocol

Appium is really just an extension of Selenium, which was designed for testing web browsers. Selenium uses something called the W3C WebDriver Protocol to communicate with browsers and “drive” browser actions.

How are browser actions “driven”, exactly? First, a server linked to a browser is set up to listen for HTTP requests – this is the WebDriver server. (E.g. Google provides the ChromeDriver server as the WebDriver server for Google Chrome.) At a certain point, the server might receive a request to go to a specific URL:

POST /session/{session id}/url     # Request-Line for going to a URL

The body of the request would contain a URL. The server would understand this as saying: “Get the browser to go to this URL”. The server would then route the request to the browser, the browser would navigate to the URL, and finally the server would respond with a status message.
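As a sketch, the request body the client library would generate for this command is a small JSON document (the session id and URL below are placeholder values):

```python
import json

session_id = "f0e1d2c3"  # placeholder; issued when the WebDriver session is created
request_line = f"POST /session/{session_id}/url"

# W3C WebDriver "navigate to" command: the body carries a single "url" key
body = json.dumps({"url": ""})

print(request_line)
print(body)
```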

Similarly, the request to find a particular element on the page is:

POST /session/{session id}/element

The WebDriver protocol contains a number of such commands. What Appium does is it implements a way of getting these commands to work for mobile. For instance, Appium has a UIAutomator2 Driver which (together with a server running on the device connected via the Android Debug Bridge) can drive the UI Automator API. This means that a request like finding an element on the page will eventually convert to the corresponding UI Automator method in Java.

In addition, Appium adds a whole collection of commands specifically designed for mobile interaction, such as installing an app on the device:

POST /wd/hub/session/{sessionId}/appium/device/install_app

or toggling wi-fi:

POST /wd/hub/session/{sessionId}/appium/device/toggle_wifi

All of these HTTP requests are generated from Python code with Appium’s Python client.

Tracing a test command from Python to Android

We are now in a position to trace a single test command from Python all the way to the app UI (see Fig. 1 for a diagram).

Let’s say we have our app open in our emulator, and we want to use Python to find the “Register” button. This element has an accessibility id ‘Register’ (in Android, the accessibility id is also known as the ‘content-desc’ attribute; more below!).

The Python command for this is:

el = uiautomator2driver.find_element_by_accessibility_id('Register')

The Appium Python client converts this to an HTTP request (the session-id is already generated before the test is run):

POST /wd/hub/session/{session-id}/element
...<rest of headers>...

{
   "strategy": "accessibility id",
   "selector": "Register",
   "context": "",
   "multiple": false
}

The Appium server will receive this request and route it to the UIAutomator2 Driver. This in turn converts the request to JSON and routes it to a UIAutomator2 Server running on the device, which will in turn convert the request to the corresponding UI Automator method.

The Android OS will internally process this command and, if it can find the element, will respond with the unique ID of the element. The ID then gets stored in el in the Python code, where we can perform further actions such as clicking on it (, getting its coordinates (el.location), and more.

Fig. 1 High-level architecture of Appium. A Python command is converted to an HTTP request and sent to the Appium server (1). The server routes the request to the desired driver (2). The driver generates a JSON object and sends it to a server on the device (3). The JSON object is eventually converted to a UI Automator API command (4).

Setting up the environment

NOTE: These steps were designed to work with Ubuntu 16.04+, but should also work with any similar Linux distribution.

Optional: Download the Java Development Kit (JDK)

NOTE: Since version 2.2, Android Studio has come bundled with OpenJDK, so installing JDK separately is not necessary. However, Oracle JDK is regarded as more stable than OpenJDK so we’ve included the installation instructions for it as an alternative.

  1. Download the latest Linux distribution of JDK 8 here:
  2. Extract the contents of the tar file into a folder of your choosing: tar zxvf jdk-8u221-linux-x64.tar.gz

1. Download and install Android Studio

  1. Download the latest version of Android Studio here:
  2. Extract the contents of the tar file into a folder of your choosing: tar zxvf android-studio-ide-191.5791312-linux.tar.gz
  3. Follow the instructions inside the extracted directory to install Android Studio on your machine.

2. Set the JAVA_HOME and ANDROID_HOME environment variables

In order for packages to know which JDK and Android SDK you are using, we need to specify the JAVA_HOME and ANDROID_HOME environment variables:

  1. In your bash profile .bashrc, add the following line: export JAVA_HOME=/path/to/JDK. If you’re using the JDK supplied with Android, the path will be something like JAVA_HOME=<Android Studio directory>/android-studio/jre. If you’re using the Oracle JDK distribution, it will be something like JAVA_HOME=<JDK directory>/jdk1.8.0_221.
  2. Add the following line as well: export ANDROID_HOME=<Android Studio directory>/Sdk where <Android Studio directory> is the path to your Android Studio directory.
  3. Finally, add the following line: export PATH=$PATH:$JAVA_HOME/bin:$ANDROID_HOME/tools:$ANDROID_HOME/tools/bin:$ANDROID_HOME/platform-tools.
  4. Now open up a terminal and type in java -version. You should see a status message showing the correct Java version.
  5. In the same terminal type in adb. You should see a help page for the Android Debug Bridge.

3. Load an Android emulator

  1. Now open up Android Studio and click on Configure > AVD Manager in the bottom right.
  2. Click on Create Virtual Device, and use the configuration to create a Pixel 2 emulator with Android 9.0.
  3. Load up the emulator by clicking on the “Play” button in the Actions column of your device (if you can’t see a device listed, make sure that, from the home screen, you are under Configure > AVD Manager).
  4. After the device has launched, go to your terminal and type in adb devices. You should see one device listed in the output. NOTE: If you see the device listed as ‘unauthorized’, try configuring an emulator with an Android 9.0 (Google APIs) image instead.

4. Install Appium

  1. Before installing Appium, we strongly recommend that you use a Node version manager such as nvm to avoid permission issues.
  2. In a terminal, run the command: npm install -g appium. NOTE: If you are not using a Node version manager, you will need to run sudo npm install -g appium --unsafe-perm=true.
  3. Install appium-doctor using npm install -g appium-doctor and type appium-doctor to ensure Appium has been properly installed.
  4. Finally, type appium into a terminal. If everything has been properly installed, you should see the Appium server load up.

5. Set up the Python environment

In order to write Python tests for mobile, we require four components:

  1. Python 3
  2. Selenium
  3. python-appium-client
  4. pytest

For this tutorial we will use Conda to manage Python environments, but you can use virtualenv as well.

  1. First, download and install Miniconda (for Python 3) if you haven’t done so already. If you have Miniconda or Anaconda already installed, update to the latest conda version by typing conda update conda into a terminal.
  2. Create a new conda environment.
  3. Inside the environment type into a terminal: conda install selenium pytest && pip install appium-python-client

You are now ready to write your first Appium test!

Your first Appium test

Let’s see Appium in action by writing a short test for an Android app. This test will start up an app on the emulator and perform a few actions near the start of the app.

NOTE: The code described below can be accessed here:

Writing the test

For our demo we’ll be using version 3.02 of Chess Free. Download the APK file and, with the emulator running, drag it onto the emulator to install it. (At this point, feel free to open the app and look around to see what you will be testing!)

Next, create an empty directory where you want to store your test code. Create a new directory inside called android_apps and paste the downloaded APK file into this directory.

Next, in the top-level project directory create a Python file called and fill it with the code below:

import os

import pytest
from appium import webdriver

ANDROID_APP_DIR = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'android_apps')

apk_files = [f for f in os.listdir(ANDROID_APP_DIR) if f.endswith('.apk')]
assert len(apk_files) == 1, 'App directory can only contain one app file.'
ANDROID_APP_PATH = os.path.join(ANDROID_APP_DIR, apk_files.pop(0))

@pytest.fixture
def app_driver():
    driver = webdriver.Remote(
        # The Appium server listens on port 4723 by default
        command_executor='http://localhost:4723/wd/hub',
        desired_capabilities={
            'app': ANDROID_APP_PATH,
            # Chess Free V3.02
            'appPackage': '',
            'appActivity': '.ChessFreeActivity',
            'platformName': 'Android',
            'platformVersion': '9',
            'deviceName': 'Android Emulator',
            'automationName': 'UiAutomator2',
        },
    )

    yield driver

    # Teardown: close the session once the test has finished
    driver.quit()

The crucial portion of this code is the pytest fixture. This is a piece of code which will run every time we start a test.

We first define a Remote WebDriver. This is just a WebDriver where we ourselves specify which server to send HTTP requests to. In this case, we are essentially specifying that we want the Appium server to act as our “browser driver”, using the command_executor parameter.

The desired_capabilities dictionary specifies some properties which we want our Appium server to have. For instance, 'automationName': 'UiAutomator2' specifies that we want the server to route our requests to the UiAutomator2 driver as we will be testing on an Android device.

The app key should hold the path to the APK file of the app that we want to test. appPackage refers to the app package name and is used to verify that the app contained in the APK file is the same app that we actually want to test. appActivity is used by Appium to know which “section” of the app should be opened up first. To find the values of appPackage and appActivity, open the app in the emulator at the activity you want to test and, in your terminal, open the ADB shell with adb shell and type in dumpsys window windows | grep -E 'mFocusedApp'.

Now, create a subdirectory called tests and create a file inside called test_app.py (any name matching pytest’s test_*.py convention works) with the following contents:

def test_open_app(app_driver):
    # Poll for up to 10 seconds when looking for an element
    app_driver.implicitly_wait(10)

    # Find/click element by resource id
    app_driver.find_element_by_id('YesButton').click()

    # Find/click element by UiSelector
    ui_selector = 'new UiSelector().textContains("OK")'
    app_driver.find_element_by_android_uiautomator(ui_selector).click()

    # Appium interacts with the Android OS, not just the app
    resource_id = ''  # fill in the resource id found with the UiAutomatorViewer
    app_driver.find_element_by_id(resource_id).click()

    ui_selector = 'new UiSelector().textContains("I agree")'
    app_driver.find_element_by_android_uiautomator(ui_selector).click()

    # Press "Back" key (keycode 4).
    # See the Android KeyEvent documentation for other keycodes
    app_driver.press_keycode(4)


This is what the definition of a pytest test looks like. The argument app_driver tells pytest to use the app_driver fixture from conftest.py. The app_driver variable inside the test definition points to the driver yielded from the fixture definition at yield driver. Note how the name of the test begins with test_; this prefix is used by pytest to identify what to run.

First, the test script sets an implicit wait time of 10 seconds on the driver. This means that, whenever the test looks to see if a condition is satisfied (e.g. if an element is present on the screen), the test will poll for the condition for a maximum of 10 seconds before raising an error. A more in-depth discussion of implicit vs. explicit wait times can be found here.
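Under the hood an implicit wait is essentially a poll-until-timeout loop. Here is a minimal sketch of the idea in plain Python (our own illustration, not Appium's actual implementation):

```python
import time

def wait_for(condition, timeout=10, poll_interval=0.5):
    """Poll `condition` until it returns a truthy value or `timeout` seconds pass."""
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() > deadline:
            raise TimeoutError(f'Condition not met within {timeout} seconds')
        time.sleep(poll_interval)
```

An explicit wait works the same way, except that the condition and timeout are chosen per call rather than set once on the driver.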

The rest of the test script contains a series of commands which perform different actions in the app. For example, app_driver.find_element_by_id('YesButton').click() finds an element with ID YesButton and clicks it. A complete list of such commands can be found in the Appium Python client documentation.

The UiAutomatorViewer

Say we want to click a particular button on the app. How is Appium supposed to know which button to select? In other words, how do we obtain the ID of the button?

The simplest way to obtain the element ID is by using the UiAutomatorViewer program included with the Android SDK.

First, open the app in the emulator and make sure that the element you want to inspect is visible on the emulator screen. Next, open a terminal and enter uiautomatorviewer. You should see a new window open: this is the UiAutomatorViewer program.

Along the top, click on the button labelled “Device Screenshot with Compressed Hierarchy”. You will see a screenshot of the emulator screen. Now, click on the button you want to inspect on this screen and look in the ‘Node Detail’ table in the bottom right. Here we have all of the element information that the UiAutomator has access to.

Fig. 2 How to use the UiAutomatorViewer to get the text, resource-id, and content-desc of an element.

The three most useful node details are resource-id, content-desc and text. Fig. 2 shows you where to find these node details. The table below shows which Appium methods to use to look up an element based on each of these node details:

Node detail     Appium method
resource-id     app_driver.find_element_by_id(<resource-id>)
content-desc    app_driver.find_element_by_accessibility_id(<content-desc>)
text            app_driver.find_element_by_android_uiautomator('new UiSelector().textContains(<text>)')
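To keep tests readable, the UiSelector expressions can be built by a small helper. The function below is our own convenience sketch, not part of the Appium client; the UiSelector method names it emits (textContains, resourceId, descriptionContains) are part of Android's UiSelector API:

```python
# Hypothetical helper (not part of Appium) that builds the UiSelector expression
# to pass to find_element_by_android_uiautomator from a node detail and its value.
def ui_selector_for(node_detail, value):
    templates = {
        'text': 'new UiSelector().textContains("{}")',
        'resource-id': 'new UiSelector().resourceId("{}")',
        'content-desc': 'new UiSelector().descriptionContains("{}")',
    }
    return templates[node_detail].format(value)

print(ui_selector_for('text', 'OK'))  # new UiSelector().textContains("OK")
```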

Running the test

First, make sure that Appium and your emulator are running. Next, open up a terminal in the top-level project directory, enter your virtual environment, and type in pytest. If everything has been set up correctly, you should see the Chess Free app load in the emulator, buttons being selected and, if you’re lucky, the test passing.

Congratulations! You now have all the tools required to write automated tests for Android apps.

Learn more about how we use Python to intensively test our applications before launch on our Python page.

George Lambert features on Startup Secrets Podcast


Our founder George Lambert recently featured on the Startup Secrets Podcast, where he spoke to host Seb Francis, giving insights into our company’s journey and tips for aspiring entrepreneurs. In the podcast George discusses the trials and tribulations of starting and growing a business from scratch. You can listen to the episode here.

The Startup Secrets Podcast is a podcast for entrepreneurs, set to inspire, educate and motivate new businesses to success. 

Startup Secrets works in association with Accounts Lab, a cloud accountancy firm who specialise in startups and growth businesses across all sectors. 

With episodes released weekly, you can subscribe on iTunes or follow on Spotify to never miss an episode.

Pair Programming at Lambert Labs

Pair Programming at Lambert Labs

Gone are the days of ‘traditional’ programming practices, where teams of software developers might have restricted their communication to once-weekly meetings and worked towards yearly release dates; we are now accustomed to a far more iterative process: daily standups, TDD and continuous integration and/or deployment. This iterative process is of course part of the wider set of agile programming practices.

At Lambert Labs we take our agile programming practices seriously. As part of this we regularly use pair programming as a way to ensure that we are working as efficiently and effectively as possible.

Pair Programming Workstation Setup

The workstation setup for pair programming is important. If two developers are around one workstation for significant periods of time, they might get uncomfortable. This is why we use ‘dual workstations’ for our pair programming. Our dual workstations are made up of two desks and four monitors, but only one computer – the two monitors on the desk with the computer are duplicated to the two monitors on the desk without the computer. This means that both developers can sit in comfort while pair programming!

Setting up our workstations requires HDMI splitters (we use something like this). Our current desk of choice is the IKEA Skarsta standing desk.

The Good

Code exposure: pair programming helps developers of all levels of seniority get exposure to different parts of the codebase. This is really important because it helps developers understand the overall system that they are working on and also helps provide continuity when a developer with ownership of a section of the codebase is on holiday or otherwise engaged.

Code quality: if two developers are looking at the same piece of code during development then this is similar to having an extremely thorough review process (think how often a reviewer spends as long reviewing code as the developer spent writing it – very rarely!).

Focus: yes, pair programmers might have a chat and a giggle, but what they won’t do is things like check social media, check their emails and surf the net. In this case, working together brings more productivity.

Morale: working together is fun, and it boosts morale! The life of a developer can at times be unnecessarily quiet. Pair programming discourages deathly silences and encourages collaboration.

Fewer bugs: working as a pair means that fewer bugs creep into the code base. This reduces time spent debugging at a later stage.

The Bad

Lack of ownership: developers don’t always feel that they have ownership of a section of the codebase, and feel as though they are getting pulled in multiple directions.

The Ugly

Lack of synergy: if two developers don’t ‘click’ when they are working together then the relationship might not be productive – in some cases this is just human nature!

At Lambert Labs we find that the benefits of pair programming hugely outweigh the drawbacks. It improves our productivity and standards, and we will stick with it moving forwards 🙂

How to integrate Jira with GitHub

At Lambert Labs we work with a range of clients across different sectors. What can be awkward is that they often use completely different DevOps and project management tools. On the DevOps side, some clients use AWS while others prefer Google Cloud Platform or even PaaS providers such as Heroku. Some clients use CircleCI while others prefer to use Travis/Jenkins. From the project management perspective Jira and Confluence are very popular, but a selection of our smaller projects still make use of Trello.

As a software development agency it is important for us to be expert users of as many of these project management tools as possible because it enables us to work on a broader range of projects. We recently started working on a project with a client that uses GitHub for code hosting and Jira for issue tracking. Integrating Jira with GitHub makes it easier for our project managers and software engineers to keep track of the GitHub branches and pull requests that correspond to tickets in Jira. The benefits of integration are shown in the example Jira ticket below, where there are links to a GitHub branch and pull request in the development section (these appear automatically as part of the integration).

A Jira issue demonstrating a GitHub integration

Prerequisites

To set up the integration you will need:
  • A Jira account with administration rights
  • A personal GitHub account, or a GitHub organisation account with administration rights

Setting up GitHub

First, navigate to your GitHub settings page. If you navigate to your personal settings, this will eventually give Jira access to your personal repositories; if you navigate to your organisation settings, it will give Jira access to your organisation’s repositories. Go to ‘Developer Settings -> OAuth Apps -> New OAuth App’. You will be presented with the following page:

Registering a GitHub OAuth application

Fill in the following details and click on ‘Register Application’:

After clicking on ‘Register Application’ you will be taken through to a confirmation page giving you a Client ID and a Client Secret. Make a note of these – you will need them in a moment.

Setting up Jira

Now navigate to your organisation’s dashboard on Jira, go to ‘Settings -> Applications -> DVCS accounts’ and click ‘Link GitHub Account’. You will be prompted with the following popup:

Adding a GitHub account to Jira

Choose either GitHub or GitHub Enterprise as your host (depending on what is appropriate) and put either your GitHub username or GitHub organisation name as the ‘Team or User Account’ (again, depending on what is appropriate). Client ID and Client Secret should be self-explanatory! After you click ‘Add’ you will be presented with an authorisation page (you should authorise the app, and may be required to enter your password as part of this process). You will then be redirected to the DVCS accounts page in Jira, where you should now see your linked GitHub account.

The last step is to understand how to make sure your GitHub branches and pull requests appear in the corresponding Jira tickets. To ensure the link takes place, you must name your Git(Hub) branches as ‘<JiraProjectKey>-<Jira-Issue-Number>-normal-git-branch-name’. You can find your Jira Project Key (on a per project basis) by going to a Jira project and navigating to settings. You will see a details page similar to the following:

Jira project details page

So, for the above Jira project, an example branch name would be ‘LMS-5-my-new-feature’. As soon as you follow this naming convention for your branches you will see branches and pull requests appearing in your Jira tickets. Nice!
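The naming convention is easy to enforce with a small script, for example in a CI check or a Git hook. The snippet below is our own sketch (the pattern assumes Jira’s usual uppercase project keys); it extracts the Jira issue key from a branch name, or returns None if the branch doesn’t follow the convention:

```python
import re

# '<JiraProjectKey>-<IssueNumber>-description', e.g. 'LMS-5-my-new-feature'
BRANCH_PATTERN = re.compile(r'^([A-Z][A-Z0-9]+)-(\d+)-[\w-]+$')

def jira_issue_for_branch(branch_name):
    """Return the Jira issue key for a branch, or None if the name doesn't match."""
    match = BRANCH_PATTERN.match(branch_name)
    return f'{match.group(1)}-{match.group(2)}' if match else None

print(jira_issue_for_branch('LMS-5-my-new-feature'))  # LMS-5
print(jira_issue_for_branch('my-new-feature'))        # None
```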