24. May 2021

Deploy Python Bottle framework application to Kubernetes cluster

Recently I needed to write a simple application that would manipulate an image and serve it back. All of this would have to run on my Raspberry Pi Kubernetes cluster, and Django seemed like overkill for that. I could have done it in PHP spaghetti code, but I have moved most of my projects to Python, so I looked for a simple alternative. I found that the Bottle framework was exactly the right fit, though I was sad to see that the examples mix HTML and computation together in the same functions. However, just like PHP, Bottle has a simple templating language I could use, so I decided to continue with it for my application.

I wrote a simple template and a handler function and got it all running with Bottle's built-in development server using a single Python call:

    run(host='localhost', port=8080, debug=True)
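
The template itself is just Bottle's SimpleTemplate syntax. The actual content of my template is not important for this article, but a minimal views/index.tpl placeholder (Bottle looks templates up in the ./views directory by default) could be as simple as:

<html>
  <body>
    <h1>Image overlay</h1>
  </body>
</html>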

Of course this is great for development, but I wanted to Dockerize the application and serve it with a proper WSGI server. I had used gunicorn in my Django projects, so I already had a decent candidate. A bit more digging around the internet showed that I need to expose a default Bottle application object to gunicorn from my script. So at the module level of my script I added bottle.default_app():

import bottle
from bottle import route, run, template

@route('/')
def index():
    # Render views/index.tpl with Bottle's SimpleTemplate engine
    return template('index')

# WSGI application object that gunicorn will load
overlay = bottle.default_app()

if __name__ == '__main__':
    # Fall back to Bottle's built-in server for local development
    run(host='localhost', port=8080, debug=True)

This enabled me to run the Bottle Python application with gunicorn on port 8800:

gunicorn --bind :8800 --log-level debug path.to.script.scriptname:overlay
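
Here path.to.script.scriptname is just a placeholder for the dotted module path of your own script. If, for example, the script above lived in app.py at the project root (a hypothetical layout), the call would simply be:

gunicorn --bind :8800 --log-level debug app:overlay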

Now we can start building a Docker container. The Dockerfile for a Bottle application is quite simple and looks something like this:

FROM python:3.7.7-slim-buster

# Create code directory and move into it
RUN mkdir /code

WORKDIR /code


# Copy environment requirements to the container and install them
# COPY requirements.txt ./
# RUN pip install --no-cache-dir -r requirements.txt && rm /root/.cache/pip -rf
# For this example let's just install bottle directly
RUN pip install --no-cache-dir bottle && rm /root/.cache/pip -rf

# Copy current directory into the container WORKDIR (/code)
COPY . .


# Install gunicorn as we don't want the Bottle development server in production
RUN pip install gunicorn==20.0.4 && rm /root/.cache/pip -rf

# Expose the port gunicorn listens on
EXPOSE 8800

# Run gunicorn, pointing it at the WSGI application object
CMD ["gunicorn", "--bind", ":8800", "path.to.script.scriptname:overlay"]
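
If you prefer the commented-out requirements.txt approach, a minimal file for this setup could contain something like the following (the pinned versions are just an example from around the time of writing):

bottle==0.12.19
gunicorn==20.0.4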

Now you simply build your container image and run it, publishing the port:

docker build --pull -t overlay .
docker run -p 8800:8800 overlay
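
With the container running, a quick request against the published port should return the rendered index template:

curl http://localhost:8800/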

Deploy the Bottle web framework Docker container using Helm

Now the Bottle Python web framework application is in a Docker container and we can use a Helm chart to deploy it to our Kubernetes cluster. First, let's create a helmchart directory and populate it.
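
If you do not have a chart skeleton yet, Helm can scaffold one for you; besides Chart.yaml and a default values.yaml, this also generates the _helpers.tpl that defines the helmchart.labels, helmchart.selectorLabels and helmchart.serviceAccountName helpers referenced below:

helm create helmchart

We need a Deployment for Kubernetes, which means helmchart/templates/deployment.yaml would have the following content: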

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Chart.Name }}
  labels:
    {{- include "helmchart.labels" . | nindent 4 }}
spec:
{{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
{{- end }}
  selector:
    matchLabels:
      {{- include "helmchart.selectorLabels" . | nindent 6 }}
  template:
    metadata:
    {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
    {{- end }}
      labels:
        {{- include "helmchart.selectorLabels" . | nindent 8 }}
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ include "helmchart.serviceAccountName" . }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
        - name: {{ .Chart.Name }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: gunicorn
              containerPort: 8800
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: gunicorn
          readinessProbe:
            httpGet:
              path: /
              port: gunicorn
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
          env:
            - name: DEBUG
              value: 'False'
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}

The important part here is that the port named gunicorn is 8800, which matches the port inside the container that gunicorn, running the Bottle application, is listening on. Next we need a Service, which will let us expose the application to the outside world, so helmchart/templates/service.yaml would have the following content:

apiVersion: v1
kind: Service
metadata:
  name: {{ $.Release.Name }}-service
  namespace: {{ $.Release.Namespace }}
  labels:
    {{- include "helmchart.labels" . | nindent 4 }}
spec:
  ports:
    - port: {{ .Values.service.port }}
      targetPort: 8800
      protocol: TCP
      name: gunicorn
  selector:
    {{- include "helmchart.selectorLabels" . | nindent 4 }}
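
With the Deployment and Service templates in place, installing the chart is already enough to get the application running inside the cluster. A quick sketch, assuming a release name of overlay and the default service.port of 80 from the generated values.yaml:

helm install overlay ./helmchart
kubectl get pods,svc
# assumes service.port was left at the generated default of 80
kubectl port-forward svc/overlay-service 8080:80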

Now you should basically be able to access your application. I work with Traefik v2 and cert-manager, so that my application can also be served over HTTPS. So I added helmchart/templates/ingressroute.yaml with the following content:

{{- if .Values.ingressroute.enabled -}}
{{- if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
{{- range .Values.ingressroute.hosts }}
kind: IngressRoute
apiVersion: traefik.containo.us/v1alpha1
metadata:
  name: {{ $.Release.Name }}route
  namespace: {{ $.Release.Namespace }}
spec:
  entryPoints:
    - {{ $.Values.ingressroute.entryPoint }}
  routes:
  - match: Host(`{{ .host }}`)
    kind: Rule
    services:
    - name: {{ $.Release.Name }}-service
      kind: Service
      port: {{ $.Values.service.port }}
  {{- if $.Values.ingressroute.tls.enabled }}
  tls:
    secretName: {{ $.Values.ingressroute.tls.secretName }}
  {{- end }}
{{- end }}
{{- end }}
{{- end }}

This creates a Traefik IngressRoute object that listens on the configured entryPoint, matches requests against the host, and forwards them to the Service on its port. Because I enabled TLS (basically HTTPS), I also need a helmchart/templates/certificate.yaml file, which basically just contains a normal cert-manager Certificate object:

{{- if .Values.ingressroute.tls.enabled -}}
{{- if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: {{ .Release.Name }}-cert
  namespace: {{ .Release.Namespace }}
spec:
  {{- with (index .Values.ingressroute.hosts 0) }}
  commonName: {{ .host }}
  {{- end }}
  secretName: {{ .Values.ingressroute.tls.secretName }}
  dnsNames:
    {{- range .Values.ingressroute.hosts }}
    - {{ .host }}
    {{- end }}
  issuerRef:
    name: {{ .Values.ingressroute.tls.issuerRef }}
    kind: ClusterIssuer
{{- end }}
{{- end }}
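
Once the chart is deployed with TLS enabled, you can follow cert-manager's progress with kubectl (the release name overlay is again my own assumption):

kubectl get certificate overlay-cert
kubectl describe certificate overlay-cert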

I assume I do not need to paste the full content of the values.yaml, as most of the things referenced here should already be present by default in the generated values.yaml, but if you are puzzled by some variable, just write to me in the comments. There are also a few articles I have already written about this last bit of Traefik v2 and cert-manager integration.
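
The one section the generated values.yaml will not contain is the ingressroute block referenced by the templates above. A minimal sketch of it, where the host, entry point, secret name and issuer name are placeholders for your own setup, could look like this:

ingressroute:
  enabled: true
  entryPoint: websecure
  hosts:
    - host: overlay.example.com
  tls:
    enabled: true
    secretName: overlay-example-com-tls
    issuerRef: letsencrypt-prod

With those values in place, a helm upgrade --install of the chart renders the IngressRoute and Certificate objects as well.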
