Running an HKIS Server#

In this section we’ll see how to run an HKIS instance in production.

To just run one locally, see the Contributing section.

I use Ansible to push hackinscience to production: you can take a look at my Ansible playbooks, but all the details can be found below.

Running the website#

Running the website needs Python 3.8+.

It also needs a few non-Python dependencies:

  • postgresql (or SQLite or even MySQL)

  • redis (used by Celery to dispatch jobs)

  • nginx (for TLS termination and to dispatch HTTP requests to Daphne or gunicorn)

Those can be installed on Debian or a Debian-based distribution using:

apt install redis postgresql nginx
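
For the Python side, a common setup is a dedicated venv holding the website's dependencies. A minimal sketch, assuming a clone of the website repository providing a requirements.txt (the paths are placeholders, adapt them to your layout):

cd /path/to/hkis-website/clone/
python3 -m venv /path/to/a/venv
/path/to/a/venv/bin/pip install -r requirements.txt
/path/to/a/venv/bin/python manage.py migrate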

Simple mode (asgi only)#

Hackinscience uses websockets, so it runs using Daphne.

In production you can start it like this:

daphne hackinscience_org.asgi:application

You can have Daphne listen on a Unix socket using -u /path/to/socket: naming sockets is more explicit than assigning port numbers.
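
For example, a minimal invocation using a Unix socket (the socket path is a placeholder, match it with the one used in your nginx configuration):

daphne -u /path/to/daphne.sock hackinscience_org.asgi:application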

From an nginx point of view, it looks like:

location / {
    proxy_pass http://unix:/path/to/daphne.sock;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Protocol $scheme;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
}

Dual stack (wsgi+asgi)#

You can optionally run hackinscience in a dual-stack mode, splitting traffic between a Daphne server for asynchronous (websocket) requests and a gunicorn server for synchronous ones.

In this case:

  • All URLs starting with /ws/ are to be routed to a Daphne instance.

  • All other URLs are to be routed to a gunicorn or uwsgi instance.

In the case of gunicorn it can be started like this:

gunicorn hackinscience_org.wsgi
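
If, as in the nginx configuration below, nginx reaches gunicorn through a Unix socket, you can bind it explicitly (the socket path is a placeholder):

gunicorn hackinscience_org.wsgi --bind unix:/path/to/gunicorn.sock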

From an nginx point of view it looks like:

location /ws {
    proxy_pass http://unix:/path/to/daphne.sock;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
}

location / {
    proxy_pass http://unix:/path/to/gunicorn.sock;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Protocol $scheme;
}

Running the correction server#

Correction bots use Celery and Redis to communicate with the website; the idea is that you can have as many machines as needed to run corrections. Starting with two is OK.

Correction bots use firejail to run student code in a safer way, so you have to install it:

apt install firejail

Correction machines don’t need much CPU, but don’t give them less than 2 GB of RAM, so we can spot problems (and report them cleanly to the user) before the OOM killer spots them (and just kills the script).

To run a correction bot in production you just need:

DJANGO_SETTINGS_MODULE=hackinscience_org.settings celery -A hkis.tasks worker

A good way to do it is to use systemd, with a unit like this:

[Unit]
Description=Hkis Celery Service
After=network.target

[Service]
User=hkis-celery
Group=hkis-celery
WorkingDirectory=/path/to/hkis-website/clone/
Environment="DJANGO_SETTINGS_MODULE=hackinscience_org.settings"
ExecStart=/path/to/a/venv/bin/celery -A hkis.tasks worker
Restart=always

[Install]
WantedBy=multi-user.target
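
Assuming you save this unit as /etc/systemd/system/hkis-celery.service (the file name is up to you), enable and start it with:

systemctl daemon-reload
systemctl enable --now hkis-celery.service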

You can choose which Python interpreter is used to run corrections by setting the BOT_PYTHON_INTERPRETER variable in the Django settings. This lets you use a venv, in which you can install dependencies like correction-helper.
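
For example, in your Django settings (a sketch: the venv path is a placeholder and assumes you created a dedicated venv for the correction bots):

# Interpreter used by the correction bots to run student code,
# pointing to a dedicated venv where correction-helper is installed.
BOT_PYTHON_INTERPRETER = "/path/to/bots-venv/bin/python"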