6 changes: 5 additions & 1 deletion Dockerfile
@@ -19,14 +19,18 @@ RUN curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | b
WORKDIR /tcd
COPY . /tcd/

ARG TABBYCAT_USE_REDIS_CHANNELS_CACHE=
ENV TABBYCAT_USE_REDIS_CHANNELS_CACHE=${TABBYCAT_USE_REDIS_CHANNELS_CACHE}

RUN nvm install && nvm use

# Set git to use HTTPS (SSH is often blocked by firewalls)
RUN git config --global url."https://".insteadOf git://

# Install our node/python requirements
RUN pip install pipenv
RUN pipenv install --system --deploy
RUN pipenv install --system --deploy \
&& ./bin/pipenv-install-redis-channels-cache.sh --deploy
RUN npm ci

# Compile all the static files
11 changes: 8 additions & 3 deletions Pipfile
@@ -17,15 +17,13 @@ django-split-settings = "*"
django-statici18n = "*"
django-summernote = "*"
dj-cmd = "*"
django-redis = "*"
django-cors-headers = "*"
psycopg2-binary = "*"
asgiref = "*"
channels = "*"
channels-redis = "*"
channels-postgres = "*"
ipython = "==7.*"
munkres = "*"
redis = "*"
qrcode = "*"
html2text = "*"
defusedxml = "*"
@@ -42,6 +40,13 @@ drf-link-header-pagination = "*"
networkx = "*" # Avoid its dependencies (SciPy)
django-push-notifications = {extras = ["wp"], version = "*"}

# Installed only when TABBYCAT_USE_REDIS_CHANNELS_CACHE is set at build time; see
# bin/pipenv-install-redis-channels-cache.sh and Dockerfile / render-compile / post_compile.
[redis-channels-cache]
django-redis = "*"
channels-redis = "*"
redis = "*"

[dev-packages]
pre-commit = "*"
selenium = "==3.141.*"
2,760 changes: 1,533 additions & 1,227 deletions Pipfile.lock

Large diffs are not rendered by default.

4 changes: 1 addition & 3 deletions app.json
@@ -7,9 +7,7 @@
"logo": "https://raw.githubusercontent.com/TabbycatDebate/tabbycat/develop/tabbycat/static/logo-48x48.png",
"addons": [
"papertrail",
"rediscloud:30",
"heroku-postgresql:essential-0",
"heroku-redis:mini"
"heroku-postgresql:essential-0"
],
"env": {
"DJANGO_SECRET_KEY": {
1 change: 1 addition & 0 deletions bin/docker-run-honcho.sh
@@ -5,6 +5,7 @@ cd tabbycat

# Migrate (can't do it during build; no db connection)
python ./manage.py migrate --no-input
python ./manage.py createcachetable --noinput

# Needed to ensure daphne works properly
rm -f /tmp/asgi.socket /tmp/asgi.socket.lock
13 changes: 13 additions & 0 deletions bin/pipenv-install-redis-channels-cache.sh
@@ -0,0 +1,13 @@
#!/usr/bin/env bash
# Install Pipfile category [redis-channels-cache] when TABBYCAT_USE_REDIS_CHANNELS_CACHE
# is set (same truthiness as tabbycat.settings.heroku._truthy_env).
set -eo pipefail

val="${TABBYCAT_USE_REDIS_CHANNELS_CACHE:-}"
lc=$(printf '%s' "$val" | tr '[:upper:]' '[:lower:]')
case "$lc" in
1|true|yes)
command -v pipenv >/dev/null 2>&1 || python -m pip install pipenv
pipenv install --system --categories=redis-channels-cache "$@"
;;
esac
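
The truthiness rule the script applies (lowercase the value, then accept ``1``, ``true``, or ``yes``) can be sketched in Python. The function name below is illustrative, and the claim that it matches ``tabbycat.settings.heroku._truthy_env`` is an assumption based only on the script's own comment:

```python
def use_redis_channels_cache(environ):
    # Illustrative mirror of the shell `case` check above; assumed to follow
    # the same truthiness as tabbycat.settings.heroku._truthy_env.
    val = environ.get("TABBYCAT_USE_REDIS_CHANNELS_CACHE", "")
    return val.lower() in ("1", "true", "yes")


print(use_redis_channels_cache({"TABBYCAT_USE_REDIS_CHANNELS_CACHE": "TRUE"}))  # True
print(use_redis_channels_cache({}))  # False
```

Note that an unset or empty variable is falsy, which is what lets the Dockerfile default (``ARG TABBYCAT_USE_REDIS_CHANNELS_CACHE=``) skip the Redis packages entirely.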
7 changes: 7 additions & 0 deletions bin/post_compile
@@ -5,12 +5,19 @@

set -eo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
REPO_ROOT="$(cd "${SCRIPT_DIR}/.." && pwd)"
"${REPO_ROOT}/bin/pipenv-install-redis-channels-cache.sh" --deploy

echo "-----> I'm post-compile hook"
cd ./tabbycat/

echo "-----> Running database migration"
python manage.py migrate --noinput

echo "-----> Ensuring database cache table exists"
python manage.py createcachetable --noinput

echo "-----> Running dynamic preferences checks"
python manage.py checkpreferences

4 changes: 4 additions & 0 deletions bin/render-compile.sh
@@ -5,13 +5,17 @@ set -o errexit
echo "-----> Install dependencies"
python -m pip install pipenv
pipenv install --system
./bin/pipenv-install-redis-channels-cache.sh

echo "-----> I'm post-compile hook"
cd ./tabbycat/

echo "-----> Running database migration"
python manage.py migrate --noinput

echo "-----> Ensuring database cache table exists"
python manage.py createcachetable --noinput

echo "-----> Running dynamic preferences checks"
python manage.py checkpreferences

25 changes: 9 additions & 16 deletions deploy_heroku.py
@@ -158,7 +158,7 @@ def get_git_push_spec():
exit(1)

# Create the app with addons
addons = ["papertrail", "heroku-postgresql:%s" % args.pg_plan, "rediscloud:30", "heroku-redis:mini"]
addons = ["papertrail", "heroku-postgresql:%s" % args.pg_plan]
command = ["heroku", "apps:create", "--stack", "heroku-22"]

if addons:
@@ -204,22 +204,15 @@ def get_git_push_spec():
else:
remote_name = heroku_url

# Wait for Redis provisioning, which can take a significant amount of time
redis_provisioned = False
redis_status_command = make_heroku_command(["redis:info"])
print_yellow("Waiting for Heroku Redis to provision (may take up to 5 minutes)...")

while not redis_provisioned:
time.sleep(30)
redis_output = subprocess.check_output(redis_status_command).decode().split("\n")
for status in redis_output:
match = re.match(r"^Status:\s+available", status)
if match:
redis_provisioned = True
break

# Wait for Postgres to be attachable before the first deploy push
print_yellow("Waiting for Heroku Postgres to be ready...")
try:
run_heroku_command(["pg:wait"])
except subprocess.CalledProcessError:
print_yellow("pg:wait was not successful; pausing briefly before deploy push...")
time.sleep(15)

print("Heroku Redis is available, starting deployment")
print("Continuing with deployment")

# Push source code to Heroku
push_spec = get_git_push_spec()
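
The waiting strategy this diff switches to (a blocking ``pg:wait`` with a graceful fallback, rather than a hand-rolled polling loop over ``redis:info`` output) can be sketched as follows. Here ``run_command`` stands in for the script's ``run_heroku_command`` helper, and the 15-second fallback mirrors the new code above:

```python
import subprocess
import time


def wait_for_postgres(run_command, fallback_delay=15):
    # Prefer the platform's own blocking wait; if it exits nonzero, pause
    # briefly and continue instead of aborting the whole deployment.
    try:
        run_command(["pg:wait"])
        return True
    except subprocess.CalledProcessError:
        time.sleep(fallback_delay)
        return False
```

Unlike the removed Redis loop, this never spins for minutes parsing status output; the platform reports readiness directly.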
7 changes: 1 addition & 6 deletions docker-compose.prod.yml
@@ -1,7 +1,4 @@
# Docker-compose is a way to run multiple containers at once and connect them
# This sets up and runs postgres, redis, honcho and the django worker as services
# Reference: https://docs.docker.com/compose/compose-file/
# Note that this file extends and overrides what is specified in docker-compose.yml
# This file extends and overrides what is specified in docker-compose.yml

# Initial setup with
# $ docker compose -f docker-compose.yml -f docker-compose.prod.yml build
@@ -15,13 +12,11 @@ services:
web:
environment:
- DEBUG=0
- DOCKER_REDIS=1
- IN_DOCKER=1
- USING_NGINX=1

worker:
environment:
- DEBUG=0
- DOCKER_REDIS=1
- IN_DOCKER=1
- USING_NGINX=1
20 changes: 8 additions & 12 deletions docker-compose.yml
@@ -18,13 +18,11 @@ services:
volumes:
- pgdata:/var/lib/postgresql

redis:
image: redis:6
volumes:
- redis_data:/data

web:
build: .
build:
context: .
args:
TABBYCAT_USE_REDIS_CHANNELS_CACHE: "${TABBYCAT_USE_REDIS_CHANNELS_CACHE:-}"
# Hack to wait until Postgres is up before running things
command:
[
@@ -36,21 +34,22 @@
]
depends_on:
- db
- redis
expose:
- "8000"
environment:
- DEBUG=1
- IN_DOCKER=1
- DISABLE_SENTRY=1
- DOCKER_REDIS=1
- USING_NGINX=1
ports:
- "8000:8000"
working_dir: /tcd

worker:
build: .
build:
context: .
args:
TABBYCAT_USE_REDIS_CHANNELS_CACHE: "${TABBYCAT_USE_REDIS_CHANNELS_CACHE:-}"
# Hack to wait until migration is done before running things
command:
[
@@ -62,16 +61,13 @@
]
depends_on:
- db
- redis
environment:
- DEBUG=1
- IN_DOCKER=1
- DISABLE_SENTRY=1
- DOCKER_REDIS=1
- USING_NGINX=1
working_dir: /tcd

volumes:
pgdata:
node_modules:
redis_data:
2 changes: 1 addition & 1 deletion docs/features/adjudicator-allocation.rst
@@ -41,7 +41,7 @@ To begin this process, click the *Allocate* button in the top-left. If you have

Once you click *Auto-Allocate Adjudicators* the modal should disappear and your panels should appear. At large tournaments, and in the later rounds, it is not unheard of for this process to take a minute or longer.

.. note:: If you are running a local installation and the allocator modal appears to hang on "Loading...", ensure that you have configured a :ref:`local Redis instance and are running a background worker <install-local>`.
.. note:: If you are running a local installation and the allocator modal appears to hang on "Loading...", ensure PostgreSQL is running, you have run migrations and ``createcachetable``, and a background worker is running (see :ref:`install-local`).

.. note:: You can re-run the automatic allocation process on top of an existing allocation. Thus it is worth tweaking your priorities or allocation settings if the allocation does not seem optimal to you. Also note that the allocation process is not deterministic — if you rerun it the panels will be different.

25 changes: 9 additions & 16 deletions docs/guide/scaling.rst
@@ -103,29 +103,23 @@ One way to help mitigate this is to try and load those pages first yourself to e

You can also increase the 1-minute timeout for the pages that are popular during the in-rounds, by going to the **Settings** section of your Heroku dashboard, clicking *Reveal Config Vars*, and creating a new key/value of ``PUBLIC_FAST_CACHE_TIMEOUT`` and ``180`` (to set the timeout to be 3 minutes i.e. 180 seconds). This should only be necessary as a last resort. Turning off public pages is also an option.

If you ever need to clear the cache (say to force the site to quickly show an update to the speaker tab) you can install `Heroku's Command Line Interface <https://devcenter.heroku.com/articles/heroku-cli>`_ and run the following command, replacing ``YOUR_APP`` with your site's name in the Heroku dashboard::

$ echo "FLUSHALL\r\n QUIT" | heroku redis:cli -a YOUR_APP --confirm YOUR_APP
If you ever need to clear the cache (say to force the site to quickly show an update to the speaker tab), the default setup stores cache rows in PostgreSQL. You can delete them from the ``tabbycat_cache`` table (for example using ``heroku pg:psql`` and ``DELETE FROM tabbycat_cache;``), or temporarily lower cache timeouts via config vars such as ``PUBLIC_FAST_CACHE_TIMEOUT``. If you use the optional Redis cache instead, you can flush it with ``heroku redis:cli``.

Postgres Limits
===============

The free tier of the Postgres database services has a limit of 20 'connections'. It is rare that a Tabbycat site will exceed this limit; most Australs-sized tournaments will see a maximum of 12 connections at any point in time.
The free tier of the Postgres database services has a limit of 20 'connections'. It is rare that a Tabbycat site will exceed this limit; most Australs-sized tournaments will see a maximum of 12 connections at any point in time. Using PostgreSQL for channels and caching adds some concurrent connections (via ``psycopg`` pools); if you approach the limit, upgrade the database plan or switch channels/cache back to Redis with ``TABBYCAT_USE_REDIS_CHANNELS_CACHE=1``.

.. image:: images/connections.png

You can monitor this in your Heroku Dashboard by going to the **Resources** tab and clicking on the purple Postgres link. The **Connections** graph here will show you how close you are to the limit. The first tier up from the 'free' Hobby tiers (i.e. ``Standard-0``) has a connection limit of 120 which can be used to overcome these limits if you do encounter them.

Redis Limits
============

Tabbycat uses two types of Redis add-on. The official Heroku Redis add-on is used to enable the pages of Tabbycat that display live information, such as the check-ins page, the adjudicator allocation page, and the round results page. The Redis Labs Heroku add-on is used to enable the caching of pages, as described above.

Both types of add-on have connection limits that, if hit, will degrade performance. However, in practice these connection limits are very rarely hit because connections are maintained extremely briefly, or only for very particular types of traffic. As with Postgres, you can click-through to each add-on to examine how close your site is to hitting this connection limit.
Channels and cache on PostgreSQL
=================================

The default Redis Labs add-on has a connection limit of 30. This should be sufficient for almost all tournaments — only at WUDC-levels of traffic have we seen that limit breached (to a peak of 118). Upgrading the Redis Labs add-on to the first non-free tier expands the connection limit to 256. This upgrade should only be strictly required for WUDC, but is also a good precaution for EUDC/Australs scale tournaments.
By default, live pages and public page caching use the same Heroku Postgres database as the rest of Tabbycat (`channels_postgres <https://github.com/danidee10/channels_postgres>`_ and Django's database cache). You do not need Redis add-ons for new deployments.

The official Heroku Redis has a connection limit of 20. Even at WUDC's scale the most connections ever observed were 13, so an upgrade should not be necessary.
If you prefer the previous behaviour, set the config var ``TABBYCAT_USE_REDIS_CHANNELS_CACHE`` to ``1``, provision Redis, and add the ``django-redis``, ``channels-redis``, and ``redis`` packages to your environment.

Mirror Admin Sites
==================
@@ -134,11 +128,11 @@ If you *really* want to be safe, or are unable to resolve traffic issues and una

.. warning:: This requires some technical knowledge to set up and hasn't been rigorously tested; it has worked in our experience, but not extensively. If using this, make sure you back up (and know how to restore backups) before setting one up.

To do so you would deploy a new copy of Tabbycat on Heroku as you normally would. Once the site has been setup, go to it in the Heroku Dashboard, click through to the **Resources** tab and remove the Postgres and Redis Add-ons. Using the `Heroku Command Line Interface <https://devcenter.heroku.com/articles/heroku-cli>`_ run this command, substituting ``YOUR_APP`` with your *primary* tab site's name (i.e. the app that you had initially setup before this)::
To do so you would deploy a new copy of Tabbycat on Heroku as you normally would. Once the site has been setup, go to it in the Heroku Dashboard, click through to the **Resources** tab and remove the Postgres add-on (and any Redis add-ons, if present). Using the `Heroku Command Line Interface <https://devcenter.heroku.com/articles/heroku-cli>`_ run this command, substituting ``YOUR_APP`` with your *primary* tab site's name (i.e. the app that you had initially setup before this)::

$ heroku config --app YOUR_APP

Here, make a copy of the ``DATABASE_URL`` and ``REDIS_URL`` values. They should look like ``postgres://`` or ``redis://`` followed by a long set of numbers and characters. Once you have those, go to the *Settings* tab of the Heroku dashboard for your *mirror* tab site. Click **Reveal Config Vars**. There should be no set ``DATABASE_URL`` or ``REDIS_URL`` values here — if there are check you are on the right app and that the add-ons were removed as instructed earlier. If they are not set, then add in those values, with ``DATABASE_URL`` on the left, and that Postgres URL from earlier on the right. Do the same for ``REDIS_URL`` and the Redis URL. Then restart the app using the link under **More** in the top right.
Copy the ``DATABASE_URL`` value. Go to the *Settings* tab of the Heroku dashboard for your *mirror* tab site, click **Reveal Config Vars**, and set ``DATABASE_URL`` to match the primary site. If you use optional Redis for channels or cache, also copy ``REDIS_URL`` (and related vars) from the primary app. Then restart the app using the link under **More** in the top right.

Once you visit the mirror site it should be setup just like the original one, with changes made to one site also affecting the other as if they were just a single site.

Expand All @@ -151,8 +145,7 @@ As a quick and rough benchmark, here is a list of typical prices you would encou
- A tournament of this size will require an upgraded database tier for the time when you are adding new data; i.e. during registration and rounds. Once the tab is released (and no further data changes needed) however you can downgrade it back to the ``Hobby Dev`` tier.
- 1x ``Hobby Dyno`` ($7/month each) run all day for 7 days = ~$2
- As recommended, 1 hobby dyno should be run as a baseline in order to see the metrics dashboard; but this can be downgraded a day or so after the tab has been released and traffic is sparse.
- 1X ``Redis Labs 100mb Plan`` ($10/month) run for 7 days = ~$2
- The upgraded version of Redis is worth running as a precaution while the site is showing draws and the full tab
- Optional Redis for channels/cache (if not using the default PostgreSQL setup) — similar monthly cost to managed Redis add-ons if you choose that path
- 3x ``Standard 1X Dyno`` ($25/month each) run 10 hours a day for 4 days = ~$4
- This higher quantity of dynos should only be necessary during traffic spikes (i.e. draw releases, immediately after round advances, and tab release) but unless you want to be constantly turning things on/off its usually easier just to upgrade them at the start of each day of in-rounds (or when the tab is published) and downgrade them at the end of each day. As mentioned earlier, you should occasionally check the *Dyno Load* in the Metrics area and adjust the number of dynos as needed.
- ``Autoscaled Performance M Dynos`` ($250/month each) average of 5 run for 1 hour = ~$2
25 changes: 9 additions & 16 deletions docs/install/linux.rst
@@ -28,7 +28,7 @@ Short version
::

curl -sL https://deb.nodesource.com/setup_16.x | sudo -E bash - # add Node.js source repository
sudo apt install python3.11 python3-distutils pipenv postgresql libpq-dev nodejs gcc g++ make redis-server
sudo apt install python3.11 python3-distutils pipenv postgresql libpq-dev nodejs gcc g++ make
git clone https://github.com/TabbycatDebate/tabbycat.git
cd tabbycat
git checkout master
@@ -45,6 +45,7 @@ That should open your Pipenv shell, then inside it run::

cd tabbycat
dj migrate
dj createcachetable
npm run build
dj collectstatic
dj createsuperuser
@@ -112,24 +113,15 @@ Some of the Python packages require GCC, G++ and Make in order to install::

$ sudo apt install gcc g++ make

1(e). Redis
-----------
*Redis is an in-memory data structure store, used as a message broker and cache.*
1(e). Channels, caching, and PostgreSQL
---------------------------------------
Real-time pages (adjudicator allocation, check-ins, round results) and public page caching use your existing PostgreSQL database via `channels_postgres <https://github.com/danidee10/channels_postgres>`_ and Django's database cache backend. You do **not** need Redis for a normal installation.

Tabbycat requires Redis for two critical functions:
After running migrations, create the cache table (this is also done automatically on Heroku, Render, and Docker)::

1. Asynchronous Background Tasks: Redis serves as a message broker for Django Channels, handling real-time features like live adjudicator allocation, check-ins updates, and round results display.
$ dj createcachetable

2. Page Caching: Redis caches frequently accessed public pages (draws, standings, results) to improve performance during high-traffic periods, especially during tournament events.

Install and start Redis using::

$ sudo apt install redis-server
$ sudo systemctl enable --now redis-server

After installation, Redis will automatically start and be configured to launch on system boot. You can verify it's running with::

$ sudo systemctl status redis-server
Advanced deployments can still use Redis by installing ``redis-server`` and the Python packages ``django-redis``, ``channels-redis``, and ``redis``, then setting ``TABBYCAT_USE_REDIS_CHANNELS_CACHE=1`` (Heroku/Render) or uncommenting the Redis blocks in **settings/local.py** (local installs). See **settings/heroku.py** for details.
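
As a sketch of what the database-backed cache implies for settings, the fragment below shows Django's standard ``DatabaseCache`` backend wired to the cache table. This is an assumption about the configuration, not copied from Tabbycat's settings; the table name ``tabbycat_cache`` is taken from the scaling guide changes in this same PR:

```python
# Hypothetical sketch of a database-backed cache setting. The backend path is
# Django's built-in DatabaseCache; the table name is an assumption from the
# scaling docs, and is what `dj createcachetable` would create.
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.db.DatabaseCache",
        "LOCATION": "tabbycat_cache",  # the table createcachetable creates
    }
}
```

With a setting shaped like this, ``createcachetable`` reads ``LOCATION`` and creates the table in the configured database, which is why no Redis URL is required for the default setup.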

.. _install-linux-source-code:

@@ -233,6 +225,7 @@ e. Navigate to the **tabbycat** sub-directory, initialize the database, compile

(tabbycat-9BkbSRuB) $ cd tabbycat
(tabbycat-9BkbSRuB) $ dj migrate
(tabbycat-9BkbSRuB) $ dj createcachetable
(tabbycat-9BkbSRuB) $ npm run build
(tabbycat-9BkbSRuB) $ dj collectstatic
(tabbycat-9BkbSRuB) $ dj createsuperuser