Category: django

Sending Emails Asynchronously with Django-Celery-Email and RabbitMQ

I am using django-registration-redux to register and activate users. It sends emails for account activation and password setup. Unfortunately, sending is synchronous: the request blocks until the email has gone out successfully, so requests take quite a while. We want all email to go through Celery, and we don’t want to write task-specific code for it, so we make use of django-celery-email.

With it, Celery sends the messages asynchronously while the request returns immediately.
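The idea in miniature, as a toy standard-library sketch (this is not the real Celery API): the view only drops a message on a queue and returns at once, while a separate worker drains the queue and does the slow SMTP work.

```python
import queue
import threading
import time

outbox = queue.Queue()  # stands in for the broker (RabbitMQ)

def worker():
    # stands in for the Celery worker: drains the queue in the background
    while True:
        message = outbox.get()
        time.sleep(0.1)  # simulate a slow SMTP conversation
        outbox.task_done()

threading.Thread(target=worker, daemon=True).start()

def register_view():
    # the "view" only enqueues the message; it does not wait for delivery
    start = time.time()
    outbox.put({'to': 'user@example.com', 'subject': 'Activate your account'})
    return time.time() - start

elapsed = register_view()
print(f'request returned in {elapsed:.4f}s')  # well under the 0.1s "send"
outbox.join()  # wait for the worker before exiting (Celery keeps running)
```

The request-handling time no longer includes the delivery time, which is exactly what the Celery setup below buys us for real.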

Celery requires a broker to send and receive messages, so you have a choice:

  • RabbitMQ
  • Redis
  • Amazon SQS

We will use RabbitMQ. Installing the latest version on Ubuntu looks tricky right off the bat: you need a specific Erlang version and have to add extra apt repositories.

I’m going to keep my life simple and install the distribution package:


sudo apt install rabbitmq-server

Apparently if you do this on Ubuntu 16.04, rabbitmq-server is already running in the background afterwards, although I didn’t see this message:


Starting rabbitmq-server: SUCCESS

Creating the Celery File

Create a celery.py file in your project root. If you are using a special settings/config location you can put it there next to wsgi.py.


import os
from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'mysite.settings')

app = Celery('mysite')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()

This code is taken from SimpleIsBetterThanComplex’s RabbitMQ and Celery setup guide. You then need to ensure that Celery is imported when Django starts, in the project’s __init__.py:


from .celery import app as celery_app

__all__ = ['celery_app']
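A side note on the os.environ.setdefault call in celery.py: it only sets DJANGO_SETTINGS_MODULE when the variable is not already present, so a value exported in the shell (as we do for the daemons later) wins over the hard-coded default:

```python
import os

# start from a clean slate for the demonstration
os.environ.pop('DJANGO_SETTINGS_MODULE', None)

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'mysite.settings')
# a second setdefault with a different value is a no-op
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'other.settings')

print(os.environ['DJANGO_SETTINGS_MODULE'])  # mysite.settings
```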

Install Django Celery Email

pip install django-celery-email

This will install the django-celery-email package along with the celery dependencies.

Then add this to your settings file (settings/staging.py in my case):


# Celery email sending

CELERY_BROKER_URL = 'amqp://localhost'

INSTALLED_APPS += (
    'djcelery_email',
    'django_celery_results'
)

CELERY_EMAIL_TASK_CONFIG = {
    'name': 'djcelery_email_send',
    'ignore_result': False,
}

CELERY_RESULT_BACKEND = 'django-db'

EMAIL_BACKEND = 'djcelery_email.backends.CeleryEmailBackend'
I’ve also set CELERY_RESULT_BACKEND (backed by django_celery_results) so that you can view the result of an AsyncResult. Run the migrations to create the tables the result backend needs:

./manage.py migrate

Getting Things to Work

For the message queue system to work, rabbitmq-server and celery need to be running.

rabbitmq-server usually creates a systemd unit and starts on boot. Celery, which was installed with pip, has to be daemonised separately.

You can test it out by running the worker directly, where config is the project name:

celery -A config worker -l info

On a production Ubuntu 16.04 system you can create a daemon:

Create a .celery_env file in the project root:


DJANGO_SETTINGS_MODULE=config.settings.staging

# Name of nodes to start
# here we have a single node
CELERYD_NODES="w1"

# Absolute or relative path to the 'celery' command:
CELERY_BIN="/var/www/django_project/env/bin/celery"

# App instance to use
# comment out this line if you don't use an app
CELERY_APP="config"

# How to call manage.py
CELERYD_MULTI="multi"

# Extra command-line arguments to the worker
CELERYD_OPTS="--time-limit=300 --concurrency=8"

# - %n will be replaced with the first part of the nodename.
# - %I will be replaced with the current child process index
#   and is important when using the prefork pool to avoid race conditions.
CELERYD_LOG_FILE="/var/www/django_project/log/celery.log"
CELERYD_LOG_LEVEL="INFO"

Then create the systemd unit:

sudo vim /etc/systemd/system/celery.service

With the following content:


[Unit]
Description=celery service
After=network.target

[Service]
Type=forking
User=staging
Group=www-data
EnvironmentFile=-/var/www/kid_hr/.celery_env
WorkingDirectory=/var/www/kid_hr
ExecStart=/bin/sh -c '${CELERY_BIN} multi start ${CELERYD_NODES} \
  -A ${CELERY_APP} \
  --logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
ExecStop=/bin/sh -c '${CELERY_BIN} multi stopwait ${CELERYD_NODES}'
ExecReload=/bin/sh -c '${CELERY_BIN} multi restart ${CELERYD_NODES} \
  -A ${CELERY_APP} \
  --logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'

[Install]
WantedBy=multi-user.target

Finally we need to ensure that the service starts after reboot:

sudo systemctl enable celery.service

Issues

If things aren’t working, try sending an email from the Django shell. If you get this error:


Task is waiting for execution or unknown. Any task id that is not known is implied to be in the pending state

then make sure the celery service has started (sudo systemctl status celery helps here); also ensure that you have restarted the entire box since enabling it.

Otherwise check the log files for any issues.


Deploying a Django Website to an Ubuntu 16.04 Server with Python 3.6, Gunicorn, Nginx and MySQL

Getting up and running on a local development setup, and being able to build and immediately see the changes you are making, is one of the many reasons we like Django.

The built in development webserver helps a lot in this regard.

As soon as we have to deploy the site on a server and make it public, so that other people can see the project and its progress, the job becomes more do-it-yourself and can be hard at times. In this post I will show you how I did it on Ubuntu 16.04 with nginx, gunicorn and a MySQL database.

I tried to use an Ansible script to build an idempotent setup, but it became tedious, so I decided the initial server configuration will be a snowflake while subsequent change deploys will be done with Ansible.


Getting the site deployed flows through a few steps:

  1. Provisioning a server
  2. Configuring the server with requirements
  3. Settings
  4. Requirements
  5. Basic Django Setup
  6. Getting gunicorn to work
  7. Configure Nginx to Proxy Pass

Provisioning a Server

I want an Ubuntu 16.04 server, mainly because it has long-term support and I am used to Ubuntu. With your hosting provider, create the server and then ssh in with:

ssh user@123.345.566.789

The best thing to do now is take care of basic security and the initial setup of the Ubuntu 16.04 server.

The most important part is creating an ssh key, logging in with it and disabling password login.

Configuring the server with requirements

One key thing: Python 3.6 is not part of Ubuntu 16.04, as it was released later.

You should build Python 3.6 from source rather than use a PPA. This tutorial on installing Python 3.6 from source on Ubuntu 16.04 is the one I used.

Everything else can be installed with apt:

  • python-mysqldb
  • mysql-server
  • nginx
  • git
  • libmysqlclient-dev

Remember that installing mysql-server prompts you to set a root password. You can run mysql_secure_installation to ensure the server is secure.

Settings

In your Django project it is important to split settings.py into a directory of files:

  • settings/base.py
  • settings/local.py
  • settings/staging.py

base.py contains the global generic settings; in local.py and staging.py you import those settings with:


from .base import *

and then override and add the settings you need.
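As a sketch, the split might look like this (the host name is a placeholder):

```python
# settings/base.py -- generic settings shared by every environment
DEBUG = False
ALLOWED_HOSTS = []

# settings/staging.py -- start from base, then override per environment
from .base import *

ALLOWED_HOSTS = ['staging.example.com']  # placeholder domain
```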
Remember that when running a manage.py command you should specify the settings with:

./manage.py collectstatic --settings=config.settings.staging

Although setting an environment variable is better:

export DJANGO_SETTINGS_MODULE=config.settings.staging

Typing the settings flag on your local machine will become tedious, so it is better to default to your local settings in manage.py:


if __name__ == "__main__":
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "config.settings.local")

Remember to ensure that mail servers and other external integrations do not point at production or live services when they should not.

Requirements

A good idea is to split up your requirements the same way you did for settings.

  • requirements/base.txt
  • requirements/local.txt
  • requirements/production.txt

You can inherit from base by putting -r base.txt as the first line.
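For example (packages left unpinned here for brevity; in practice you would pin versions):

```
# requirements/base.txt
django
mysqlclient

# requirements/local.txt
-r base.txt
django-debug-toolbar

# requirements/production.txt
-r base.txt
gunicorn
```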

Basic Django Setup

Create the database and user

Now you can create a MySQL schema and a MySQL user for that database. See this tutorial on how to create a new user and grant permissions.

Create the Environment

python3.6 -m venv env

source env/bin/activate

Now ensure that the settings environment variable is set in the environment by adding the following to the end of env/bin/activate:

export DJANGO_SETTINGS_MODULE=config.settings.staging

Deactivate and reactivate with source env/bin/activate.

Note that you can only put the Django settings export in postactivate if you use virtualenvwrapper; it does not work with the native Python venv module.

Setup the database and static files

./manage.py migrate

./manage.py collectstatic

./manage.py createsuperuser

Test the site works

With ALLOWED_HOSTS = ['xxx'] and DATABASES updated in settings, you can test the site with the development server:

./manage.py runserver 0.0.0.0:8000

You will need to open port 8000 first with sudo ufw allow 8000

You can test if the site works at: http://server_domain_or_IP:8000

Getting Gunicorn to Work

Great news: if everything has worked up till now, we can get gunicorn to work.

pip install gunicorn

Make sure to add it to your requirements/production.txt.

Run gunicorn:


gunicorn --bind 0.0.0.0:8000 config.wsgi

Now you can test again.

If everything is good, we want this service to be managed by the OS, so that it starts automatically on boot and can be monitored via logs. For that, unfortunately, we need to create another systemd service.

sudo vim /etc/systemd/system/gunicorn.service

Add the following:


[Unit]
Description=gunicorn daemon
After=network.target

[Service]
User=<server_user>
Group=www-data
WorkingDirectory=/var/www/<project_name>
ExecStart=/var/www/<project_name>/env/bin/gunicorn --access-logfile - --workers 3 --bind unix:/var/www/<project_name>/kid_hr.sock config.wsgi:application
EnvironmentFile=/var/www/<project_name>/.gunicorn_env

[Install]
WantedBy=multi-user.target

Note that the unit references an environment file, .gunicorn_env.

This file contains all the environment variables needed, in our case just:

DJANGO_SETTINGS_MODULE=config.settings.staging

You now need to start and enable the service:

sudo systemctl start gunicorn

sudo systemctl enable gunicorn

Configure Nginx to Proxy Pass

The final step is serving the site through nginx.

sudo vim /etc/nginx/sites-available/<project_name>

Add the following:


server {
    listen 80;
    server_name <server_domain_or_ip>;

    location = /favicon.ico { access_log off; log_not_found off; }
    location /static/ {
        root /var/www/<folder_name>;
    }

    location / {
        include proxy_params;
        proxy_pass http://unix:/var/www/<folder_name>/kid_hr.sock;
    }

}

You then need to symlink it into the sites-enabled folder and remove the default site’s symlink there.

sudo ln -s /etc/nginx/sites-available/<project_name> /etc/nginx/sites-enabled

Check your nginx config with:

sudo nginx -t

Then restart nginx:

sudo service nginx restart

Open port 80 and delete the old port 8000 rule:

sudo ufw delete allow 8000

sudo ufw allow 'Nginx Full'

Done!

Conclusion

So it is not that difficult, but it is difficult if you are trying to make it repeatable, which I hope one of you reading this will do. Another thing you can do is add whitenoise for simplified static file serving, which I have not tried yet.

If you have any issues, you can comment below or consult the source of this article for troubleshooting.