Category: Web Development

Using django-oauth-toolkit for the Client Credentials OAuth Flow

I've been wanting to secure my API - so that unidentified and unauthorized parties cannot view, update, create or delete data.
This API is internal to the company and will only be used by other services - in other words, no end users.
Hence the delegation of authorization need not happen, and the services will be authenticating directly with the API.

That is why the OAuth client credentials flow is used - it is intended for server-to-server communication (as far as I know).

There is a lot of conflicting information on OAuth, but RFC 6749 (the OAuth 2.0 spec) describes client credentials as follows:

1.3.4.  Client Credentials

   The client credentials (or other forms of client authentication) can
   be used as an authorization grant when the authorization scope is
   limited to the protected resources under the control of the client,
   or to protected resources previously arranged with the authorization
   server.  Client credentials are used as an authorization grant
   typically when the client is acting on its own behalf (the client is
   also the resource owner) or is requesting access to protected
   resources based on an authorization previously arranged with the
   authorization server.

Nordic APIs' Securing the API Stronghold book mentions:

OAuth: It's for delegation, and delegation only

I agree, except when the client is the resource owner, as in the client credentials case.
In that case surely there is no delegation?

Should we use it?

What is the advantage over a basic auth or token authentication method?

It seems to just be an added step for the client, but the key is that the token expires. So if a bad actor gets our token, it will not be long before it is of no use.
The client id and secret are what is used to generate tokens for future calls to the API.
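As a sketch of what that looks like on the wire (per RFC 6749 section 4.4 - the client id and secret here are made up), the client authenticates with HTTP Basic auth and asks the token endpoint for a token:

```python
import base64
from urllib.parse import urlencode

def build_token_request(client_id: str, client_secret: str):
    """Build headers and body for an OAuth 2 client credentials token request."""
    # RFC 6749 recommends authenticating the client with HTTP Basic auth:
    # base64("<client_id>:<client_secret>")
    creds = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    headers = {
        "Authorization": f"Basic {creds}",
        "Content-Type": "application/x-www-form-urlencoded",
    }
    body = urlencode({"grant_type": "client_credentials"})
    return headers, body

headers, body = build_token_request("my-service", "s3cret")
```

POST that to the provider's token endpoint (with django-oauth-toolkit the default is `/o/token/`) and the JSON response carries the short-lived `access_token` along with `expires_in`.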

Difference between the Resource Owner Password flow and Client Credentials

django-oauth-toolkit provides both, and its example uses the resource owner password flow.
In both cases the resource owner is the client - so there is no delegation.

So what is the difference?

I checked on stackoverflow, and it turns out I was wrong.

In the resource owner password flow, the resource owner (end user) trusts the client application enough to give it their username and password.
We don't really want this.
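The difference shows up in the token request bodies themselves (a sketch; the username and password values are made up). With the password grant, the client forwards the end user's own credentials; with client credentials it sends only its grant type, the client id and secret going in the Basic auth header:

```python
from urllib.parse import urlencode

# Resource owner password flow: the end user hands their own credentials
# to the client application, which forwards them to the token endpoint.
password_grant = urlencode({
    "grant_type": "password",
    "username": "end_user",
    "password": "user_password",
})

# Client credentials flow: the client acts on its own behalf - no user at all.
client_credentials_grant = urlencode({"grant_type": "client_credentials"})

print(password_grant)
print(client_credentials_grant)  # grant_type=client_credentials
```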

Implementing Client Credentials flow

Since users are not going to use the API and only services/clients will, I want to disable the other authorization flows and disable registration of clients.

I will manage the clients and they will be the resource owners.

So following the django-oauth-toolkit documentation and setting it up for client credentials should get you most of the way.

Permissions are significantly different from Django Permissions

What I found out during testing is that django-oauth-toolkit implements its own separate permissions. So if you were wanting to use the Django model permissions (add, change, view and delete), you won't be able to.

Wait...I spoke too fast.

You can allow this with:

permission_classes = [IsAuthenticatedOrTokenHasScope, DjangoModelPermissions]

However, that means you actually have to test with scopes if you expect a client to use it with OAuth and not Django auth.

The view to use for this is ClientProtectedResourceView.

Is there a speed gain when moving from Apache Mod PHP to Nginx PHP-FPM?

I had a chance to deploy one of my running websites on another virtual machine.
I wanted to improve performance, as customers are paying for the product, and give them a faster experience.

On the old site I used Apache with mod_php to run the site. On the new site I went with Nginx and PHP-FPM.

The Server Setups

Both websites use the Yii Framework on PHP with a MySQL database. There have been some performance tweaks on the old site. On the new site I left everything standard.

Old Site:

  • 2GB RAM (free 222MB)
  • CPU(s): 2
  • Site shared - vhosts with a few other sites
  • HTTPS enabled (Let's Encrypt)
  • PRETTY_NAME="Debian GNU/Linux 8 (jessie)"
  • Server hosted in the Netherlands (testing from South Africa)

New Site:

  • 2GB RAM (Free 1161MB)
  • CPU(s): 2
  • Site dedicated, no other site on the server
  • No HTTPS
  • PRETTY_NAME="Ubuntu 18.04.4 LTS"
  • Server hosted in South Africa (Testing from South Africa)

Method

The method for the performance test is as follows.

  1. Enable response time logging in the access logs of both Apache and Nginx - I wrote a post on this for Apache, and there are docs online for Nginx
  2. Browsing Test - I will browse as a non-logged-in and a logged-in user on both sites in isolation. The statistics of response times will be recorded from the user's perspective in the browser and from the log response times.
  3. WebPage Test - I will use Web Page Test to compare both sites for a few pages.
  4. Load Test - I will test concurrent load with locustio
  5. Sitespeed.io Test - Test using sitespeed.io open source sitespeed testing utility

This will not be a scientific comparison - it is purely anecdotal.
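For Nginx, step 1 boils down to adding `$request_time` to the log format (a sketch; the format name `timed` is my own):

```nginx
# in the http {} block: $request_time is the full request time in seconds
# (millisecond resolution), measured until the last byte is sent to the client
log_format timed '$remote_addr [$time_local] "$request" '
                 '$status $body_bytes_sent $request_time';

access_log /var/log/nginx/access.log timed;
```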

Browsing Test

| Page | Nginx + PHP-FPM (ms) | Apache + ModPHP (ms) | Difference |
|---|---|---|---|
| Home Page | 1380 | 1660 | 20% |
| Contact Us | 1060 | 1310 | 24% |
| About Us | 997 | 1280 | 28% |
| Login (POST) | 1410 | 7550 | 435% |
| Portfolio (DB intensive) | 1920 | 6960 | 263% |
| Calculator | 946 | 1310 | 38% |
| Chart (TTFB) | 105 | 348 | 231% |

From the table above it is safe to say that, without a shadow of a doubt, the new site is faster.

Naturally the server being much closer helps. Instead of 9354 km away, the new server is about 50 km away. The average ping latency is 187 ms to the old server and about 12 ms to the new one.

WebPage Test

I tested both sites from South Africa; here are the screenshots and relevant info below:

speed-test-nginx-php-fpm
Speed Test of the New Nginx PHP-FPM website
speed-test-apache-php
Speed Test of the Old Apache ModPHP website
| WebPageTest Metric | Nginx + PHP-FPM | Apache + ModPHP |
|---|---|---|
| First Byte | 102 | 895 |
| Speed Index | 769 | 1660 |
| Document Complete Time | 3850 | 3353 |
| Document Complete Requests | 36 | 33 |
| Fully Loaded Time | 4746 | 4087 |
| Fully Loaded Requests | 48 | 46 |

Surprisingly, the new website performed worse in total: it was faster to first byte, but full load was slower. Furthermore, it does no caching, and WebPageTest does not like that.

| WebPageTest Metric | Nginx + PHP-FPM | Apache + ModPHP |
|---|---|---|
| First Byte | 126 | 913 |
| Speed Index | 800 | 1681 |
| Document Complete Time | 6825 | 2989 |
| Document Complete Requests | 18 | 16 |
| Fully Loaded Time | 6869 | 3215 |
| Fully Loaded Requests | 19 | 17 |

The results of this were also pretty annoying. It seems that WebPageTest wants me to cache static content, gzip assets and use a CDN. Then it will be happy.

Let me add gzip and static caching to nginx and see.
Just uncomment the gzip section in the default nginx.conf.
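The relevant directives look roughly like this (a sketch; the file extensions and 30-day expiry are my own choices):

```nginx
# http {} block of nginx.conf - these ship commented out in the default config
gzip on;
gzip_types text/plain text/css application/json application/javascript
           text/xml application/xml application/xml+rss image/svg+xml;

# server {} block: let browsers cache static assets
location ~* \.(css|js|png|jpg|jpeg|gif|ico|svg)$ {
    expires 30d;
    add_header Cache-Control "public";
}
```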

After updating, things are looking a bit better:

add-gzip-and-static-caching-nginx
After updating the new site enabling gzip compression and browser caching

I then removed the twitter feed and things were better:

Old Site:

New Site:

all-a-web-page-test

Load Test

I created a test to make GET requests against the server while not logged in. The test spawns users at a rate of 1 per second.

The new site performed as follows

number-of-users-nginx-php-fpm
Number of users nginx php-fpm
response-times-(ms)-php-fpm-nginx
Response times (ms) php-fpm nginx
total-requests-per-second-nginx-php-fpm
Total requests per second nginx php-fpm

So it can run stably at 80 to 100 RPS.

The old site performed terribly. When I got up to 2 RPS, the monitoring for all the other sites on the server was reporting them as down. It was weird that the RPS didn't grow with the number of users as fast on the old site - perhaps locust knew it couldn't handle that spawn rate.

load-test-apapche-mod-php-total-requests

apache-mod-php-load-test-response-time

user-growth-apache-modphp-old-site-loadstest

Sitespeed.io

To do a more comprehensive test I employed sitespeed.io. I ran the test against both sites and here are the results...

The Old Mod-PHP and Apache site

sitespeed-io-for-old-apache-mod-php-site

The New PHP-FPM and Nginx site

sitespeed-io-for-new-nginx-php-fpm-site

Update: Adding HTTP/2

I was using the default nginx config, which is HTTP/1.1, so I updated it to serve HTTP/2.
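In nginx this is a change to the `listen` directives. Browsers only speak HTTP/2 over TLS, so this sketch assumes certificates are already in place (the domain and paths are placeholders):

```nginx
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # ... the rest of the existing server block stays the same
}
```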

Now, I have switched over the performance site to be the current site.

how-to-trade-http2-speed-test

  • First byte is a bit slower: 0.192s vs 0.095s on the HTTP/1.1 version
  • Start render is about 100ms faster: 0.400s on HTTP/2 vs 0.500s on HTTP/1.1
  • Document complete and fully loaded are, however, much faster on HTTP/2 - even with the ~100ms first-byte handicap, they come in about 300ms faster, probably due to HTTP/2 multiplexing asset fetches.

Conclusion

Some tests were conclusive - others still hang in the balance.
From a load testing and initial response point of view, the new site clearly wins. The biggest gain comes from handling concurrent users and load. Another significant factor was moving the server closer to the users.

The PHP-FPM with Nginx site can handle 40 or more times the load of the old site, and has a faster response even with the 200ms handicap.

Next Steps

The next steps would be to look at how to maximise performance with nginx and php-fpm.

Deploying a Django website to an Ubuntu 16.04 server with Python 3.6, Gunicorn, Nginx and MySQL

Getting up and running on your local development setup, and being able to build and see the changes you are making, is one of the numerous reasons we like Django.

The built in development webserver helps a lot in this regard.

As soon as we have to deploy the site on a server and make it public so that other people can see the project and its progress, the job is more do-it-yourself and can be hard at times. In this post, I will show you how I did it with Ubuntu 16.04, nginx, gunicorn and a MySQL database.

I tried to use an ansible script to build an idempotent setup, but it became tedious, so I decided that the initial server configuration will be a snowflake while change deploys will be done with ansible.

ubutnu16.04-django-nginx-mysql-gunicorn


There are a few topics in the flow of getting the site deployed:

  1. Provisioning a server
  2. Configuring the server with requirements
  3. Settings
  4. Requirements
  5. Basic Django Setup
  6. Getting gunicorn to work
  7. Configure Nginx to Proxy Pass

Provisioning a Server

I want an Ubuntu 16.04 server, mainly because it has long term support and I am used to Ubuntu. With your hosting client, create the server and then ssh in with:

ssh user@123.345.566.789

The best thing to do now is take care of basic security and the initial setup of the Ubuntu 16.04 server.

The most important part is creating an ssh key, logging in with it and disabling password login.

Configuring the server with requirements

One key thing when installing Python 3.6 on Ubuntu 16.04 is that it is not part of that release; it was released later.

Ensure you install the following before compiling (otherwise the build won't work, or pip will give you an SSL issue):

sudo apt install zlib1g-dev build-essential libssl-dev libffi-dev

You should build Python 3.6 from source and not use a PPA. This tutorial on installing Python 3.6 from source on Ubuntu 16.04 is the one I used. Update: if you are installing Python 3.7, you will also need libffi-dev.

To set python to point to python3.7:

sudo update-alternatives --install /usr/bin/python python /usr/local/bin/python3.7 1

Everything else can be installed with apt:

  • python-mysqldb
  • mysql-server
  • nginx
  • git
  • libmysqlclient-dev

Remember when installing mysql-server that you will set a root password. You can run mysql_secure_installation to ensure the server is secure.

The nginx version that installs will be 1.10.3, which is a tad old; if you want a recent version you can check the Linux packages for installing the latest nginx.

Settings

In your Django project, it is important to split settings.py into a directory with files:

  • settings/base.py
  • settings/local.py
  • settings/staging.py

base.py contains global generic settings; in local.py and staging.py you can import those settings with:

from .base import *

and then override and add the settings you need.

Remember that when running a manage.py command you should specify the settings with:

./manage.py collectstatic --settings=config.settings.staging

Although setting an environment variable is better:

export DJANGO_SETTINGS_MODULE=config.settings.staging

Another thing: typing the settings flag on your local machine will become tedious, so it is better to default to your local settings in manage.py with:

if __name__ == "__main__":
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "config.settings.local")

Remember to ensure mail servers and external integrations are not set to production or live settings when they should not be.

Requirements

A good idea is to split up your requirements the same way you did for settings.

  • requirements/base.txt
  • requirements/local.txt
  • requirements/production.txt

You can inherit from base with -r base.txt as the first line.
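For example (a sketch - the pins are illustrative, not what the project actually uses):

```text
# requirements/base.txt
Django>=2.2,<3.0
mysqlclient

# requirements/production.txt
-r base.txt
gunicorn
```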

Basic Django Setup

Create the database and user

Now you can create a MySQL schema and a MySQL user for that database. See this tutorial on how to create a new user and grant permissions.
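In short, the SQL boils down to the following (database name, user and password are placeholders):

```sql
CREATE DATABASE myproject CHARACTER SET utf8mb4;
CREATE USER 'myproject_user'@'localhost' IDENTIFIED BY 'choose-a-password';
GRANT ALL PRIVILEGES ON myproject.* TO 'myproject_user'@'localhost';
FLUSH PRIVILEGES;
```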

Create the Environment

python3.6 -m venv env

source env/bin/activate

Now ensure that the settings environment variable is set in the environment by adding the following to the end of env/bin/activate:

export DJANGO_SETTINGS_MODULE=config.settings.staging

deactivate and reactivate with source env/bin/activate

Note that you can put the Django setting above in the postactivate hook only if you use virtualenvwrapper; that hook does not exist with the native Python venv module.

Setup the database and static files

./manage.py migrate

./manage.py collectstatic

./manage.py createsuperuser

Test the site works

With ALLOWED_HOSTS = ['xxx', ] and DATABASES updated in settings, you can test the site with the development server:

./manage.py runserver 0.0.0.0:8000

You will need to enable the 8000 port first with sudo ufw allow 8000

You can test if the site works at: http://server_domain_or_IP:8000

Getting Gunicorn to Work

Great news. If everything has worked up till now we can now get gunicorn to work.

pip install gunicorn

Make sure to add it to your requirements/production.txt.

Run gunicorn:


gunicorn --bind 0.0.0.0:8000 settings_module.wsgi

Now you can test again.

If everything is good, we want this service to be managed by the OS so that it starts automatically on boot and can be monitored with logs. For that, unfortunately, we need to create a systemd service.

sudo vim /etc/systemd/system/gunicorn.service

Add the following:


[Unit]
Description=gunicorn daemon
After=network.target

[Service]
User=<server_user>
Group=www-data
WorkingDirectory=/var/www/<project_name>
ExecStart=/var/www/<project_name>/env/bin/gunicorn --access-logfile - --workers 3 --bind unix:/var/www/<project_name>/<project_name>.sock config.wsgi:application
EnvironmentFile=/var/www/<project_name>/.gunicorn_env

[Install]
WantedBy=multi-user.target

Note that an environment file, .gunicorn_env, is specified.

The contents of this file are all the environment variables needed - in our case just:

DJANGO_SETTINGS_MODULE=config.settings.staging

You now need to start and enable the service:

sudo systemctl start gunicorn

sudo systemctl enable gunicorn

Configure Nginx to Proxy Pass

The final step is serving the site through nginx

sudo vim /etc/nginx/sites-available/<project_name>

Add the following:


server {
    listen 80;
    server_name <server_domain_or_ip>;

    location = /favicon.ico { access_log off; log_not_found off; }
    location /static/ {
        root /var/www/<folder_name>;
    }

    location / {
        include proxy_params;
        proxy_pass http://unix:/var/www/<folder_name>/<project_name>.sock;
    }

}

You then need to create a symlink in the sites-enabled folder and remove the default site's symlink there.

sudo ln -s /etc/nginx/sites-available/myproject /etc/nginx/sites-enabled

Check your nginx config with:

sudo nginx -t

then restart nginx

sudo service nginx restart

Open port 80 and delete the old 8000 port rule:

sudo ufw delete allow 8000

sudo ufw allow 'Nginx Full'

Done!

Conclusion

So it is not that difficult, but it is difficult if you are trying to create a repeatable setup, which I hope one of you reading this will do. Another thing you can do is add whitenoise for simplified static file serving, which I have not tried yet.
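For reference, wiring in whitenoise is (per its documentation - again, I have not tried it on this site) a settings-only change:

```python
# settings/base.py - whitenoise sits directly after SecurityMiddleware
MIDDLEWARE = [
    "django.middleware.security.SecurityMiddleware",
    "whitenoise.middleware.WhiteNoiseMiddleware",
    # ... the rest of your middleware
]

# compressed, cache-busted static files served by the Django process itself
STATICFILES_STORAGE = "whitenoise.storage.CompressedManifestStaticFilesStorage"
```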

If you have any issues you can comment below or troubleshoot on the source of this article

Update: Using Pipenv

So pipenv is now a recommended way to manage both pip and your virtual environment; here are a few modifications to the commands:

Install pipenv:


sudo pip3 install pipenv

#Install dependencies
pipenv install

#Migrate
pipenv run ./manage.py migrate

# Remember to put environment variables in .bashrc

# Test running
pipenv run ./manage.py runserver 0.0.0.0:8000

pipenv install gunicorn

# Test gunicorn
pipenv run gunicorn --bind 0.0.0.0:8000 settings_module.wsgi

# Before changing the gunicorn config we need to find where gunicorn is
pipenv --venv

# Replace the gunicorn binary location in the systemd service with the path from the previous command

Important Note for CentOS 7

Disable SELinux if you get this nginx error:


2019/06/20 15:29:42 [crit] 29877#0: *12 connect() to unix:/var/www/window/window.sock failed (13: Permission denied) while connecting to upstream, client: 10.200.1.249, server: _, request: "GET / HTTP/1.1", upstream: "http://unix:/var/www/window/window.sock:/", host: "10.200.0.115"

so turn off SELinux (temporarily, until reboot) with:


sudo setenforce 0