Login to a website via Python – how to deal with CSRF?

I’m using Python 3 for a script that will monitor updates in a user’s profile on a webpage. The login to this site is protected by CSRF countermeasures, which is a good thing. However, I can’t get my script to log in to this site.

  • My approach using mechanicalsoup:

    import mechanicalsoup

    browser = mechanicalsoup.Browser()
    login_page = browser.get(base_url)
    login_form = login_page.soup.select(".form-signin")[0]

    login_form.find(attrs={"name": "username"})['value'] = 'username'
    login_form.find(attrs={"name": "password"})['value'] = 'password'

    page2 = browser.submit(login_form, login_url)
    print(page2.text)

  • My approach using robobrowser:

    import re
    from robobrowser import RoboBrowser

    browser = RoboBrowser(history=True)
    browser.open(base_url)
    form = browser.get_form(action='/login/')

    form["username"] = 'username'
    form["password"] = 'password'

    browser.submit_form(form)
    print(browser.parsed)  # print the page returned after submitting the form

In both cases I end up with an HTTP status of 403 and a message saying “CSRF verification failed. Request aborted.”

  • Any ideas how to fix this?
  • The form in question has a hidden input containing a CSRF token. I guess mechanicalsoup and robobrowser will submit this input as well. Am I right? Or do I have to treat it specially? (A manual sketch of that token handling with plain requests follows this list.)
  • I thought the session used by these two packages would handle everything like cookies and so on. Is there something I’ve missed?
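
For reference, here is a minimal sketch of the token handling these libraries are supposed to do for you, using plain requests and BeautifulSoup. The field name csrfmiddlewaretoken is an assumption (it is Django’s default, and the 403 message above looks like Django’s CSRF middleware); check the name of the hidden input on your login form. base_url and login_url are the same placeholders as in the snippets above.

    import requests
    from bs4 import BeautifulSoup

    session = requests.Session()
    resp = session.get(base_url)                   # GET the login page; this sets the CSRF cookie
    soup = BeautifulSoup(resp.text, 'html.parser')

    # Assumed field name: Django calls it 'csrfmiddlewaretoken'
    token = soup.find('input', {'name': 'csrfmiddlewaretoken'})['value']

    payload = {
        'username': 'username',
        'password': 'password',
        'csrfmiddlewaretoken': token,              # resubmit the hidden token with the credentials
    }
    headers = {'Referer': base_url}                # some setups also check the Referer header
    resp = session.post(login_url, data=payload, headers=headers)
    print(resp.status_code)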

Best answer

I got the robobrowser variant to work by setting the Referer header. (The 403 message above appears to come from Django’s CSRF middleware, which on HTTPS requests also does a strict Referer check, so a missing Referer header alone can fail verification even when the token is submitted correctly.)

browser.session.headers['Referer'] = base_url

So the complete code that worked for me is the following:

import re
from robobrowser import RoboBrowser

browser = RoboBrowser(history=True)
browser.open(base_url)
form = browser.get_form(action='/login/')

form["username"] = 'username'
form["password"] = 'password'
browser.session.headers['Referer'] = base_url

browser.submit_form(form)
print(browser.parsed)  # print the page returned after submitting the form
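
The same fix should in principle carry over to the mechanicalsoup variant, since mechanicalsoup.Browser also wraps a requests.Session. A sketch along the lines of the question’s snippet (untested, same placeholder URLs and field names):

import mechanicalsoup

browser = mechanicalsoup.Browser()
browser.session.headers['Referer'] = base_url   # set the Referer before submitting the form

login_page = browser.get(base_url)
login_form = login_page.soup.select(".form-signin")[0]
login_form.find(attrs={"name": "username"})['value'] = 'username'
login_form.find(attrs={"name": "password"})['value'] = 'password'

page2 = browser.submit(login_form, login_url)
print(page2.text)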