Nov 13, 2013
Recently I have been diving into using signals with Django, which of course are pretty neat.
I am working on a website for work which, in the most basic explanation, is a task management site. Recently I added the ability to subscribe to tasks and get emails, which I did by connecting to the post_save signal. I only email out when a task is changed, not when it is created (of course, no one would be subscribed to it yet). This worked flawlessly and "emails" out to anyone who is subscribed. I say that in quotes because I haven't actually hooked it up to a real SMTP server, and only use
python -m smtpd -n -c DebuggingServer localhost:1025
which will output any emails to stdout. But I digress… A problem arose when I was working on ordering tasks.
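Before I get to that problem, here is roughly what the emailing receiver looks like. This is just a sketch; my real code differs, and the Task model, its subscribers relation, and the addresses are made-up names for illustration:
from django.core.mail import send_mail
from django.db.models.signals import post_save
from django.dispatch import receiver

from app.models import Task  # hypothetical model name


@receiver(post_save, sender=Task)
def email_subscribers(sender, instance, created, **kwargs):
    # Newly created tasks can't have subscribers yet, so skip them.
    if created:
        return
    recipients = [user.email for user in instance.subscribers.all()]
    if recipients:
        send_mail(
            "Task updated: %s" % instance.title,
            "The task '%s' has been changed." % instance.title,
            "noreply@example.com",
            recipients,
        )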
I store an integer in the "ordering" column; any authenticated user can drag a row to a new location, and that reorders the task. I did this after I set up the emailing signal, so I didn't think about an email being sent out for EVERY task being changed.
I tried a lot of different things and debated some that would have been a bit messy. Among those ideas was storing the past values in another table, but that would get expensive fast. The reason I considered it was that I wanted to see if the ordering was the only thing that changed, and if so, not send an email. I eventually found a thread on StackOverflow that says to use update() on the queryset, which does not trigger the signal.
You can do it with something like this:
from app.models import ModelName

def reorder(request):
    new_order = request.POST.get('new_order', None)
    pk = request.POST.get('modelname_pk', None)
    if new_order:
        # update() runs a single SQL UPDATE and does not call save(),
        # so the post_save signal is never fired.
        ModelName.objects.filter(pk=pk).update(ordering=new_order)
I am not sure if this is the proper way to save changes without triggering a post_save signal, but it is the way that worked for me, so I figured I would document it.
Aug 06, 2013
My boss tasked me with getting the load time of 90 seconds (HOLY CARP!) on one page down. The first thing I did was install the Django Debug Toolbar to see what was really happening.
There are currently 2,000 users in the database. The way our model is set up, a UserProfile can have other UserProfiles attached to it in one of three M2M relations, which in the Django Admin causes 2,000 queries PER M2M field. This is very expensive; you obviously don't want 10,000 queries taking place, even when each one only takes 0.3ms.
The solution, after a day and a half of research, is to override the formfield_for_manytomany method in the Admin class for our UserProfile object.
Our solution is to add a prefetch for any M2M field that is related to the current model.
def formfield_for_manytomany(self, db_field, request, **kwargs):
    if db_field.__class__.__name__ == "ManyToManyField" and \
            db_field.rel.to.__name__ == self.model.__name__:
        kwargs['queryset'] = db_field.rel.to.objects.prefetch_related("user")
    return super(UserProfileInline, self).formfield_for_manytomany(
        db_field, request, **kwargs)
This goes inside our admin class UserProfileInline(admin.StackedInline). Simple, clean, and easy to drop into another ModelAdmin with minimal changes.
Another thing I pondered was setting all our M2Ms as raw_id_fields and then using Select2 or Chosen to query our UserProfiles as the related users were being selected. This would take a lot of load off the initial page load, but it is more of a band-aid than a real fix.
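For reference, that approach would have looked something like this sketch (the M2M field names are made up, and the Select2/Chosen wiring is left out):
from django.contrib import admin

from app.models import UserProfile


class UserProfileInline(admin.StackedInline):
    model = UserProfile
    # Render the M2M fields as raw ID inputs instead of huge select boxes.
    # These field names are hypothetical.
    raw_id_fields = ("mentors", "mentees", "colleagues")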
I also tried overriding the Admin class's queryset(self, request) method, but that did not affect anything.
Jul 02, 2013
I have a friend who is interested in becoming a Python developer. He has some Python experience with Codecademy, but of course he wants to take this a step further and develop on his own computer. I figured I'd give him a few pointers, and I know this has been rehashed a million times, but what the hell, why not blog on it again.
There are a few important things to learn besides the actual language itself. The first I am going to discuss deals with installing packages, followed closely by Python's path trickery. Finally, I'm going to wrap up by discussing some development software that could be used for any language, but that I use daily in my work as a Python Software Engineer. Let's get started.
PIP
Python is a wonderful language, but how useful would it be if you had to rewrite everything by hand? Not useful at all. That's why the lovely pip developers were born. PIP (executable pip) is a package manager written for Python. It's very simple to use, and in my opinion is way better than easy_install. To use pip you need to know at a minimum three commands.
pip install
This command does exactly what it says on the box. It queries PyPI (Python Package Index) and downloads the latest version of the package on the server. It then installs it to your site-packages.
pip uninstall
This deletes all files associated with the package supplied. 100% simple.
pip freeze
This shows what packages are installed on your system and what versions. If you supply --local it will show what packages are installed in your current environment.
These three commands will get you started with package management; there are more commands you can find by looking through the help documents.
Virtualenv
If you noticed, I mentioned a current environment in my previous pip freeze explanation; here is why. Python has a default place that it looks when you reference a package. This is generally something like /usr/lib/python2.7/site-packages/ or C:\Python27\lib. There is a set of scripts called virtualenv that creates an environment with a complete copy of your Python executable and a blank (unless you copy them over) site-packages directory. You can then activate the virtual environment and install any packages there. When it is activated you use those specific versions, no matter what version is installed on your system.
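If you are curious where your copy of Python is looking, you can print its search path:
# Print every directory Python searches when you import a package
import sys

for path in sys.path:
    print path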
Let's show an example of the first time use of virtualenv:
$ sudo pip install virtualenv # Only time you might need sudo, try without first.
$ virtualenv myenv # Create the virtual environment
$ source myenv/bin/activate # Activate the virtual environment
(myenv)$ python -c "import MYPACKAGE; print MYPACKAGE"
Notice how it shows your package is not in /usr/lib/python2.7/site-packages/? That's because the copied python inside the virtualenv uses its own site-packages instead. There are many reasons you would want to use a virtual environment. The most frequent is to preserve the version numbers of installed packages between a production and a development environment. Virtualenv is also useful if you do not have permission to install packages system-wide: you can create a virtual environment and install them there.
Virtualenvwrapper
After you create a virtual environment, you just run ``source bin/activate`` and it will activate the virtual environment. It can get tedious remembering exactly where all your virtual environments are, so some developers wrote some awesome scripts to fix that problem. This is called ``virtualenvwrapper``, and once you use it once, you will always want to use it. What it does is have you create a hidden directory in your home directory, set it in an environment variable, and reference that directory as the basis for your virtual environments. The installation is pretty easy: you can ``pip install virtualenvwrapper`` if you want, or download the package and compile it by hand.
Once installed correctly, you can run the command ``mkvirtualenv envname`` to create a virtual environment. You can then run ``workon envname`` from anywhere, and it will activate that environment. For example, you could be at ``/var/www/vhosts/www.mysite.com/django/``, run ``workon envname``, and it would activate the environment from there. This isn't a required package (none of them are, really…), as I went a couple of years without using ``virtualenvwrapper``, but it is very useful and now I use it every day. One tip from my setup of ``virtualenvwrapper``: I use the postactivate script to automatically change into the proper project directory for my environment. This also means I usually name my ``virtualenv`` after my project name for easy memory. It makes no sense to have a project called "cash_register" but the ``virtualenv`` be called "fez". This is how I change to the right project after activating my virtualenv. It goes in $WORKON_HOME/postactivate:
#!/bin/bash
# This hook is run after every virtualenv is activated.
# Figure out if it's a work or a personal project (example)
proj_name=$(echo $VIRTUAL_ENV|awk -F'/' '{print $NF}')
if [[ -e "/Users/tsouza/PersonalProjects/$proj_name" ]]
then
cd ~/PersonalProjects/$proj_name
else
cd ~/WorkProjects/$proj_name
fi
That about wraps up part one of this two-part blog series. Next time I will discuss how to use Git and how to configure SublimeText2 and Aptana Studio for use with Python. Stay tuned!
Jan 05, 2012
For work I had to write a custom url model field. When setting it up, this field accepts a default protocol and a list of other protocols.
When checking the protocol, the url is split by "://". If the split has one or two parts, then the url is validly formed.
In the event of a single-element split, there is no protocol specified, so the default protocol is prepended to the url. If there is a protocol, it is checked against the union of the default protocol and the other protocols. If it is not in that set, a ValidationError is raised letting the user know that the protocol is not accepted.
This can all be found on my GitHub [deadlink].
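The gist of the check, as a rough sketch (illustrative names, not the actual field code):
from django.core.exceptions import ValidationError


def clean_url(value, default_protocol="http", protocols=None):
    # Protocols we accept: the default plus any explicitly allowed ones.
    allowed = set([default_protocol]) | set(protocols or [])
    parts = value.split("://")
    if len(parts) == 1:
        # No protocol given, so prepend the default one.
        return "%s://%s" % (default_protocol, value)
    if len(parts) == 2:
        protocol = parts[0]
        if protocol not in allowed:
            raise ValidationError(
                "The protocol '%s' is not accepted." % protocol)
        return value
    raise ValidationError("'%s' is not a validly formed url." % value)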
There are a couple of ways I could have done this better, and I probably will. One improvement would be to take just one parameter, protocols, which is checked to make sure it has at least one element. With that, when a url has no protocol specified, the first element is used as the default.
This would be a little cleaner.
This example would allow for http, https, ssh, spdy, and mailto; anything else would error out.
facebook_page = URLField(default_protocol="http", protocols=["https","ssh","spdy","mailto"])
The way I could improve this would be:
facebook_page = URLField(protocols=["http","https","ssh","spdy","mailto"])
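Inside the field, handling that single parameter might look something like this (again just a sketch of the idea, not the actual code; I subclass CharField here only for illustration):
from django.db import models


class URLField(models.CharField):
    def __init__(self, protocols=None, *args, **kwargs):
        # Fall back to a sane default if nothing is passed in.
        self.protocols = protocols or ["http"]
        # The first element doubles as the default protocol.
        self.default_protocol = self.protocols[0]
        super(URLField, self).__init__(*args, **kwargs)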
Dec 21, 2011
I was looking for a nice progress bar at work today. Rather than just printing "Waiting 30 seconds…" and having the script appear to do nothing, I wanted to show a progress bar.
I found a progress bar from Corey Goldberg.
I made a couple of changes and uploaded them to my GitHub account.
newPythonProgressBar [deadlink]
Using this progress bar is very easy.
# To Setup
from progress_bar import ProgressBar
import sys
import time

def updateBar(step):
    p.update_time(step)
    sys.stdout.write("%s\r" % p)
    sys.stdout.flush()

#
# to call
#
wait_time = 100 # seconds
p = ProgressBar(wait_time)
p.empty_char = "."
p.unit = "^"
for step in range(wait_time+1):
    updateBar(step)
    time.sleep(1)
It will look like this when you use it
[###...............7%..................] 7^/100^