## Optimising Python dictionary access code

Question:

I've profiled my Python program to death, and there is one function that is slowing everything down. It uses Python dictionaries heavily, so I may not have used them in the best way. If I can't get it running faster, I will have to re-write it in C++, so is there anyone who can help me optimise it in Python?

I hope I've given the right sort of explanation, and that you can make some sense of my code! Thanks in advance for any help.

My code:

This is the offending function, profiled using line_profiler and kernprof. I'm running Python 2.7.

I'm particularly puzzled by things like lines 363, 389 and 405, where an `if` statement with a comparison of two variables seems to take an inordinate amount of time.

I've considered using NumPy (as it does sparse matrices) but I don't think it's appropriate because: (1) I'm not indexing my matrix using integers (I'm using object instances); and (2) I'm not storing simple data types in the matrix (I'm storing tuples of a float and an object instance). But I'm willing to be persuaded about NumPy. If anyone knows about NumPy's sparse matrix performance vs. Python's hash tables, I'd be interested.

Sorry I haven't given a simple example that you can run, but this function is tied up in a much larger project and I couldn't work out how to set up a simple example to test it, without giving you half of my code base!

``````Timer unit: 3.33366e-10 s
File: routing_distances.py
Function: propagate_distances_node at line 328
Total time: 807.234 s

Line #   Hits         Time  Per Hit   % Time  Line Contents
328                                               @profile
329                                               def propagate_distances_node(self, node_a, cutoff_distance=200):
330
331                                                   # a makes sure its immediate neighbours are correctly in its distance table
332                                                   # because its immediate neighbours may change as binds/folding change
333    737753   3733642341   5060.8      0.2          for (node_b, neighbour_distance_b_a) in self.neighbours[node_a].iteritems():
334    512120   2077788924   4057.2      0.1              use_neighbour_link = False
335
336    512120   2465798454   4814.9      0.1              if(node_b not in self.node_distances[node_a]): # a doesn't know distance to b
337     15857     66075687   4167.0      0.0                  use_neighbour_link = True
338                                                       else: # a does know distance to b
339    496263   2390534838   4817.1      0.1                  (node_distance_b_a, next_node) = self.node_distances[node_a][node_b]
340    496263   2058112872   4147.2      0.1                  if(node_distance_b_a > neighbour_distance_b_a): # neighbour distance is shorter
341        81       331794   4096.2      0.0                      use_neighbour_link = True
342    496182   2665644192   5372.3      0.1                  elif((None == next_node) and (float('+inf') == neighbour_distance_b_a)): # direct route that has just broken
343        75       313623   4181.6      0.0                      use_neighbour_link = True
344
345    512120   1992514932   3890.7      0.1              if(use_neighbour_link):
346     16013     78149007   4880.3      0.0                  self.node_distances[node_a][node_b] = (neighbour_distance_b_a, None)
347     16013     83489949   5213.9      0.0                  self.nodes_changed.add(node_a)
348
349                                                           ## Affinity distances update
350     16013     86020794   5371.9      0.0                  if((node_a.type == Atom.BINDING_SITE) and (node_b.type == Atom.BINDING_SITE)):
351       164      3950487  24088.3      0.0                      self.add_affinityDistance(node_a, node_b, self.chemistry.affinity(node_a.data, node_b.data))
352
353                                                   # a sends its table to all its immediate neighbours
354    737753   3549685140   4811.5      0.1          for (node_b, neighbour_distance_b_a) in self.neighbours[node_a].iteritems():
355    512120   2129343210   4157.9      0.1              node_b_changed = False
356
357                                                       # b integrates a's distance table with its own
358    512120   2203821081   4303.3      0.1              node_b_chemical = node_b.chemical
359    512120   2409257898   4704.5      0.1              node_b_distances = node_b_chemical.node_distances[node_b]
360
361                                                       # For all b's routes (to c) that go to a first, update their distances
362  41756882 183992040153   4406.3      7.6              for node_c, (distance_b_c, node_after_b) in node_b_distances.iteritems(): # Think it's ok to modify items while iterating over them (just not insert/delete) (seems to work ok)
363  41244762 172425596985   4180.5      7.1                  if(node_after_b == node_a):
364
365  16673654  64255631616   3853.7      2.7                      try:
366  16673654  88781802534   5324.7      3.7                          distance_b_a_c = neighbour_distance_b_a + self.node_distances[node_a][node_c][0]
367    187083    929898684   4970.5      0.0                      except KeyError:
368    187083   1056787479   5648.8      0.0                          distance_b_a_c = float('+inf')
369
370  16673654  69374705256   4160.7      2.9                      if(distance_b_c != distance_b_a_c): # a's distance to c has changed
371    710083   3136751361   4417.4      0.1                          node_b_distances[node_c] = (distance_b_a_c, node_a)
372    710083   2848845276   4012.0      0.1                          node_b_changed = True
373
374                                                                   ## Affinity distances update
375    710083   3484577241   4907.3      0.1                          if((node_b.type == Atom.BINDING_SITE) and (node_c.type == Atom.BINDING_SITE)):
376     99592   1591029009  15975.5      0.1                              node_b_chemical.add_affinityDistance(node_b, node_c, self.chemistry.affinity(node_b.data, node_c.data))
377
378                                                               # If distance got longer, then ask b's neighbours to update
379                                                               ## TODO: document this!
380  16673654  70998570837   4258.1      2.9                      if(distance_b_a_c > distance_b_c):
381                                                                   #for (node, neighbour_distance) in node_b_chemical.neighbours[node_b].iteritems():
382   1702852   7413182064   4353.4      0.3                          for node in node_b_chemical.neighbours[node_b]:
383   1204903   5912053272   4906.7      0.2                              node.chemical.nodes_changed.add(node)
384
385                                                       # Look for routes from a to c that are quicker than ones b knows already
386  42076729 184216680432   4378.1      7.6              for node_c, (distance_a_c, node_after_a) in self.node_distances[node_a].iteritems():
387
388  41564609 171150289218   4117.7      7.1                  node_b_update = False
389  41564609 172040284089   4139.1      7.1                  if(node_c == node_b): # a-b path
390    512120   2040112548   3983.7      0.1                      pass
391  41052489 169406668962   4126.6      7.0                  elif(node_after_a == node_b): # a-b-a-b path
392  16251407  63918804600   3933.1      2.6                      pass
393  24801082 101577038778   4095.7      4.2                  elif(node_c in node_b_distances): # b can already get to c
394  24004846 103404357180   4307.6      4.3                      (distance_b_c, node_after_b) = node_b_distances[node_c]
395  24004846 102717271836   4279.0      4.2                      if(node_after_b != node_a): # b doesn't already go to a first
396   7518275  31858204500   4237.4      1.3                          distance_b_a_c = neighbour_distance_b_a + distance_a_c
397   7518275  33470022717   4451.8      1.4                          if(distance_b_a_c < distance_b_c): # quicker to go via a
398    225357    956440656   4244.1      0.0                              node_b_update = True
399                                                           else: # b can't already get to c
400    796236   3415455549   4289.5      0.1                      distance_b_a_c = neighbour_distance_b_a + distance_a_c
401    796236   3412145520   4285.3      0.1                      if(distance_b_a_c < cutoff_distance): # not too for to go
402    593352   2514800052   4238.3      0.1                          node_b_update = True
403
404                                                           ## Affinity distances update
405  41564609 164585250189   3959.7      6.8                  if node_b_update:
406    818709   3933555120   4804.6      0.2                      node_b_distances[node_c] = (distance_b_a_c, node_a)
407    818709   4151464335   5070.7      0.2                      if((node_b.type == Atom.BINDING_SITE) and (node_c.type == Atom.BINDING_SITE)):
408    104293   1704446289  16342.9      0.1                          node_b_chemical.add_affinityDistance(node_b, node_c, self.chemistry.affinity(node_b.data, node_c.data))
409    818709   3557529531   4345.3      0.1                      node_b_changed = True
410
411                                                       # If any of node b's rows have exceeded the cutoff distance, then remove them
412  42350234 197075504439   4653.5      8.1              for node_c, (distance_b_c, node_after_b) in node_b_distances.items(): # Can't use iteritems() here, as deleting from the dictionary
413  41838114 180297579789   4309.4      7.4                  if(distance_b_c > cutoff_distance):
414    206296    894881754   4337.9      0.0                      del node_b_distances[node_c]
415    206296    860508045   4171.2      0.0                      node_b_changed = True
416
417                                                               ## Affinity distances update
418    206296   4698692217  22776.5      0.2                      node_b_chemical.del_affinityDistance(node_b, node_c)
419
420                                                       # If we've modified node_b's distance table, tell its chemical to update accordingly
421    512120   2130466347   4160.1      0.1              if(node_b_changed):
422    217858   1201064454   5513.1      0.0                  node_b_chemical.nodes_changed.add(node_b)
423
424                                                   # Remove any neighbours that have infinite distance (have just unbound)
425                                                   ## TODO: not sure what difference it makes to do this here rather than above (after updating self.node_distances for neighbours)
426                                                   ##       but doing it above seems to break the walker's movement
427    737753   3830386968   5192.0      0.2          for (node_b, neighbour_distance_b_a) in self.neighbours[node_a].items(): # Can't use iteritems() here, as deleting from the dictionary
428    512120   2249770068   4393.1      0.1              if(neighbour_distance_b_a > cutoff_distance):
429       150       747747   4985.0      0.0                  del self.neighbours[node_a][node_b]
430
431                                                           ## Affinity distances update
432       150      2148813  14325.4      0.0                  self.del_affinityDistance(node_a, node_b)
``````

Explanation of my code:

This function maintains a sparse distance matrix representing the network distance (sum of edge weights on the shortest path) between nodes in a (very big) network. Working with the complete table and using the Floyd-Warshall algorithm would be very slow. (I tried this first, and it was orders of magnitude slower than the current version.) So my code uses a sparse matrix to represent a thresholded version of the full distance matrix (any paths with a distance greater than 200 units are ignored). The network topology changes over time, so this distance matrix needs updating over time. To do this, I am using a rough implementation of a distance-vector routing protocol: each node in the network knows the distance to each other node and the next node on the path. When a topology change happens, the node(s) associated with this change update their distance table(s) accordingly, and tell their immediate neighbours. The information spreads through the network by nodes sending their distance tables to their neighbours, who update their distance tables and spread them to their neighbours.
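One relaxation step of a distance-vector protocol like this can be sketched in a few lines. This is an illustrative toy, not the actual `propagate_distances_node()`; `tables` and `neighbours` are hypothetical stand-ins that mirror the dictionary structures described below:

```python
def relax_from(tables, neighbours, a):
    """One distance-vector relaxation step from node a.

    tables[x] maps node -> (distance, next_hop);
    neighbours[x] maps node -> direct edge weight.
    Returns the set of neighbours whose tables changed.
    """
    changed = set()
    for b, w_ab in neighbours[a].items():
        for c, (d_ac, _) in tables[a].items():
            if c == b:
                continue                      # skip the trivial a-b entry
            candidate = w_ab + d_ac           # route b -> a -> c
            if candidate < tables[b].get(c, (float('inf'), None))[0]:
                tables[b][c] = (candidate, a) # b now routes to c via a
                changed.add(b)
    return changed
```

Repeating this from every changed node until nothing changes is what drives the convergence described further down.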

There is an object representing the distance matrix: `self.node_distances`. This is a dictionary mapping nodes to routing tables. A node is an object that I've defined. A routing table is a dictionary mapping nodes to tuples of (distance, next_node). Distance is the graph distance from node_a to node_b, and next_node is the neighbour of node_a that you must go to first, on the path between node_a and node_b. A next_node of None indicates that node_a and node_b are graph neighbours. For example, a sample of a distance matrix could be:

``````self.node_distances = { node_1 : { node_2 : (2.0,  None),
                                   node_3 : (5.7,  node_2),
                                   node_5 : (22.9, node_2) },
                        node_2 : { node_1 : (2.0,  None),
                                   node_3 : (3.7,  None),
                                   node_5 : (20.9, node_7)},
                        ...etc...
``````

Because of topology changes, two nodes that were far apart (or not connected at all) can become close. When this happens, entries are added to this matrix. Because of the thresholding, two nodes can become too far apart to care about. When this happens, entries are deleted from this matrix.

The `self.neighbours` matrix is similar to `self.node_distances`, but contains information about the direct links (edges) in the network. `self.neighbours` is continually being modified externally to this function, by the chemical reaction. This is where the network topology changes come from.

The actual function that I'm having problems with: `propagate_distances_node()` performs one step of the distance-vector routing protocol. Given a node, `node_a`, the function makes sure that `node_a`'s neighbours are correctly in the distance matrix (topology changes). The function then sends `node_a`'s routing table to all of `node_a`'s immediate neighbours in the network. It integrates `node_a`'s routing table with each neighbour's own routing table.

In the rest of my program, the `propagate_distances_node()` function is called repeatedly, until the distance matrix converges. A set, `self.nodes_changed`, is maintained, of the nodes that have changed their routing table since they were last updated. On every iteration of my algorithm, a random subset of these nodes are chosen and `propagate_distances_node()` is called on them. This means the nodes spread their routing tables asynchronously and stochastically. This algorithm converges on the true distance matrix when the set `self.nodes_changed` becomes empty.

The "affinity distances" parts (`add_affinityDistance` and `del_affinityDistance`) are a cache of a (small) sub-matrix of the distance matrix, that is used by a different part of the program.

The reason I'm doing this is that I'm simulating computational analogues of chemicals participating in reactions, as part of my PhD. A "chemical" is a graph of "atoms" (nodes in the graph). Two chemicals binding together is simulated as their two graphs being joined by new edges. A chemical reaction happens (by a complicated process that isn't relevant here), changing the topology of the graph. But what happens in the reaction depends on how far apart the different atoms are that make up the chemicals. So for each atom in the simulation, I want to know which other atoms it is close to. A sparse, thresholded distance matrix is the most efficient way to store this information. Since the topology of the network changes as the reaction happens, I need to update the matrix. A distance-vector routing protocol is the fastest way I could come up with of doing this. I don't need a more complicated routing protocol, because things like routing loops don't happen in my particular application (because of how my chemicals are structured). The reason I'm doing it stochastically is so that I can interleave the chemical reaction processes with the distance spreading, and simulate a chemical gradually changing shape over time as the reaction happens (rather than changing shape instantly).

The `self` in this function is an object representing a chemical. The nodes in `self.node_distances.keys()` are the atoms that make up the chemical. The nodes in `self.node_distances[node_x].keys()` are nodes from the chemical and potentially nodes from any chemicals that the chemical is bound to (and reacting with).

Update:

I tried replacing every instance of `node_x == node_y` with `node_x is node_y` (as per @Sven Marnach's comment), and at first it appeared to slow things down: my original profile took 807.234 s, but with the modification it took 895.895 s, which I wasn't expecting. It turned out I was doing the profiling wrong: line-by-line profiling (on my code) has far too much variance, and that difference of ~90 seconds was all in the noise. When profiling properly with cProfile, `is` is definitely faster than `==`: my code with `==` took 34.394 s, but with `is` it took 33.535 s (which I can confirm is outside the noise).
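For reference, the difference is easy to poke at in a standalone timing sketch (illustrative only; absolute numbers will vary, and the lambda call overhead dominates small timings):

```python
import timeit

class Node(object):
    # No __eq__ defined: `==` falls back to identity anyway, but it still
    # goes through the rich-comparison machinery, which `is` skips entirely.
    pass

a, b = Node(), Node()

t_eq = timeit.timeit(lambda: a == b, number=100000)  # comparison via __eq__ protocol
t_is = timeit.timeit(lambda: a is b, number=100000)  # raw pointer comparison
```

Both comparisons return `False` here; only the machinery they go through differs.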

Update: Existing libraries

I'm unsure as to whether there will be an existing library that can do what I want, since my requirements are unusual: I need to compute the shortest-path lengths between all pairs of nodes in a weighted, undirected graph. I only care about path lengths that are lower than a threshold value. After computing the path lengths, I make a small change to the network topology (adding or removing an edge), and then I want to re-compute the path lengths. My graphs are huge compared to the threshold value (from a given node, most of the graph is further away than the threshold), and so the topology changes don't affect most of the shortest-path lengths. This is why I am using the routing algorithm: because this spreads topology-change information through the graph structure, so I can stop spreading it when it's gone further than the threshold. i.e., I don't need to re-compute all the paths each time. I can use the previous path information (from before the topology change) to speed up the calculation. This is why I think my algorithm will be faster than any library implementations of shortest-path algorithms. I've never seen routing algorithms used outside of actually routing packets through physical networks (but if anyone has, then I'd be interested).

NetworkX was suggested by @Thomas K. It has lots of algorithms for calculating shortest paths. It has an algorithm for computing the all-pairs shortest path lengths with a cutoff (which is what I want), but it only works on unweighted graphs (mine are weighted). Unfortunately, its algorithms for weighted graphs don't allow the use of a cutoff (which might make them slow for my graphs). And none of its algorithms appear to support the use of pre-calculated paths on a very similar network (i.e. the routing stuff).

igraph is another graph library that I know of, but looking at its documentation, I can't find anything about shortest-paths. But I might have missed it - its documentation doesn't seem very comprehensive.

NumPy might be possible, thanks to @9000's comment. I can store my sparse matrix in a NumPy array if I assign a unique integer to each instance of my nodes. I can then index a NumPy array with integers instead of node instances. I will also need two NumPy arrays: one for the distances and one for the "next_node" references. This might be faster than using Python dictionaries (I don't know yet).
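A sketch of what that integer-indexed layout could look like (all names here are hypothetical; the part that needs care is the mapping from node instances to dense integer ids):

```python
import numpy as np

class Node(object):
    """Stand-in for the question's atom/node instances."""
    pass

node_ids = {}  # node instance -> dense integer index

def node_index(node):
    """Assign each node a stable integer id on first sight."""
    return node_ids.setdefault(node, len(node_ids))

N_MAX = 100  # illustrative capacity

# Two parallel arrays replace the dict-of-dicts: one for distances
# (inf = "no path below the threshold"), one for next-hop ids
# (-1 = "direct neighbour", mirroring next_node = None).
distances = np.full((N_MAX, N_MAX), np.inf)
next_hop = np.full((N_MAX, N_MAX), -1, dtype=np.int64)

a, b = Node(), Node()
i, j = node_index(a), node_index(b)
distances[i, j] = distances[j, i] = 2.0  # a and b are direct neighbours
```

Whether this beats the dictionaries depends on how sparse the tables stay; a dense N_MAX x N_MAX array trades memory for indexing speed.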

Does anyone know of any other libraries that might be useful?

Update: Memory usage

I'm running Windows (XP), so here is some info about memory usage, from Process Explorer. The CPU usage is at 50% because I have a dual-core machine.

My program doesn't run out of RAM and start hitting the swap. You can see that from the numbers, and from the IO graph not having any activity. The spikes on the IO graph are where the program prints to the screen to say how it's doing.

However, my program does keep using up more and more RAM over time, which is probably not a good thing (but it's not using up much RAM overall, which is why I didn't notice the increase until now).

And the distance between the spikes on the IO graph increases over time. This is bad - my program prints to the screen every 100,000 iterations, so that means that each iteration is taking longer to execute as time goes on... I've confirmed this by doing a long run of my program and measuring the time between print statements (the time between each 10,000 iterations of the program). This should be constant, but as you can see from the graph, it increases linearly... so something's up there. (The noise on this graph is because my program uses lots of random numbers, so the time for each iteration varies.)

After my program's been running for a long time, the memory usage shown in Process Explorer confirms that it's definitely not running out of RAM.

`node_after_b == node_a` will try to call `node_after_b.__eq__(node_a)`:

``````>>> class B(object):
...     def __eq__(self, other):
...         print "B.__eq__()"
...         return False
...
>>> class A(object):
...     def __eq__(self, other):
...         print "A.__eq__()"
...         return False
...
>>> a = A()
>>> b = B()
>>> a == b
A.__eq__()
False
>>> b == a
B.__eq__()
False
>>>
``````

Try to override `Node.__eq__()` with an optimized version before resorting to C.

UPDATE

I made this little experiment (python 2.6.6):

``````#!/usr/bin/env python
# test.py
class A(object):
    def __init__(self, id):
        self.id = id

class B(A):
    def __eq__(self, other):
        return self.id == other.id

@profile
def main():
    list_a = []
    list_b = []
    for x in range(100000):
        list_a.append(A(x))
        list_b.append(B(x))

    ob_a = A(1)
    ob_b = B(1)
    for ob in list_a:
        if ob == ob_a:
            x = True
        if ob is ob_a:
            x = True
        if ob.id == ob_a.id:
            x = True
        if ob.id == 1:
            x = True
    for ob in list_b:
        if ob == ob_b:
            x = True
        if ob is ob_b:
            x = True
        if ob.id == ob_b.id:
            x = True
        if ob.id == 1:
            x = True

if __name__ == '__main__':
    main()
``````

Results:

``````Timer unit: 1e-06 s

File: test.py
Function: main at line 10
Total time: 5.52964 s

Line #      Hits         Time  Per Hit % Time  Line Contents
==============================================================
10                                           @profile
11                                           def main():
12         1            5      5.0      0.0      list_a = []
13         1            3      3.0      0.0      list_b = []
14    100001       360677      3.6      6.5      for x in range(100000):
15    100000       763593      7.6     13.8          list_a.append(A(x))
16    100000       924822      9.2     16.7          list_b.append(B(x))
17
18         1           14     14.0      0.0      ob_a = A(1)
19         1            5      5.0      0.0      ob_b = B(1)
20    100001       500454      5.0      9.1      for ob in list_a:
21    100000       267252      2.7      4.8          if ob == ob_a:
22                                                       x = True
23    100000       259075      2.6      4.7          if ob is ob_a:
24                                                       x = True
25    100000       539683      5.4      9.8          if ob.id == ob_a.id:
26         1            3      3.0      0.0              x = True
27    100000       271519      2.7      4.9          if ob.id == 1:
28         1            3      3.0      0.0              x = True
29    100001       296736      3.0      5.4      for ob in list_b:
30    100000       472204      4.7      8.5          if ob == ob_b:
31         1            4      4.0      0.0              x = True
32    100000       283165      2.8      5.1          if ob is ob_b:
33                                                       x = True
34    100000       298839      3.0      5.4          if ob.id == ob_b.id:
35         1            3      3.0      0.0              x = True
36    100000       291576      2.9      5.3          if ob.id == 1:
37         1            3      3.0      0.0              x = True
``````

I was very surprised:

• "dot" access (ob.property) seems to be very expensive (line 25 versus line 27).
• there was not much difference between is and '==', at least for simple objects

Then I tried with more complex objects and results are consistent with the first experiment.
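The dot-access cost observed above points at a standard CPython micro-optimization: hoist attribute lookups out of hot loops into local variables. A small illustrative sketch (the names are made up, not from the questioner's code):

```python
class A(object):
    def __init__(self, id):
        self.id = id

def count_matches(objects, target):
    """Baseline: two attribute lookups on every iteration."""
    n = 0
    for ob in objects:
        if ob.id == target.id:
            n += 1
    return n

def count_matches_hoisted(objects, target):
    """Same logic, but target.id is looked up once, outside the loop."""
    target_id = target.id
    n = 0
    for ob in objects:
        if ob.id == target_id:   # one attribute lookup per iteration
            n += 1
    return n
```

In the profiled function, the repeated `self.node_distances[node_a]` and `node_b_chemical.neighbours[node_b]` lookups are candidates for the same treatment.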

Are you swapping a lot? If your dataset is so large that it does not fit in available RAM, I guess you may experience some kind of I/O contention related to virtual-memory fetches.

Are you running Linux? If so, could you post a vmstat of your machine while running your program? Send us the output of something like:

``````vmstat 10 100
``````

Good luck!

I suggested playing with `sys.setcheckinterval` and enabling/disabling the GC. The rationale is that for this particular case (a huge number of instances) the garbage collector's periodic check is relatively expensive, and its default interval fires far too often.

Yes, I had previously played with `sys.setcheckinterval`. I changed it to 1000 (from its default of 100), but it didn't make any measurable difference. Disabling garbage collection has helped - thanks. This has been the biggest speedup so far, saving about 20% (171 minutes for the whole run, down to 135 minutes). I'm not sure what the error bars are on that, but it must be a statistically significant improvement. – Adam Nellis Feb 9 at 15:10

My guess:

I think the Python GC periodically scans every tracked instance; since you are traversing these huge in-memory structures, in your particular case the GC's default collection frequency is far too high - a huge waste. – Yours Truly Feb 10 at 2:06
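The pattern discussed above can be sketched minimally (`build_tables` here is an illustrative stand-in for any allocation-heavy hot loop, not the questioner's code). Note that `gc.disable()` only switches off the cyclic collector; ordinary reference counting still frees non-cyclic objects immediately:

```python
import gc

def build_tables(n):
    """Allocation-heavy loop of the kind the profiled function runs:
    lots of short-lived container objects that the cyclic GC would
    otherwise keep re-scanning."""
    gc.disable()                                   # pause the cyclic collector
    try:
        tables = []
        for i in range(n):
            tables.append({i: (float(i), None)})   # many small dicts and tuples
        return tables
    finally:
        gc.enable()                                # always restore it
```

The try/finally ensures the collector is re-enabled even if the loop raises, so the rest of the program still reclaims reference cycles.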

## Fast tensor rotation with NumPy

At the heart of an application (written in Python and using NumPy) I need to rotate a 4th order tensor. Actually, I need to rotate a lot of tensors many times and this is my bottleneck. My naive implementation (below) involving eight nested loops seems to be quite slow, but I cannot see a way to leverage NumPy's matrix operations and, hopefully, speed things up. I've a feeling I should be using `np.tensordot`, but I don't see how.

Mathematically, the elements of the rotated tensor T' are given by T'_ijkl = Σ_abcd g_ia g_jb g_kc g_ld T_abcd, with the sum running over the repeated indices a, b, c, d on the right-hand side. T and Tprime are 3×3×3×3 NumPy arrays and the rotation matrix g is a 3×3 NumPy array. My slow implementation (taking ~0.04 seconds per call) is below.

``````#!/usr/bin/env python

import numpy as np

def rotT(T, g):
    Tprime = np.zeros((3,3,3,3))
    for i in range(3):
        for j in range(3):
            for k in range(3):
                for l in range(3):
                    for ii in range(3):
                        for jj in range(3):
                            for kk in range(3):
                                for ll in range(3):
                                    gg = g[ii,i]*g[jj,j]*g[kk,k]*g[ll,l]
                                    Tprime[i,j,k,l] = Tprime[i,j,k,l] + \
                                                      gg*T[ii,jj,kk,ll]
    return Tprime

if __name__ == "__main__":

    T = np.array([[[[  4.66533067e+01,  5.84985000e-02, -5.37671310e-01],
                    [  5.84985000e-02,  1.56722231e+01,  2.32831900e-02],
                    [ -5.37671310e-01,  2.32831900e-02,  1.33399259e+01]],
                   [[  4.60051700e-02,  1.54658176e+01,  2.19568200e-02],
                    [  1.54658176e+01, -5.18223500e-02, -1.52814920e-01],
                    [  2.19568200e-02, -1.52814920e-01, -2.43874100e-02]],
                   [[ -5.35577630e-01,  1.95558600e-02,  1.31108757e+01],
                    [  1.95558600e-02, -1.51342210e-01, -6.67615000e-03],
                    [  1.31108757e+01, -6.67615000e-03,  6.90486240e-01]]],
                  [[[  4.60051700e-02,  1.54658176e+01,  2.19568200e-02],
                    [  1.54658176e+01, -5.18223500e-02, -1.52814920e-01],
                    [  2.19568200e-02, -1.52814920e-01, -2.43874100e-02]],
                   [[  1.57414726e+01, -3.86167500e-02, -1.55971950e-01],
                    [ -3.86167500e-02,  4.65601977e+01, -3.57741000e-02],
                    [ -1.55971950e-01, -3.57741000e-02,  1.34215636e+01]],
                   [[  2.58256300e-02, -1.49072770e-01, -7.38843000e-03],
                    [ -1.49072770e-01, -3.63410500e-02,  1.32039847e+01],
                    [ -7.38843000e-03,  1.32039847e+01,  1.38172700e-02]]],
                  [[[ -5.35577630e-01,  1.95558600e-02,  1.31108757e+01],
                    [  1.95558600e-02, -1.51342210e-01, -6.67615000e-03],
                    [  1.31108757e+01, -6.67615000e-03,  6.90486240e-01]],
                   [[  2.58256300e-02, -1.49072770e-01, -7.38843000e-03],
                    [ -1.49072770e-01, -3.63410500e-02,  1.32039847e+01],
                    [ -7.38843000e-03,  1.32039847e+01,  1.38172700e-02]],
                   [[  1.33639532e+01, -1.26331100e-02,  6.84650400e-01],
                    [ -1.26331100e-02,  1.34222177e+01,  1.67851800e-02],
                    [  6.84650400e-01,  1.67851800e-02,  4.89151396e+01]]]])

    g = np.array([[ 0.79389393,  0.54184237,  0.27593346],
                  [-0.59925749,  0.62028664,  0.50609776],
                  [ 0.10306737, -0.56714313,  0.8171449 ]])

    for i in range(100):
        Tprime = rotT(T,g)
``````

Is there a way to make this go faster? Making the code generalise to other ranks of tensor would be useful, but is less important.

To use `tensordot`, compute the outer product of the `g` tensors:

``````def rotT(T, g):
    gg = np.outer(g, g)
    gggg = np.outer(gg, gg).reshape(4 * g.shape)
    axes = ((0, 2, 4, 6), (0, 1, 2, 3))
    return np.tensordot(gggg, T, axes)
``````

On my system, this is around seven times faster than Sven's solution. If the `g` tensor doesn't change often, you can also cache the `gggg` tensor. If you do this and turn on some micro-optimizations (inlining the `tensordot` code, no checks, no generic shapes), you can still make it two times faster:

``````def rotT(T, gggg):
    return np.dot(gggg.transpose((1, 3, 5, 7, 0, 2, 4, 6)).reshape((81, 81)),
                  T.reshape(81, 1)).reshape((3, 3, 3, 3))
``````

Results of `timeit` on my home laptop (500 iterations):

``````Your original code: 19.471129179
Sven's code: 0.718412876129
My first code: 0.118047952652
My second code: 0.0690279006958
``````

The numbers on my work machine are:

``````Your original code: 9.77922987938
Sven's code: 0.137110948563
My first code: 0.0569641590118
My second code: 0.0308079719543
``````
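Another option worth noting: `np.einsum` (available since NumPy 1.6) expresses the whole contraction as a single call. This is an unbenchmarked sketch, not one of the timed versions above; the index string follows the loop implementation's convention, where the *first* index of `g` is the contracted one (`g[ii,i]` etc.):

```python
import numpy as np

def rotT_einsum(T, g):
    # T'_ijkl = sum_{a,b,c,d} g[a,i] * g[b,j] * g[c,k] * g[d,l] * T[a,b,c,d]
    return np.einsum('ai,bj,ck,dl,abcd->ijkl', g, g, g, g, T)
```

It also generalises to other tensor ranks just by editing the index string, which addresses the secondary wish in the question.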

## My quicksort sorts larger numbers faster? (Quick Python Test Code)

Hey guys, I was messing around with Python to practice my sorting algorithms, and found out something interesting.

I have three different pieces of data:
x = number of numbers to sort
y = range the numbers are in (all randomly generated ints)
z = total time taken to sort

When x = 100000 and y = (0, 100000): z = 0.94182094911 sec
When x = 100000 and y = (0, 100): z = 12.4218382537 sec
When x = 100000 and y = (0, 10): z = 110.267447809 sec

Any ideas?

Code:

``````import time
import random
import sys

#-----Function definitions

def quickSort(array): #random pivot location quicksort. uses extra memory.
    smaller = []
    greater = []
    if len(array) <= 1:
        return array
    pivotVal = array[random.randint(0, len(array)-1)]
    array.remove(pivotVal)
    for items in array:
        if items <= pivotVal:
            smaller.append(items)
        else:
            greater.append(items)
    return concat(quickSort(smaller), pivotVal, quickSort(greater))

def concat(before, pivot, after):
    new = []
    for items in before:
        new.append(items)
    new.append(pivot)
    for things in after:
        new.append(things)
    return new

#-----Variable definitions
list = []
iter = 0
sys.setrecursionlimit(20000)
start = time.clock() #start the clock

#-----Generate the list of numbers to sort
while(iter < 100000):
    list.append(random.randint(0,10))  #modify this to change sorting speed
    iter = iter + 1
timetogenerate = time.clock() - start #current timer - last timer snapshot

#-----Sort the list of numbers
list = quickSort(list)
timetosort = time.clock() - timetogenerate #current timer - last timer snapshot

#-----Write the list of numbers
file = open("C:\output.txt", 'w')
for items in list:
    file.write(str(items))
    file.write("\n")
file.close()
timetowrite = time.clock() - timetosort #current timer - last timer snapshot

#-----Print info
print "time to start: " + str(start)
print "time to generate: " + str(timetogenerate)
print "time to sort: " + str(timetosort)
print "time to write: " + str(timetowrite)
totaltime = timetogenerate + timetosort + start
print "total time: " + str(totaltime)
``````

-------------------revised NEW code----------------------------

``````def quickSort(array): #random pivot location quicksort. uses extra memory.
    smaller = []
    greater = []
    equal = []
    if len(array) <= 1:
        return array
    pivotVal = array[random.randint(0, len(array)-1)]
    array.remove(pivotVal)
    equal.append(pivotVal)
    for items in array:
        if items < pivotVal:
            smaller.append(items)
        elif items > pivotVal:
            greater.append(items)
        else:
            equal.append(items)
    return concat(quickSort(smaller), equal, quickSort(greater))

def concat(before, equal, after):
    new = []
    for items in before:
        new.append(items)
    for items in equal:
        new.append(items)
    for items in after:
        new.append(items)
    return new
``````

I think this has to do with the choice of pivot. Depending on how your partition step works, your algorithm can degenerate to quadratic behavior when confronted with many duplicate values. For example, suppose that you're trying to quicksort this stream:

`````` [0 0 0 0 0 0 0 0 0 0 0 0 0]
``````

If you aren't careful with how you do the partitioning step, this can degenerate quickly. For example, suppose you pick your pivot as the first 0, leaving you with the array

`````` [0 0 0 0 0 0 0 0 0 0 0 0]
``````

to partition. Your algorithm might say that the smaller values are the array

`````` [0 0 0 0 0 0 0 0 0 0 0 0]
``````

And the larger values are the array

`````` []
``````

This is the case that causes quicksort to degenerate to O(n²), since each recursive call only shrinks the size of the input by one (namely, by pulling off the pivot element).

I noticed that in your code, your partitioning step does indeed do this:

``````for items in array:
    if items <= pivotVal:
        smaller.append(items)
    else:
        greater.append(items)
``````

Given a stream that's a whole bunch of copies of the same element, this will put all of them into one array to recursively sort.

Of course, this seems like a ridiculous case - how is this at all connected to reducing the range of values in the array? - but it actually does come up when you're sorting lots of elements that aren't distinct. In particular, after a few passes of the partitioning, you're likely to group together all equal elements, which will bring you into this case.

For a discussion of how to prevent this from happening, there's a really great talk by Robert Sedgewick and Jon Bentley about how to modify the partition step to work quickly in the presence of duplicate elements. It's connected to Dijkstra's Dutch national flag problem, and their solutions are really clever.

One option that works is to partition the input into three groups - less, equal, and greater. Once you've broken the input up this way, you only need to sort the less and greater groups; the equal groups are already sorted. The above link to the talk shows how to do this more or less in-place, but since you're already using an out-of-place quicksort the fix should be easy. Here's my attempt at it:

``````for items in array:
    if items < pivotVal:
        smaller.append(items)
    elif items == pivotVal:
        equal.append(items)
    else:
        greater.append(items)
``````

I've never written a line of Python in my life, BTW, so this may be totally illegal syntax. But I hope the idea is clear! :-)
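For completeness, here is one way the whole three-way quicksort might look as a self-contained Python function (the naming is mine, and list comprehensions stand in for the explicit append loops above):

```python
import random

def quicksort3(array):
    """Out-of-place quicksort with three-way partitioning.

    All elements equal to the pivot go into their own bucket, so a run
    of duplicates costs one partitioning pass instead of one recursion
    level per element.
    """
    if len(array) <= 1:
        return list(array)
    pivot = array[random.randrange(len(array))]
    smaller = [x for x in array if x < pivot]
    equal = [x for x in array if x == pivot]
    greater = [x for x in array if x > pivot]
    return quicksort3(smaller) + equal + quicksort3(greater)
```

With this version, a list made of a single repeated value is handled in one partitioning pass plus two empty recursive calls, rather than one recursion level per element.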

## Embedding a Low Performance Scripting Language in Python

I have a web-application. As part of this, I need users of the app to be able to write (or copy and paste) very simple scripts to run against their data.

The scripts really can be very simple, and performance is only the most minor issue. An example of the sophistication of script I mean is something like:

``````ratio = 1.2345678
minimum = 10

def convert(money)
return money * ratio
end

if price < minimum
cost = convert(minimum)
else
cost = convert(price)
end
``````

where price and cost are global variables (something I can feed into the environment and access after the computation).

I do, however, need to guarantee some stuff.

1. Any scripts run cannot get access to the environment of Python. They cannot import stuff, call methods I don't explicitly expose for them, read or write files, spawn threads, etc. I need total lockdown.

2. I need to be able to put a hard limit on the number of 'cycles' that a script runs for. Cycles is a general term here: it could be VM instructions if the language is byte-compiled, apply-calls for an eval/apply loop, or just iterations through some central processing loop that runs the script. The details aren't as important as my ability to stop something running after a short time and send an email to the owner saying "your script seems to be doing more than adding a few numbers together - sort it out."

3. It must run on Vanilla unpatched CPython.

So far I've been writing my own DSL for this task. I can do that. But I wondered if I could build on the shoulders of giants. Is there a mini-language available for Python that would do this?

There are plenty of hacky Lisp-variants (Even one I wrote on Github), but I'd prefer something with more non-specialist syntax (more C or Pascal, say), and as I'm considering this as an alternative to coding one myself I'd like something a bit more mature.

Any ideas?

Here is my take on this problem. Requiring that the user scripts run inside vanilla CPython means you either need to write an interpreter for your mini language, or compile it to Python bytecode (or use Python as your source language) and then "sanitize" the bytecode before executing it.

I've gone for a quick example based on the assumption that users can write their scripts in Python, and that the source and bytecode can be sufficiently sanitized through some combination of filtering unsafe syntax from the parse tree and/or removing unsafe opcodes from the bytecode.

The second part of the solution requires that the user script bytecode be periodically interrupted by a watchdog task which will ensure that the user script does not exceed some opcode limit, and for all of this to run on vanilla CPython.

Summary of my attempt, which mostly focuses on the 2nd part of the problem.

• User scripts are written in Python.
• Use byteplay to filter and modify the bytecode.
• Instrument the user's bytecode to insert an opcode counter and calls to a function which context switches to the watchdog task.
• Use greenlet to execute the user's bytecode, with yields switching between the user's script and the watchdog coroutine.
• The watchdog enforces a preset limit on the number of opcodes which can be executed before raising an error.

Hopefully this at least goes in the right direction. I'm interested to hear more about your solution when you arrive at it.

Source code for `lowperf.py`:

``````# std
import ast
import dis
import sys
from pprint import pprint

# vendor
import byteplay
import greenlet

# bytecode snippet to increment our global opcode counter
INCREMENT = [
    (byteplay.LOAD_GLOBAL, '__op_counter'),
    (byteplay.LOAD_CONST, 1),
    (byteplay.BINARY_ADD, None),
    (byteplay.STORE_GLOBAL, '__op_counter'),
]

# bytecode snippet to perform a yield to our watchdog tasklet.
YIELD = [
    (byteplay.LOAD_GLOBAL, '__yield'),
    (byteplay.LOAD_GLOBAL, '__op_counter'),
    (byteplay.CALL_FUNCTION, 1),
    (byteplay.POP_TOP, None),
]

def instrument(orig):
    """
    Instrument bytecode.  We place a call to our yield function before
    jumps and returns.  You could choose alternate places depending on
    your use case.
    """
    line_count = 0
    res = []
    for op, arg in orig.code:
        line_count += 1

        # NOTE: you could put an advanced bytecode filter here.

        # whenever a code block is loaded we must instrument it
        if op == byteplay.LOAD_CONST and isinstance(arg, byteplay.Code):
            code = instrument(arg)
            res.append((op, code))
            continue

        # 'setlineno' opcode is a safe place to increment our global
        # opcode counter.
        if op == byteplay.SetLineno:
            res += INCREMENT
            line_count += 1

        # append the opcode and its argument
        res.append((op, arg))

        # if we're at a jump or return, or we've processed 10 lines of
        # source code, insert a call to our yield function.  you could
        # choose other places to yield more appropriate for your app.
        if op in (byteplay.JUMP_ABSOLUTE, byteplay.RETURN_VALUE) \
                or line_count > 10:
            res += YIELD
            line_count = 0

    # finally, build and return new code object
    return byteplay.Code(res, orig.freevars, orig.args, orig.varargs,
        orig.varkwargs, orig.newlocals, orig.name, orig.filename,
        orig.firstlineno, orig.docstring)

def transform(path):
    """
    Transform the Python source into a form safe to execute and return
    the bytecode.
    """
    data = open(path, 'rb').read()

    # NOTE: you could call ast.parse(data, path) here to get an
    # abstract syntax tree, then filter that tree down before compiling
    # it into bytecode.  i've skipped that step as it is pretty verbose.
    suite = compile(data, path, 'exec')
    orig = byteplay.Code.from_code(suite)
    return instrument(orig)

def execute(path, limit = 40):
    """
    This transforms the user's source code into bytecode, instrumenting
    it, then kicks off the watchdog and user script tasklets.
    """
    code = transform(path)

    def watcher_task(op_count):
        """
        Task which is yielded to by the user script, making sure it doesn't
        use too many resources.
        """
        while 1:
            if op_count > limit:
                raise RuntimeError("script used too many resources")
            op_count = target.switch()

    def user_task(code, yield_func):
        "This is the greenlet task which runs our user's script."
        globals_ = {'__yield': yield_func, '__op_counter': 0}
        eval(code.to_code(), globals_, globals_)

    # create the two tasklets and kick off the user's script
    target = greenlet.greenlet(user_task)
    watcher = greenlet.greenlet(watcher_task)
    target.switch(code, watcher.switch)

execute(sys.argv[1])
``````

Here is a sample user script `user.py`:

``````def otherfunc(b):
    return b * 7

def myfunc(a):
    for i in range(0, 20):
        print i, otherfunc(i + a + 3)

myfunc(2)
``````

Here is a sample run:

``````% python lowperf.py user.py

0 35
1 42
2 49
3 56
4 63
5 70
6 77
7 84
8 91
9 98
10 105
11 112
Traceback (most recent call last):
  File "lowperf.py", line 114, in <module>
    execute(sys.argv[1])
  File "lowperf.py", line 105, in execute
    target.switch(code, watcher.switch)
  File "lowperf.py", line 101, in watcher_task
    raise RuntimeError("script used too many resources")
RuntimeError: script used too many resources
``````
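As an aside, if pulling in byteplay and greenlet feels heavyweight, a much cruder cap is possible on vanilla CPython with `sys.settrace`, counting traced lines instead of opcodes. This is only a sketch (all names are mine), and it guards against runaway loops rather than hostile code, since a malicious script could call `sys.settrace(None)` itself unless the builtin is removed from its environment:

```python
import sys

class CycleLimitExceeded(RuntimeError):
    pass

def run_limited(source, limit, env=None):
    """Exec `source` but abort once more than `limit` lines have executed."""
    counter = {'lines': 0}

    def tracer(frame, event, arg):
        if event == 'line':
            counter['lines'] += 1
            if counter['lines'] > limit:
                raise CycleLimitExceeded("script used too many cycles")
        return tracer

    env = dict(env or {})
    code = compile(source, '<user-script>', 'exec')
    sys.settrace(tracer)   # traces every new frame created from here on
    try:
        exec(code, env)
    finally:
        sys.settrace(None)
    return env

# a small script finishes normally...
env = run_limited("total = sum(range(10))", limit=100)
assert env['total'] == 45

# ...while an infinite loop is cut off
try:
    run_limited("while True:\n    pass", limit=100)
except CycleLimitExceeded:
    pass
```

When the trace function raises, CPython unsets it and lets the exception propagate out of the user's code, which is exactly the "stop and email the owner" hook the question asks for.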

## Why is LuaJIT so good?

This comparison of programming languages shows that LuaJIT has an over tenfold improvement over the normal Lua implementation. Why is the change so big? Is there something specific about Lua that makes it benefit a lot from JIT compilation? Python is dynamically typed and compiled to bytecode as well, so why doesn't PyPy (that has JIT now, I believe) show such a large jump in performance?

As with every performant system, the answer in the end comes down to two things: algorithms and engineering. LuaJIT uses advanced compilation techniques, and it also has a very finely engineered implementation. For example, when the fancy compilation techniques can't handle a piece of code, LuaJIT falls back to a very fast interpreter written in x86 assembly.

LuaJIT gets double points on the engineering aspect, because not only is LuaJIT itself well-engineered, but the Lua language itself has a simpler and more coherent design than Python and JavaScript. This makes it (marginally) easier for an implementation to provide consistently good performance.

## Why don't scripting languages output Unicode to the Windows console?

The Windows console has been Unicode aware for at least a decade and perhaps as far back as Windows NT. However for some reason the major cross-platform scripting languages including Perl and Python only ever output various 8-bit encodings, requiring much trouble to work around. Perl gives a "wide character in print" warning, Python gives a charmap error and quits. Why on earth after all these years do they not just simply call the Win32 -W APIs that output UTF-16 Unicode instead of forcing everything through the ANSI/codepage bottleneck?

Is it just that cross-platform performance is low priority? Is it that the languages use UTF-8 internally and find it too much bother to output UTF-16? Or are the -W APIs inherently broken to such a degree that they can't be used as-is?

UPDATE

It seems that the blame may need to be shared by all parties. I imagined that the scripting languages could just call `wprintf` on Windows and let the OS/runtime worry about things such as redirection. But it turns out that even wprintf on Windows converts wide characters to ANSI and back before printing to the console!

Please let me know if this has been fixed since the bug report link seems broken but my Visual C test code still fails for wprintf and succeeds for WriteConsoleW.

UPDATE 2

Actually you can print UTF-16 to the console from C using `wprintf` but only if you first do `_setmode(_fileno(stdout), _O_U16TEXT)`.

From C you can print UTF-8 to a console whose codepage is set to codepage 65001, however Perl, Python, PHP and Ruby all have bugs which prevent this. Perl and PHP corrupt the output by adding additional blank lines following lines which contain at least one wide character. Ruby has slightly different corrupt output. Python crashes.

The main problem seems to be that it is not possible to use Unicode on Windows using only the standard C library and no platform-dependent or third-party extensions. The languages you mentioned originate from Unix platforms, whose method of implementing Unicode blends well with C (they use normal `char*` strings, the C locale functions, and UTF-8). If you want to do Unicode in C, you more or less have to write everything twice: once using nonstandard Microsoft extensions, and once using the standard C API functions for all other operating systems. While this can be done, it usually doesn't have high priority because it's cumbersome and most scripting language developers either hate or ignore Windows anyway.

At a more technical level, I think the basic assumption that most standard library designers make is that all I/O streams are inherently byte-based at the OS level, which is true for files on all operating systems, and for all streams on Unix-like systems, with the Windows console being the only exception. Thus the architecture of many class libraries and programming-language standard libraries would have to be modified to a great extent to incorporate Windows console I/O.

Another more subjective point is that Microsoft just did not do enough to promote the use of Unicode. The first Windows OS with decent (for its time) Unicode support was Windows NT 3.1, released in 1993, long before Linux and OS X grew Unicode support. Still, the transition to Unicode in those OSes has been much more seamless and unproblematic. Microsoft once again listened to the sales people instead of the engineers, and kept the technically obsolete Windows 9x around until 2001; instead of forcing developers to use a clean Unicode interface, they still ship the broken and now-unnecessary 8-bit API interface, and invite programmers to use it (look at a few of the recent Windows API questions on Stack Overflow; most newbies still use the horrible legacy API!).

When Unicode came out, many people realized it was useful. Unicode started as a pure 16-bit encoding, so it was natural to use 16-bit code units. Microsoft then apparently said "OK, we have this 16-bit encoding, so we have to create a 16-bit API", not realizing that nobody would use it. The Unix luminaries, however, thought "how can we integrate this into the current system in an efficient and backward-compatible way so that people will actually use it?" and subsequently invented UTF-8, which is a brilliant piece of engineering. Just as when Unix was created, the Unix people thought a bit more, needed a bit longer, and had less financial success, but eventually got it right.

I cannot comment on Perl (though I suspect there are more Windows haters in the Perl community than in the Python community), but regarding Python I know that the BDFL (who doesn't like Windows either) has stated that adequate Unicode support on all platforms is a major goal.

## How to install PIL on Mac OSX 10.5.8 for Google App Engine?

I need to get PIL installed locally to test GAE's images api in my local environment.

I grabbed the PIL 1.1.6 installer for Mac, and when I go to select the destination (when installing), I get the error:

``````You cannot install PIL 1.1.6 on this volume.
PIL requires System Python 2.5 to install.
``````

I have Python 2.5.x on this machine.

NOTE:

Added a bounty. I am in real need of a way to test the image API locally on my Mac.

That's quite easy:

1. Install MacPorts
2. Install Python 2.5 with `sudo port install python25`
3. Install Pil for Python 2.5 with `sudo port install py25-pil`
4. In the Google App Engine launcher Preferences set `/opt/local/bin/python2.5` as Python Path *
5. Restart the Google App Engine launcher
6. Happy coding

* be sure to confirm it with an ENTER or it will not persist

## Experience with using h5py to do analytical work on big data in Python?

I do a lot of statistical work and use Python as my main language. Some of the data sets I work with though can take 20GB of memory, which makes operating on them using in-memory functions in numpy, scipy, and PyIMSL nearly impossible. The statistical analysis language SAS has a big advantage here in that it can operate on data from hard disk as opposed to strictly in-memory processing. But, I want to avoid having to write a lot of code in SAS (for a variety of reasons) and am therefore trying to determine what options I have with Python (besides buying more hardware and memory).

I should clarify that approaches like map-reduce will not help in much of my work because I need to operate on complete sets of data (e.g. computing quantiles or fitting a logistic regression model).

Recently I started playing with h5py and think it is the best option I have found for allowing Python to act like SAS and operate on data from disk (via hdf5 files), while still being able to leverage numpy/scipy/matplotlib, etc. I would like to hear if anyone has experience using Python and h5py in a similar setting and what they have found. Has anyone been able to use Python in "big data" settings heretofore dominated by SAS?

EDIT: Buying more hardware/memory certainly can help, but from an IT perspective it is hard for me to sell Python to an organization that needs to analyze huge data sets when Python (or R, or MATLAB etc) need to hold data in memory. SAS continues to have a strong selling point here because while disk-based analytics may be slower, you can confidently deal with huge data sets. So, I am hoping that Stackoverflow-ers can help me figure out how to reduce the perceived risk around using Python as a mainstay big-data analytics language.

We use Python in conjunction with h5py, numpy/scipy and boost::python to do data analysis. Our typical datasets have sizes of up to a few hundred GBs.

HDF5 advantages:

• data can be inspected conveniently using the h5view application, h5py/ipython and the h5* command-line tools
• APIs are available for different platforms and languages
• structure data using groups
• annotate data using attributes
• worry-free built-in data compression
• I/O on single datasets is fast

HDF5 pitfalls:

• Performance breaks down if an h5 file contains too many datasets/groups (> 1000), because traversing them is very slow. On the other hand, I/O is fast for a few big datasets.
• Advanced data queries (SQL-like) are clumsy to implement and slow (consider SQLite in that case)
• HDF5 is not thread-safe in all cases: one has to ensure that the library was compiled with the correct options
• Changing h5 datasets (resize, delete etc.) blows up the file size (in the best case) or is impossible (in the worst case); the whole h5 file has to be copied to flatten it again
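The core pattern h5py gives you - streaming over an on-disk dataset in bounded-memory chunks instead of loading it whole - can be sketched with nothing but the standard library (with h5py you would slice an `h5py.Dataset` in the same chunked way; the file name and chunk size below are arbitrary):

```python
import array
import os
import tempfile

def mean_from_disk(path, chunk_items=4096):
    """Mean of a flat binary file of float64s, read chunk by chunk so
    that only one chunk is ever held in memory."""
    itemsize = array.array('d').itemsize
    total, count = 0.0, 0
    with open(path, 'rb') as f:
        while True:
            buf = f.read(chunk_items * itemsize)
            if not buf:
                break
            chunk = array.array('d')
            chunk.frombytes(buf)
            total += sum(chunk)
            count += len(chunk)
    return total / count

# write a small demo file of float64s (stand-in for a 20GB dataset)
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, 'wb') as f:
    array.array('d', range(10000)).tofile(f)

assert mean_from_disk(path) == 4999.5
os.remove(path)
```

Quantiles and model fitting need more state than one running sum, but the shape is the same: the data lives on disk and only slices of it ever enter memory.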

## Object as a dictionary key

What must I do to use my objects as a key in a Python dictionary (where I don't want the "object id" to act as the key), e.g.

``````class MyThing:
    def __init__(self, name, location, length):
        self.name = name
        self.location = location
        self.length = length
``````

I'd want to use MyThing's as keys that are considered the same if name and location are the same. From C#/Java I'm used to having to override and provide an equals and hashcode method, and promise not to mutate anything the hashcode depends on.

What must I do in Python to accomplish this ? Should I even ?

(In a simple case, like here, perhaps it'd be better to just place a (name,location) tuple as key - but consider I'd want the key to be an object)

You need to add two methods:

``````class MyThing:
    def __init__(self, name, location, length):
        self.name = name
        self.location = location
        self.length = length

    def __hash__(self):
        return hash((self.name, self.location))

    def __eq__(self, other):
        return (self.name, self.location) == (other.name, other.location)
``````
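A quick check that this behaves as intended (the class is repeated here so the snippet stands alone, and the example values are made up):

```python
class MyThing(object):
    def __init__(self, name, location, length):
        self.name = name
        self.location = location
        self.length = length

    def __hash__(self):
        return hash((self.name, self.location))

    def __eq__(self, other):
        return (self.name, self.location) == (other.name, other.location)

d = {}
d[MyThing("bolt", "aisle 3", 10)] = "first"
d[MyThing("bolt", "aisle 3", 99)] = "second"  # same name/location: same key

assert len(d) == 1
assert d[MyThing("bolt", "aisle 3", 0)] == "second"
```

The same caveat as in C#/Java applies: don't mutate `name` or `location` while the object is being used as a dictionary key.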

## Python : Why use "list[:]" when "list" refers to same thing?

Hello,

Consider a list `>>> l=[1,2,3]`.

What is the benefit of using `>>> l[:]` when `>>> l` prints the same thing as former does?

Thanks.

It creates a (shallow) copy.

``````>>> l = [1,2,3]
>>> m = l[:]
>>> n = l
>>> l.append(4)
>>> m
[1, 2, 3]
>>> n
[1, 2, 3, 4]
>>> n is l
True
>>> m is l
False
``````
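One caveat worth knowing: the copy is shallow, so nested mutable objects are still shared between the two lists:

```python
outer = [[1, 2], [3, 4]]
copied = outer[:]

copied.append([5, 6])   # the top-level lists are independent...
assert len(outer) == 2

copied[0].append(99)    # ...but the inner lists are the same objects
assert outer[0] == [1, 2, 99]
```

For a fully independent copy of nested structures, use `copy.deepcopy`.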

## A Viable Solution for Word Splitting Khmer?

I am working on a solution to split long lines of Khmer (the Cambodian language) into individual words (in UTF-8). Khmer does not use spaces between words. There are a few solutions out there, but they are far from adequate (here and here), and those projects have fallen by the wayside.

Here is a sample line of Khmer that needs to be split (they can be longer than this):

ចូរសរសើរដល់ទ្រង់ដែលទ្រង់បានប្រទានការទាំងអស់នោះមកដល់រូបអ្នកដោយព្រោះអង្គព្រះយេស៊ូវ ហើយដែលអ្នកមិនអាចរកការទាំងអស់នោះដោយសារការប្រព្រឹត្តរបស់អ្នកឡើយ។

The goal of creating a viable solution that splits Khmer words is twofold: it will encourage those who used Khmer legacy (non-Unicode) fonts to convert over to Unicode (which has many benefits), and it will enable legacy Khmer fonts to be imported into Unicode to be used with a spelling checker quickly (rather than manually going through and splitting words which, with a large document, can take a very long time).

I don't need 100% accuracy, but speed is important (especially since the line that needs to be split into Khmer words can be quite long). I am open to suggestions, but currently I have a large corpus of Khmer words that are correctly split (with a non-breaking space), and I have created a word probability dictionary file (frequency.csv) to use as a dictionary for the word splitter.

I found this python code here that uses the Viterbi algorithm and it supposedly runs fast.

``````import re
from itertools import groupby

def viterbi_segment(text):
    probs, lasts = [1.0], [0]
    for i in range(1, len(text) + 1):
        prob_k, k = max((probs[j] * word_prob(text[j:i]), j)
                        for j in range(max(0, i - max_word_length), i))
        probs.append(prob_k)
        lasts.append(k)
    words = []
    i = len(text)
    while 0 < i:
        words.append(text[lasts[i]:i])
        i = lasts[i]
    words.reverse()
    return words, probs[-1]

def word_prob(word): return dictionary.get(word, 0) / total
def words(text): return re.findall('[a-z]+', text.lower())
dictionary = dict((w, len(list(ws)))  # word -> count, from a corpus file
                  for w, ws in groupby(sorted(words(open('big.txt').read()))))
max_word_length = max(map(len, dictionary))
total = float(sum(dictionary.values()))
``````
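As a quick sanity check before wiring in the Khmer frequency table, here is the same segmenter run against a tiny hand-made dictionary (the word counts are invented for illustration):

```python
def viterbi_segment(text):
    """Best segmentation of `text` into dictionary words, with its probability."""
    probs, lasts = [1.0], [0]
    for i in range(1, len(text) + 1):
        prob_k, k = max((probs[j] * word_prob(text[j:i]), j)
                        for j in range(max(0, i - max_word_length), i))
        probs.append(prob_k)
        lasts.append(k)
    words = []
    i = len(text)
    while 0 < i:
        words.append(text[lasts[i]:i])
        i = lasts[i]
    words.reverse()
    return words, probs[-1]

def word_prob(word):
    return dictionary.get(word, 0) / total

# invented counts standing in for the real frequency.csv data
dictionary = {'file': 2, 'files': 1, 'save': 3, 'ave': 1, 'as': 5}
max_word_length = max(map(len, dictionary))
total = float(sum(dictionary.values()))

words, prob = viterbi_segment('filesaveas')
# words == ['file', 'save', 'as']
```

The dynamic-programming table only looks back `max_word_length` characters at each position, which is why the running time stays linear in the length of the line even for a 100k-term dictionary.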

I also tried using the source java code from the author of this page: Text segmentation: dictionary-based word splitting but it ran too slow to be of any use (because my word probability dictionary has over 100k terms...).

And here is another option in python from Python word splitting:

``````WORD_FREQUENCIES = {
    'file': 0.00123,
    'files': 0.00124,
    'save': 0.002,
    'ave': 0.00001,
    'as': 0.00555
}

def split_text(text, word_frequencies, cache):
    if text in cache:
        return cache[text]
    if not text:
        return 1, []
    best_freq, best_split = 0, []
    for i in xrange(1, len(text) + 1):
        word, remainder = text[:i], text[i:]
        freq = word_frequencies.get(word, None)
        if freq:
            remainder_freq, remainder = split_text(
                remainder, word_frequencies, cache)
            freq *= remainder_freq
            if freq > best_freq:
                best_freq = freq
                best_split = [word] + remainder
    cache[text] = (best_freq, best_split)
    return cache[text]

print split_text('filesaveas', WORD_FREQUENCIES, {})

--> (1.3653e-08, ['file', 'save', 'as'])
``````

I am a newbie when it comes to Python, and I am really new to all real programming (outside of websites), so please bear with me. Does anyone have any options that they feel would work well?

The ICU library (that has Python and Java bindings) has a DictionaryBasedBreakIterator class that can be used for this.

## Is it possible to deploy a Python application on the Mac App Store?

Does Apple accept Python applications for distribution on the new Mac App Store?

If so, how should the application be packaged? Is py2app sufficient? Something else?

I packaged Pennywise, which is available on the Mac App Store. It's based on Virgil's moneyGuru, which uses Python, PyObjC, and py2app.

You will have to follow Apple's process for preparing an application for submission to the Mac App Store. Most importantly, you will want to add the proper keys to your Info.plist, and remove any automatic updating mechanism, e.g. Sparkle. It's not strictly required, but you will probably also want to implement receipt checking. Using Xcode will make the submission process much easier. You can look at the moneyGuru source code for an example of how to use Xcode as the final part of the build process.

Py2app embeds a copy of the Python framework in the bundle, so I don't know whether Apple would approve an application that only linked to the system framework. While the primary binary can't support PPC, Apple does not seem to check the architectures of binaries in embedded frameworks.

One final caveat: I wouldn't recommend this process for writing new applications. Using Python, PyObjC, and py2app seriously complicates the build process and introduces additional dependencies.

## R equivalent of python "_"?

Python has an identifier `_` that allows for storing the result of the last evaluation which makes it great for speeding up data exploration and introspection.

``````In [1]: 43 * 2
Out[1]: 86

In [2]: _ + 1
Out[2]: 87
``````

Is there a similar command in R?

'Tis a faff to type, but `.Last.value`:

``````> sqrt(2)
[1] 1.414214
> .Last.value
[1] 1.414214
``````

## Python - How to check list monotonicity

What would be an efficient and pythonic way to check list monotonicity?
i.e. that it has monotonically increasing or decreasing values?

Examples:

``````[0,1,2,3,3,4] # This is a monotonically increasing list
[4.3,4.2,-2]  # This is a monotonically decreasing list
[2,3,1]       # This is neither
``````

``````def strictly_increasing(L):
    return all(x < y for x, y in zip(L, L[1:]))

def strictly_decreasing(L):
    return all(x > y for x, y in zip(L, L[1:]))

def non_increasing(L):
    return all(x >= y for x, y in zip(L, L[1:]))

def non_decreasing(L):
    return all(x <= y for x, y in zip(L, L[1:]))
``````
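If the question is just "is this list monotonic in either direction?", the four predicates combine into one check (the `monotonic` name here is my own):

```python
def non_increasing(L):
    return all(x >= y for x, y in zip(L, L[1:]))

def non_decreasing(L):
    return all(x <= y for x, y in zip(L, L[1:]))

def monotonic(L):
    """True if L is entirely non-increasing or entirely non-decreasing."""
    return non_increasing(L) or non_decreasing(L)

assert monotonic([0, 1, 2, 3, 3, 4])   # monotonically increasing
assert monotonic([4.3, 4.2, -2])       # monotonically decreasing
assert not monotonic([2, 3, 1])        # neither
```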

## Python's 'in' operator equivalent to C#

With Python, I can use the `in` operator for a membership test as follows:

``````x = ['a','b','c']
if 'a' in x:
    do something
``````

What's the equivalent in C#?

Most collections declare a `Contains` method (e.g. through the `ICollection<T>` interface), but there's always the more general-purpose LINQ `Enumerable.Contains` method:

``````char[] x = { 'a', 'b', 'c' };

if (x.Contains('a'))
{
    ...
}
``````

If you think that's the 'wrong way around', you could write an extension that rectifies things:

``````public static bool In<T>(this T item, IEnumerable<T> sequence)
{
    if (sequence == null)
        throw new ArgumentNullException("sequence");

    return sequence.Contains(item);
}
``````

And use it as:

``````char[] x = { 'a', 'b', 'c' };

if ('a'.In(x))
{
    ...
}
``````

## How to use OpenCV in Python?

I have just installed OpenCV on my Windows 7 machine. As a result I get a new directory:

`C:\OpenCV2.2\Python2.7\Lib\site-packages`

In this directory I have two files: `cv.lib` and `cv.pyd`.

Then I try to use the opencv from Python. I do the following:

``````import sys
sys.path.append('C:\OpenCV2.2\Python2.7\Lib\site-packages')
import cv
``````

As a result I get the following error message:

``````File "<stdin>", line 1, in <module>
ImportError: DLL load failed: The specified module could not be found.
``````

What am I doing wrong?

As it was recommended here, I have copied content of `C:\OpenCV2.0\Python2.6\Lib\site-packages` to the `C:\Python26\Lib\site-packages`. It did not help.

My environment variables have the following values:

``````Path=C:\Program Files\MiKTex\miktex\bin;C:\OpenCV2.2\bin;C:\Python26;
PYTHONPATH=C:\OpenCV2.2\Python2.7\Lib\site-packages
``````

Do I need to change something? Do I need to add something?

I think my question is general: how to use a library? Probably I need to find a *.dll file somewhere? Then I need to use the name of the directory containing this file as a value of some environment variable? Or maybe I need to use `sys.path.append`? I also need to know how the way to call the library is related to the name of the file that contains the library.

It is interesting that when I type `import cv`, I get:

``````ImportError: DLL load failed: The specified module could not be found.
``````

But when I type `import opencv` I get:

``````ImportError: No module named opencv
``````

It has been suggested that I was using inconsistent versions of Python. In more detail, OpenCV tries to use Python 2.7 and I had Python 2.6. So, I have installed Python 2.7. It made a difference. Now I do not have the old error message, but I have a new one:

``````ImportError: numpy.core.multiarray failed to import
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: numpy.core.multiarray failed to import
``````

I have managed to resolve the problem by installing numpy. It took some time because I did not realize that there are different numpy installers corresponding to different versions of Python. Some details can be found in my answer to my own question (see below).

The problem was resolved. The following steps have been taken:

1. A new version of Python (version 2.7) has been installed.
2. After that I still was unable to run OpenCV because I had some problems with the numpy library.
3. I tried to install numpy but the installer did not see my new version of Python.
4. I deleted the old version of Python as well as links to the old version in the Path system variable.
5. After that the numpy installer was not able to finish the installation.
6. I have realized that I need to run another numpy installer that is associated with the Python 2.7. It can be found here.
7. Finally everything worked. I was able to "import cv".

## Django Storage Backend for S3

I'm looking for a good Django custom storage backend for use with Amazon S3.

I've been googling around and found a lot of blog posts with code snippets or half-baked gist.github.com one-off jobs. But I can't seem to find a solid, well-tested one.

Is there a widely accepted standard Amazon S3 Django custom storage backend out there? It doesn't particularly matter to me what Python backend library it uses--i.e., either `S3.py` or `boto` are fine.

Have you checked out django-storages? I would lean towards the boto library as I have had good experiences with boto.

## Storing a python set in a database with django

I have a need to store a Python set in a database for accessing later. What's the best way to go about doing this? My initial plan was to use a text field on my model and just store the set as a comma- or pipe-delimited string; when I need to pull it back out for use in my app, I could initialize a set by calling split on the string. Obviously, if there is a simple way to serialize the set so I can pull it back out of the db as a set when I need to use it later, that would be best.

If your database is better at storing blobs of binary data, you can pickle your set. Actually, pickle stores data as text by default (protocol 0), so it might be better than the delimited-string approach anyway. Just `pickle.dumps(your_set)` to store and `unpickled = pickle.loads(database_string)` later.
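A minimal sketch of the round trip (the tag values are made up; the database column just needs to hold the pickled string/bytes):

```python
import pickle

tags = set(["python", "django", "s3"])

# what you would store in a text/blob column on your model
stored = pickle.dumps(tags)

# ...later, after reading the raw value back out of the database:
restored = pickle.loads(stored)

assert restored == tags
assert isinstance(restored, set)
```

Unlike the delimited-string approach, this survives elements that themselves contain commas or pipes; the trade-off is that the stored value is opaque to SQL queries.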

## Django: "projects" vs "apps"

I have a fairly complex "product" I'm getting ready to build using Django. I'm going to avoid using the terms "project" and "application" in this context, because I'm not clear on their specific meaning in Django.

Projects can have many apps. Apps can be shared among many projects. Fine.

I'm not reinventing the blog or forum - I don't see any portion of my product being reusable in any context. Intuitively, I would call this one "application." Do I then do all my work in a single "app" folder?

If so... in terms of Django's `project.app` namespace, my inclination is to use `myproduct.myproduct`, but of course this isn't allowed (but the application I'm building is my project, and my project is an application!). I'm therefore led to believe that perhaps I'm supposed to approach Django by building one app per "significant" model, but I don't know where to draw the boundaries in my schema to separate it into apps - I have a lot of models with relatively complex relationships.

I'm hoping there's a common solution to this...

What is to stop you using `myproduct.myproduct`? What you need to achieve that roughly consists of doing this:

``````django-admin.py startproject myproduct
cd myproduct
mkdir myproduct
touch myproduct/__init__.py
touch myproduct/models.py
touch myproduct/views.py
``````

and so on. Would it help if I said `views.py` doesn't have to be called `views.py`? Provided you can name a function on the Python path (usually `package.package.views.function_name`), it will get handled. Simple as that. All this "project"/"app" stuff is just Python packages.
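For the Django of this era, that dotted-path view lookup can be sketched in the project's URL config; `myproduct.myproduct.views.index` is a hypothetical view following the layout created above:

```python
# urls.py (project root) -- minimal sketch; the view is referenced
# purely by its dotted Python path, not by importing it
from django.conf.urls.defaults import patterns

urlpatterns = patterns('',
    (r'^$', 'myproduct.myproduct.views.index'),  # hypothetical view
)
```

Django resolves the string to the function at request time, which is why the package layout is all that matters.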

Now, how are you supposed to do it? Or rather, how might I do it? Well, if you create a significant piece of reusable functionality, like say a markup editor, that's when you create a "top level app" which might contain `widgets.py`, `fields.py`, `context_processors.py` etc - all things you might want to import.

Similarly, if you can create something like a blog in a format that is pretty generic across installs, you can wrap it up in an app, with its own template, static content folder etc, and configure an instance of a django project to use that app's content.

There are no hard and fast rules saying you must do this, but it is one of the goals of the framework. The fact that everything, templates included, allows you to include from some common base means your blog should fit snugly into any other setup, simply by looking after its own part.

However, to address your actual concern, yes, nothing says you can't work with the top level project folder. That's what apps do and you can do it if you really want to. I tend not to, however, for several reasons:

• Django's default setup doesn't do it.
• Often, I want to create a main app, so I create one, usually called `website`. However, at a later date I might want to develop original functionality just for this site. With a view to making it removable (whether or not I ever do), I tend to then create a separate directory. This also means I can drop said functionality just by unlinking that package from the config and removing the folder, rather than having to carefully delete the right URLs from a global urls.py.
• Very often, even when I want to make something independent, it needs somewhere to live whilst I look after it / make it independent. Basically the above case, but for stuff I do intend to make generic.
• My top level folder often contains a few other things, including but not limited to wsgi scripts, sql scripts etc.
• Django's management extensions rely on subdirectories, so it makes sense to name packages appropriately.

In short, the reason there is a convention is the same as any other convention - it helps when it comes to others working with your project. If I see `fields.py` I immediately expect code in it to subclass django's field, whereas if I see `inputtypes.py` I might not be so clear on what that means without looking at it.

## Rebuild regex string based on match keywords in python

Example regular expression

``````regex = re.compile(r'^page/(?P<slug>[-\w]+)/(?P<page_id>[0-9]+)/$')
matches = regex.match('page/slug-name/5/')
>> matches.groupdict()
{'slug': 'slug-name', 'page_id': '5'}
``````

Is there an easy way to pass a dict back to the regex to rebuild a string?

i.e. `{'slug': 'new-slug', 'page_id': '6'}` would yield `page/new-slug/6/`

Here's a solution using `sre_parse`:

``````import re
from sre_parse import parse

pattern = r'^page/(?P<slug>[-\w]+)/(?P<page_id>[0-9]+)/$'
regex = re.compile(pattern)
matches = regex.match('page/slug-name/5/')
params = matches.groupdict()
print params
>> {'page_id': '5', 'slug': 'slug-name'}

# Map group numbers back to their group names
lookup = dict((v, k) for k, v in regex.groupindex.iteritems())

# Keep literal opcodes as characters, substitute named groups with their
# values, and drop the positional anchors ('at' opcodes for ^ and $)
frags = [chr(i[1]) if i[0] == 'literal' else str(params[lookup[i[1][0]]])
         for i in parse(pattern) if i[0] != 'at']
print ''.join(frags)
>> page/slug-name/5/
``````

This works by grabbing the raw opcodes via parse(), dumping the positional opcodes (they have 'at' for a first param), replacing the named groups, and concatenating the frags when it's done.
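A rougher alternative, under the assumption that the pattern contains only flat named groups (no nested parentheses or `)` inside character classes), is to substitute directly on the pattern string with `re.sub`; `rebuild` is a hypothetical helper, not part of any library:

```python
import re

def rebuild(pattern, params):
    # Replace each "(?P<name>...)" group with its supplied value,
    # then strip the ^ and $ anchors from the ends
    result = re.sub(r'\(\?P<(\w+)>[^)]*\)',
                    lambda m: str(params[m.group(1)]),
                    pattern)
    return result.lstrip('^').rstrip('$')

print(rebuild(r'^page/(?P<slug>[-\w]+)/(?P<page_id>[0-9]+)/$',
              {'slug': 'new-slug', 'page_id': '6'}))
# page/new-slug/6/
```

This skips the opcode walk entirely, at the cost of breaking on any pattern more complicated than the example.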