Monday, April 21, 2014

DIY animal surveys (part 2)

So after the success of my first forays into using motion detection to film the neighbourhood cats, I thought maybe I'd get a little bolder and set up the equipment next to the house. I originally decided against this because I thought any cats (especially kittens) would be scared off by the proximity to light and humans, but considering how bold the last one was, it'd be worth a try!

The next morning, a quick perusal of the food bowl suggested that nothing had been eaten, so I wasn't feeling particularly optimistic as I went to review the footage - yet again, I needn't have worried. This time I picked up not one, but two feline feeders, obviously working together:

After their first joint perusal of the offerings on display, they individually came back to the bowl...

... and laptop...

... again...

... and again - often looking around curiously at objects (or potentially off-screen cats) as they did so.

The black cat was evidently the wilier of the two - while the above photos were all taken in the space of five minutes, it returned a couple of hours later apparently having ditched its companion to see if any tastier food had magically appeared in the bowl that it could have for itself.

Tuesday, April 15, 2014

DIY animal surveys

Our neighbourhood is a cat neighbourhood. Walking along the streets at dusk or after dark, you can see at least a small handful of local cats prowling around or sitting smugly on their owners' driveways soaking up the last bit of heat of the day. So it didn't come as any surprise to me that every time I put outside the scraps that our own (indoor) cat, for whatever reason, didn't eat, they'd invariably be gone the following day.

I thought it worth investigating exactly which cat was taking these scraps. We've occasionally seen kittens wandering around our yard and more regularly around the neighbourhood, and I was a bit concerned for their welfare - so I thought it would be good to know if they were feeding in our yard and whether they could be collected for a rescue shelter.

So I got out my old crappy laptop with its old crappy webcam and set it up outside in our garage, somewhere that rain/wind wouldn't bother it (though it's old enough that I wouldn't have been too distraught if something did happen to it), and turned on a motion capture software program (I can thoroughly recommend yawcam - it's free!). I was unsure whether our nightly visitor would be put off by the outside light I left on for the webcam to be able to see, and my fiancee was understandably cynical as to whether the process would work at all. So come next morning, I rushed out to reclaim my laptop, and after flicking through the images captured during the night, felt vindicated at seeing this photo come up at 12.11am:
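For anyone curious about how motion-capture software like yawcam decides when to save an image, the core idea is simple frame differencing. Here's a minimal sketch in Python (yawcam's actual algorithm is its own business - the function name and the `pixel_threshold`/`area_fraction` parameters here are my own invention, and frames are just flat lists of greyscale values rather than real webcam input):

```python
# A minimal sketch of the frame-differencing idea behind motion-capture
# tools: compare consecutive frames pixel by pixel, and trigger a capture
# when enough of the image has changed.

def motion_detected(prev_frame, curr_frame, pixel_threshold=25, area_fraction=0.01):
    """Return True if enough pixels changed between two greyscale frames."""
    changed = sum(
        1 for a, b in zip(prev_frame, curr_frame)
        if abs(a - b) > pixel_threshold
    )
    return changed / len(curr_frame) > area_fraction

# A static night scene vs. the same scene with a cat-sized bright patch:
still = [10] * 10_000
with_cat = still[:]
for i in range(500):          # 5% of the frame changes brightness
    with_cat[i] = 200

print(motion_detected(still, still))     # False - nothing moved
print(motion_detected(still, with_cat))  # True - something's there!
```

Tuning the thresholds is the fiddly part in practice - too sensitive and a moth sets it off all night, too blunt and the cat strolls through unrecorded.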

I needn't have worried, though, as I'd forgotten two basic attributes of cats. Firstly, they are curious and attracted to new and interesting objects - and secondly, they're attracted to warm objects. The laptop that had been running all night out in the cold was both of these things! Thus, at 2.23am, the vision went entirely black, followed by images of the cat walking directly in front of the laptop sniffing at it:

Then an hour later at 3.14am it returned for another look at the laptop before scurrying off, not to be seen again in the footage (though it may well have returned - the laptop stopped recording when Windows decided to restart after downloading a security update... a lesson for anyone wanting to try this at home!)

It just goes to show that with the modern (and sometimes slightly less modern) technology we have available and take for granted, it's actually pretty easy to set up some fun and interesting projects to see what's just outside your door. It's probably worth noting, though, that the webcam didn't actually pick up any evidence of said cat eating the food left out for it, even though it was definitely gone the next morning!

Tuesday, January 7, 2014

Testing the Bechdel Test

So, recently this article came out showing that of the top 50 movies of 2013, those that passed the Bechdel Test made more money overall at the US Box Office than those that didn't. For those not in the know, the Bechdel Test evaluates whether a movie has two or more named women in it who have a conversation about something other than a man. The test seems simple enough to pass, but surprisingly quite a lot of movies don't! Of the 47 top movies that were tested, only 24 passed the test (and at least* seven of those were a bit dubious). Gravity was understandably excluded from the test because it didn't really have more than two named characters**, and apparently no-one has bothered to test the remaining two.

The article comes with this nifty little infographic:

I've seen a couple of complaints on the web from people saying that this isn't enough proof - the somewhat ingenuous reasoning I saw was that the infographic shows totals and not averages, so it can't show that the average Bechdel-passing film performs better. But while there are more passes (24) than fails (23), that difference in counts is nowhere near enough to account for the almost 60% difference in total gross sales. The averages can quickly be calculated from the infographic above: the average passing film made $176m and the average failing film made $116m - still a very substantial $60m difference!

A more reasonable criticism is that it may be possible that things just happened this way by chance. Maybe this year a handful of big films happened to be on the passing side, and if they had failed there'd be no appreciable difference? Well, we can test that as well using the information in the infographic. All we need to do is run what's called a randomisation test - this is where we randomly allocate the 50 tested movies in this list to the "pass", "fail" and "excluded" categories in the same numbers as in the real case (so, 24 passes, 23 fails, 3 excluded). We can use a random number generator to do this, or if you're playing along at home, put pieces of paper in a hat, whatever. We repeat this process a large number of times (I did it 10 million times) and see how often we can replicate that $60m difference between passing and failing films or better by chance alone.
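The randomisation test described above is only a few lines of code. Here's a sketch of the mechanics - note that the per-movie grosses below are placeholder numbers I've made up purely to show the method (the real test uses the 47 actual grosses from the infographic, and 10 million shuffles rather than 100,000):

```python
import random

# Placeholder grosses ($m) - NOT the real figures from the infographic.
passes = [250, 180, 160, 150, 120]          # films that pass the test
fails  = [150, 110, 100,  90,  80]          # films that fail

observed_diff = sum(passes) / len(passes) - sum(fails) / len(fails)

random.seed(42)
pooled = passes + fails
trials = 100_000
extreme = 0
for _ in range(trials):
    # Randomly re-deal the same grosses into "pass" and "fail" piles...
    random.shuffle(pooled)
    fake_pass = pooled[:len(passes)]
    fake_fail = pooled[len(passes):]
    diff = sum(fake_pass) / len(fake_pass) - sum(fake_fail) / len(fake_fail)
    # ...and count how often chance alone matches the observed difference.
    if diff >= observed_diff:
        extreme += 1

print(f"p-value: {extreme / trials:.4f}")
```

The "p-value" printed at the end is exactly the number reported below: the fraction of random re-deals that match or beat the real difference.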

It turns out that when you put your pieces of paper in a hat to make your own test, you'll only be able to beat the actual difference 0.71% of the time, or about 1 in 140 times. This is pretty good evidence that it's not a fluke and that the Bechdel Test really did influence movies' bottom lines this past year. One thing that we can't say based on this is whether this is a direct effect - i.e. that people consciously or subconsciously decided to go watch passing films over failing films. It could be that there is some indirect, or confounding effect, causing this phenomenon. For example, maybe directors who write films that pass the test tend to be better filmmakers in other ways which make people want to watch their films more? Either way, a trend towards more women in substantial roles in films can be no bad thing! (though it's worth mentioning that passing the Bechdel test by no means guarantees a "substantial role", and even failing movies can have their strong points - see this link)

* Having watched Man of Steel, I'd argue that it was pretty dubious too - I think the only non-about-a-man conversations between two women were one-sided one liners (hardly a conversation)... in any case, any feminist points it may have gained were swiftly taken away in my book by the female US Air Force Captain being mostly portrayed like a ditz rather than as a dedicated leader of people required for the rank. More here.
** So I'm told. I haven't watched it yet.

Monday, September 9, 2013

Senate number crunching

For those outside Australia, or for those Australians who are living (or, understandably, hiding) under a rock: we've just had our national election, at which all of the seats in our House of Representatives and half of the seats in our Senate (the house of review) were decided.

Though almost all of the seats in the lower house have been decided, which is normal for election night, the results for the Senate generally take days to weeks to be fully finalised. Though most of the seats are generally worked out fairly quickly - in particular, those seats going to the major parties - the remaining few seats are far less certain.

The use of the Single Transferable Vote system for the Australian Senate means that votes for minor parties go through a convoluted process of 'transfer' from candidate to candidate, which is further complicated by the Group Voting Ticket system and the deals made by minor parties with each other for preferences. What this means is that a party receiving a very small number of votes can obtain a seat in the Senate simply by the snowballing of preferences from other small parties.

This has been particularly apparent in this election, with the current estimated results by the ABC suggesting that as many as 8 seats are likely to go to parties outside of the main three (the Liberal/National coalition, the Australian Labor Party and the Australian Greens), with seats controversially likely to go to members from the Australian Sports Party and Australian Motoring Enthusiasts Party, which only received a tiny fraction of the initial vote. The popular media has already heavily covered these results even though they are still by no means yet certain.

Because of the above complexities, it can take only a small variation in voting to change the result for one or more seats. In this sense, the ABC's estimate is fairly naive: they assume that all voters have voted 'above the line', allowing their preferences to be decided by their chosen party (though this is not so far from the truth, with over 95% of voters generally doing so) and that the final results will be accurately represented by the results that have come in so far (between 50-80% of the vote for each state). Working out what potential bias there may be in the remaining votes is possible to a certain extent, as the voting information includes voting breakdowns for smaller regions (and can be compared with past elections), and some regions are known to have regular skews in their voting patterns.

What I've done here more simply, however, is to look at how much effect there might be in random fluctuations in the remaining votes to be counted. I assumed that the proportions of votes to each party so far were an accurate representation of the electorate's intent - based on those numbers, I randomly generated the remaining expected votes to be counted (based on current enrolment numbers and last election's turnout - around 94% on average).
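The vote-generation step of that simulation looks something like the sketch below. The party names and vote counts here are invented for illustration - the real thing uses the actual AEC counts per state, and the hard part (feeding each simulated total through the full STV preference-transfer process) is a much bigger job that I've left out:

```python
import random

# Treat the counted first-preference shares as the "true" distribution,
# then randomly draw the remaining uncounted ballots from it.
counted = {"ALP": 120_000, "LIB": 130_000, "GRN": 40_000,
           "PUP": 20_000, "SXP": 10_000}     # invented numbers
remaining_ballots = 80_000                   # enrolment x turnout - counted

parties = list(counted)
weights = [counted[p] for p in parties]

random.seed(1)
draws = random.choices(parties, weights=weights, k=remaining_ballots)

final = dict(counted)
for p in draws:
    final[p] += 1

print(final)   # one simulated final count - repeat 1000 times, then run STV
```

Each simulated final count then gets fed through the preference distribution to see which candidates actually win seats, and the 4%/1% figures below come from tallying those outcomes over many runs.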

For Tasmania, for example, my results usually follow the ABC's: two Labor and two Liberal senators, one Greens senator, and one Palmer United Party senator are elected, as expected. However, in about 4% of cases (over 1,000 simulated elections) a Sex Party member is elected instead of the Palmer United candidate, and in a further 1% of cases a third Liberal Party member gets in.

Taking into account the other sources of fluctuation mentioned above adds to this uncertainty in the results - the Geeklections site and the Truth Seeker blog go into much more detail. This only goes to show that surprises are not only possible but likely as the counting continues...

Monday, August 12, 2013

Why we can't really see the stars

If you're like me, you enjoy looking up at the stars at night and thinking about how far away they are, and such things. Recently, though, I started wondering why there aren't any high quality images of stars other than our sun. The star with the largest apparent size from Earth (after the sun, again) is currently believed to be R Doradus - and the photograph of that on Wikipedia isn't exactly spectacular:

I don't know anything much about astronomy so this seemed strange to me. If I can see the stars with my naked eye, what's to stop someone with a high powered telescope zooming in and getting good details?

The reason, as I found out, is that stars are so far away that their true angular size is far, far smaller than they appear to the eye. This is because every lens, including the human eye, has a limit to the resolution it can achieve. This is known as the 'diffraction limit': once light travels through an aperture (in our case, our pupils), the waves spread out before hitting the detector (our retinas), blurring each point of light into what is called an Airy disk. For a human with 20/20 vision, this blur is about an arcminute in size - so our sight can resolve something 1 inch in diameter from about 90 metres away. Every star we see is 'blurred' to about this size - which is why all the stars in the sky (except, once more, the sun) look the same size.

To be able to escape the diffraction limit, we need a much larger lens - which is why we use telescopes. However, once a telescope reaches about 10cm in diameter, another effect stops us from seeing the star - a phenomenon known as 'astronomical seeing'. This is the effect caused by variations in temperature and wind speed in the atmosphere causing the light to bend on the way to the receiver. The 'twinkling' that can sometimes be seen in stars is due to this effect, as the apparent position of the star moves with the constantly changing conditions in the atmosphere.

At a good astronomical site, astronomical seeing will allow for a resolution of around 1 arcsecond. As illustrated above, this is roughly sixty times smaller in length (in blue) than human vision (in white), but even this is not enough to see a star. Below is the resolution with atmospheric seeing in blue again, but with R Doradus pictured in red, with a radius of 0.057 arcseconds. The only reason that ground-based telescopes are able to image R Doradus at all is by using adaptive optics, which attempts to compensate for the atmospheric effects - and even this technology is currently only just enough to get a picture.

A large enough orbiting telescope would get past both of these effects - the Hubble Space Telescope is still one of the largest, with a mirror 2.4 metres in diameter*, which translates to a roughly 0.05 arcsecond resolution for visible light: only just enough to see R Doradus. 
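All of the resolution figures in this post fall out of one formula, the Rayleigh criterion: θ = 1.22 λ/D. Here's the calculation, assuming green light at roughly 550 nm and a 5 mm night-adapted pupil (note that the eye's effective acuity of about an arcminute is worse than its pure diffraction limit - retinal limitations, not just diffraction, set the eye's resolution):

```python
import math

RAD_TO_ARCSEC = 180 / math.pi * 3600   # radians to arcseconds

def diffraction_limit_arcsec(aperture_m, wavelength_m=550e-9):
    """Rayleigh criterion: smallest resolvable angle for a circular aperture."""
    return 1.22 * wavelength_m / aperture_m * RAD_TO_ARCSEC

print(f"Eye (5 mm pupil): {diffraction_limit_arcsec(0.005):.1f} arcsec")
print(f"10 cm telescope:  {diffraction_limit_arcsec(0.10):.2f} arcsec")
print(f"Hubble (2.4 m):   {diffraction_limit_arcsec(2.4):.3f} arcsec")
```

The 10 cm result lands right around the ~1 arcsecond seeing limit, which is why making ground-based telescopes bigger than that stops helping without adaptive optics; Hubble's 2.4 m aperture comes out just under the 0.057 arcsecond radius of R Doradus.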

So humanity has a long way to go yet before we can really see the stars. Now if only I could afford a telescope...

* the largest, the Herschel Space Observatory, has a mirror 3.5 metres in diameter.

Monday, July 22, 2013

Crappy days

Some days you just know are going to be long and painful. I have a few strategies to survive mine:

1. Sugary substances

Chocolate in any form is always appreciated, but on cold, miserable winter days a nice warming hot chocolate or Milo (link for those not in Milo-drinking countries) can make it all seem a little better.

2. Cute things on the internet

It's an internet cliche because it works - my girlfriend (who now has a blog!)  is usually my main source of such links. However, I always keep this one on standby for particularly bad days - it takes a cold soul indeed not to find this one cute:

3. Puzzles

When it's hard for me to concentrate on things I should be actually working on, I sometimes find doing some puzzles a good way to keep my brain ticking over. My current favourite is Project Euler (warning - non-programmers will really struggle!)

4. Music

I'm regularly surprised by how much music can help turn a mood around or focus the energies - I've never been much of an electronica fan, but iriXx's work has given me some of my most productive afternoons. I tend to listen to the same music over and over again before moving on to another artist - one on my current high-rotation list is Tasmanian act Enola Fall.

5. Writing

Sometimes it's good just to blow off some steam - as screaming in my office would probably cause some distress in my nearby colleagues, writing things down is a little safer. Chatting to friends online, writing blog posts, writing out to-do lists and plans - it all helps!

Friday, July 12, 2013

Mathematically possible - GWS making the AFL finals

(Update: thanks to Gazza White and the AFL subreddit for linking my post - it's already by far my most popular blog post!)

Towards the end of a sporting season, it's not unusual to hear the commentators call a team a "mathematical" chance to achieve some target - be that winning a premiership, making the finals, avoiding relegation, whatever. What this means is that there is at least one combination of events (usually discounting other teams being disqualified) that could bring it about, but it's almost vanishingly unlikely to occur.

Very seldom is this a more appropriate term than for the current chances of Greater Western Sydney getting into the top 8 and making the AFL finals this year - so much so that commentators probably aren't even aware that it is a mathematical possibility.

Here is the current AFL ladder as of the end of Round 15 (courtesy of FanFooty - note that the official AFL ladder is not actually up to date!)

Team P W D L For Agt Percent. Pts
Hawthorn 14 12 0 2 1645 1167 141 48
Geelong 14 12 0 2 1556 1216 128 48
Essendon 14 11 0 3 1483 1142 129.9 44
Sydney 14 10 1 3 1379 1048 131.6 42
Fremantle 14 10 1 3 1201 954 125.9 42
Richmond 14 9 0 5 1387 1190 116.6 36
Collingwood 14 9 0 5 1321 1225 107.8 36
Pt Adelaide 14 8 0 6 1317 1158 113.7 32
West Coast 14 7 0 7 1404 1277 109.9 28
North Melb. 14 6 0 8 1435 1210 118.6 24
Carlton 14 6 0 8 1331 1219 109.2 24
Adelaide 14 6 0 8 1288 1228 104.9 24
Gold Coast 14 5 0 9 1197 1341 89.26 20
Brisbane 14 5 0 9 1133 1451 78.08 20
W. Bulldogs 14 4 0 10 1102 1433 76.9 16
St Kilda 14 3 0 11 1129 1337 84.44 12
Melbourne 14 2 0 12 981 1775 55.26 8
W. Sydney 14 0 0 14 1003 1921 52.21 0

In green is our team of interest - Greater Western Sydney. They are currently winless at the bottom of the ladder, 8 wins behind the lowest top 8 side (Port Adelaide - in red). Unfortunately for GWS, there are also 8 games left in the season, so one thing is immediately clear: GWS must win all 8 of their games, and Port Adelaide lose all 8 of theirs, for GWS to be any chance of making the finals (the two teams do not play each other, so this accounts for 16 separate games). If this happens, the ladder looks like this:

Hawthorn 52
Geelong 52
Fremantle 46
Essendon 44
Sydney 42
Richmond 36
Collingwood 36
Port Adelaide 32
GWS 32
West Coast 28
Carlton 28
Adelaide 28
North Melbourne 24
Gold Coast 24
Brisbane 24
Western Bulldogs 16
St Kilda 16
Melbourne 8

This on its own is still not enough to guarantee GWS a place, however - there are 9 other teams on the ladder that are also striving for a spot in the top 8. For GWS to make the finals, none of these sides can finish with more than 32 points (8 wins) at the end of the season. Therefore every game that involves one of these sides - 46 games, excluding the 16 already accounted for by GWS and Port's games - can make or break GWS's finals chances. In particular, West Coast, Carlton and Adelaide cannot get any more than 1 win for the rest of their remaining games. In fact, there are only 10 games that don't affect GWS's chances - the games between top 7 sides, who already have more wins than GWS can possibly get and are guaranteed to place above them on the ladder.

Using a computer to calculate the possible combinations in which this could happen comes up with 150,744 ways for GWS to place equal 8th. Even assuming that all teams will have a 50-50 chance of winning each game for the rest of the season (discounting draws), an assumption which is very kind to GWS to say the least, this would give them a 150,744 / 2^62 = 3.27 in a hundred thousand billion chance of finishing equal 8th on points.

To put this into perspective, imagine a lottery where you have to pick which 6 balls out of 40 will be drawn - a 1 in 3.8 million chance. Now imagine only entering that lottery twice in your life - and winning both times. Even THAT would be twice as likely as GWS finishing equal 8th, on a good day.
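For anyone who wants to check the arithmetic, the probability and the lottery comparison can be computed directly from the numbers above - 62 relevant coin-flip games (the 16 involving GWS and Port, plus the 46 involving their rivals):

```python
import math

# 150,744 favourable outcomes across 62 fifty-fifty games
favourable = 150_744
p_gws = favourable / 2**62

# The lottery: pick which 6 balls out of 40 will be drawn
lottery = 1 / math.comb(40, 6)          # comb(40, 6) = 3,838,380
p_two_lotteries = lottery ** 2          # winning it twice in a lifetime

print(f"P(GWS equal 8th)    = {p_gws:.2e}")
print(f"P(two lottery wins) = {p_two_lotteries:.2e}")
print(f"Ratio               = {p_two_lotteries / p_gws:.1f}")
```

The ratio comes out at right around 2, which is where the "twice as likely" claim comes from.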

Notice that I've mentioned GWS finishing equal 8th. Even this herculean feat doesn't guarantee them a place - in the very best case scenario of the 150,744, there will be 6 teams vying for 8th place on 32 points (on average in these scenarios, there will be 9.6). So GWS's best-case scenario looks like this:


Geelong 76
Fremantle 66
Hawthorn 64
Essendon 60
Sydney 54
Collingwood 52
Richmond 48
Port Adelaide 32
Carlton 32
Adelaide 32
Gold Coast 32
Western Bulldogs 32
GWS 32
West Coast 28
North Melbourne 28
Brisbane 28
St Kilda 28
Melbourne 28

To make the finals, from this point they need to gain a higher percentage than the other 5 teams. Currently, they are on 52.21%, having scored only 1003 against their opponents' 1921 points. On the other hand, their currently best-placed opposition, Port Adelaide, has a percentage of 113.7%, scoring 1317 to their opponents' 1158.

This informative site tells us that the average score in an AFL game this season so far is 92.43, and the average margin is 36.92. So a roughly "average" game of AFL would see the winner score 110.89 points and the loser 73.97. If we assume that GWS's 8 winning games follow this scoreline, as well as Port's 8 losing games, then we end up with GWS having an improved percentage of 75.22% and Port with a dented percentage, but still plenty enough for finals, of 93.33%.

So, obviously just winning is not going to be enough for GWS to leapfrog Port and its other finals rivals. Let's assume the same as above, but this time work on the assumption that GWS has somehow found a secret scoring weapon and is able to rack up ridiculous scores while keeping their opponents to an average score of 73.97. They would need to be able to score, on average, 167.78 points in order to beat Port's percentage - an average winning margin of 93.8 on their run home - and hope that none of their other rivals have had a similar late-season percentage boost themselves. I'll leave it to someone else to work out how often a team has won 8 games in a row by an average margin of at least 93.8 in AFL history.
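The percentage arithmetic in the last two paragraphs can be reproduced in a few lines, using the For/Against totals from the ladder and the season-average winner/loser scores:

```python
# Season-average winner and loser scores (from a 92.43 average score
# and 36.92 average margin: 92.43 +/- 36.92/2)
avg_win, avg_loss = 110.89, 73.97

# Current For/Against totals from the Round 15 ladder
gws_for,  gws_agt  = 1003, 1921
port_for, port_agt = 1317, 1158

# Port lose their remaining 8 games by the average scoreline
port_pct = (port_for + 8 * avg_loss) / (port_agt + 8 * avg_win) * 100
print(f"Port's final percentage:  {port_pct:.2f}")

# What must GWS average per win to match that percentage,
# while holding opponents to the average losing score?
gws_final_agt = gws_agt + 8 * avg_loss
needed_total  = port_pct / 100 * gws_final_agt
needed_avg    = (needed_total - gws_for) / 8
print(f"GWS needed average score: {needed_avg:.2f}")
print(f"Average winning margin:   {needed_avg - avg_loss:.1f}")
```

Which confirms the ~167.78 points per game and ~93.8-point average winning margin quoted above.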

Our conclusion: is it possible for GWS to make the finals? Mathematically, yes. Are they going to make the finals? No. But it'd be a hell of a story if they did!