Matt Webb entertains the crowd with his descriptions of experiments on psychology undergraduates. Funny and interesting stuff.
Co-authoring a book for O’Reilly, currently called Mind Hacks. His co-author is a neuroscientist, unlike Webb. He’s talking about the cool things he saw while putting the book together.
He asks us to pretend he’s not using “functionality”, “design” or “architecture” in relation to the brain, as he gets in trouble whenever he does.
Vision. “What to do with too much information is the great riddle of our time.” - Theodore Zeldin, An Intimate History of Humanity. He’ll only consider part of our vision, ignoring motion blur, peripheral vision, etc. Rods detect brightness, cones detect colour, and the ‘resolution’ is much greater in the centre of the eye - you’re only really good at colour in the centre of your vision. Walk down the road with cars coming from behind you and see how long it takes to tell what colour they are: you can see them moving before you can tell the colour.
On any Mac or Windows machine there are little status icons in a corner of the screen - because they’re outside the centre of your vision they should be black and white.
There are two types of counting. Up to four or five items you can tell instantly how many there are; beyond that you have to count them. It takes 30 milliseconds per item up to four or five, 80 milliseconds per item after that.
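Those timings suggest a simple linear model of enumeration time. A rough sketch (the 30 ms and 80 ms rates and the four-or-five-item boundary are from the talk; the function, its name, and the choice of exactly four as the threshold are my own assumptions):

```python
def enumeration_time_ms(n_items, fast_ms=30, slow_ms=80, threshold=4):
    """Rough estimate of the time to tell how many items there are:
    ~30 ms per item within the instant-recognition range, ~80 ms per
    item once you have to count. Numbers taken from the talk."""
    fast = min(n_items, threshold)        # items recognised instantly
    slow = max(0, n_items - threshold)    # items that must be counted
    return fast * fast_ms + slow * slow_ms

print(enumeration_time_ms(3))   # 3 * 30 = 90 ms
print(enumeration_time_ms(8))   # 4 * 30 + 4 * 80 = 440 ms
```

The jump in per-item cost past the threshold is what makes small groups feel “instant” while larger ones feel like deliberate counting.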
Things that appear 3D stand out from flat things. Antelopes have white bellies, which counteracts the effect of light and shade and makes them look flatter. [Matt is wearing a black woolly hat, possibly so tigers don’t eat him.] Windows 3(?) got 3D buttons, which made them stand out and afford pressing.
We only need high resolution in the centre of the eye because it’s “cheap” to move our eyes. Birds don’t move their eyes, but they have small heads and move those a lot. You can’t see your own eyes move in a mirror: from six inches away, try looking at each pupil in turn - you can’t catch the movement. Our vision “shuts off” while we move our eyes, possibly by damping down the responses of the parts that respond to motion.
Attention is hard to explain. Is what’s happening inside your computer automatic? Some of it is - click a button and things happen: automatic, but triggered by you. But you don’t have completely free will - if it pops up a dialog box with a choice, you have to pick one option or the other. [The point of the analogy is lost on me.] You give more attention to where things are happening - if music is playing at the back of your car and you listen to it, you’re paying less attention to what’s in front of the car. If someone touches your arm you concentrate more on that area of your body. Some things grab attention more strongly: objects coming towards you, or bigger objects.
[He shows a couple of images that flick back and forth between two slightly different states, to see who notices what changes.]
He describes a test where subjects watch people playing basketball and count how many passes are made by the players in white shirts. Part-way through the game a man in a gorilla suit walks across the court, bangs his chest, then walks off. Afterwards a huge number of people report not having seen the gorilla. [He shows a video, which gets a huge laugh - less so the second time, when everyone is trying to count the passes. It would have been interesting to have seen this before knowing about the test. There’s a similar video here.]
Flickr’s daily zeitgeist: a grid of twelve images. A big photo fades in slowly, so you don’t notice it; then it suddenly shrinks, which grabs your attention - but by then you’ve missed what happened and the image has shrunk down. It should maybe be the other way round. (Example in Matt Jones’s right-hand column.)
Experiment. Watch a screen for cats or dogs appearing. If a cat appears, hit a button on the left; if a dog, hit a button on the right. If the cat appears on the left of the screen our response is quicker than if it’s on the right - our attention has been dragged to that side. Similar with looking at eyes: if they’re blue, hit the left button; if green, hit the right. But if blue eyes appear gazing to the right, we’re slower - we follow the gaze. BUT if the eyes are stylised, square eyes, the effect doesn’t happen: they don’t look enough like eyes. We’re optimised for faces. We’re also optimised for groups [he doesn’t say much about this].
Reading statements can influence your mood. We were given pieces of paper with statements like “I feel a little low today” (happy statements to one half of the room, sad to the other half), but we all seem fairly perky. He describes an experiment where one room of people is given happy statements and another depressive ones. After ten minutes the depressive group are sitting quietly while the happy group are all chatting, but both groups deny having been influenced by the statements.
If you see hostile faces you feel more hostile. Mood is contagious. So are gestures - people will scratch their face if you do, for example. Someone describes how you can pick up another person’s gestures on purpose, then gradually direct them where you want to go by getting them to imitate you. Rael describes a class whose students arrange beforehand to look slightly to the left of their professor’s eyes - the professor drifts to the left.
He describes an experiment where graduates were asked to rearrange sentences. For half the group the words were ordinary; for the other half the words related to the elderly: “patient”, “wise”, “grey”, etc. When the students left the room they were timed walking down the corridor, and the group primed with elderly words took longer to walk down it. People primed with “professor” words do better at Trivial Pursuit; primed with “supermodel” words, they do worse.