rockym93 dot net


Friday, 13 January 2017

Posted at 12:35PM under Fun With Neural Networks.

Why would you do that?

Why a neural network in particular? No reason at all. Neural networks are just one way of doing machine learning, but I think they're a pretty interesting one.

You take a program that's easy for another program to change. You give it lots of ways to change, and lots of chances to change, and a testable goal it's trying to achieve, with lots of examples to check against. The rest is basically trial and error on an enormous scale.

My network ran through 23650 generations, and took all night to do it. And at the end, my computer had programmed itself to write. Not just to write, but to write like me. Supposedly.

How is that not intriguing?

[Image: Neural Network]

Why do this at all though? There are a couple of reasons.

Firstly, because jumping into the deep end of something is the best way to understand it. Machine learning is playing a bigger and bigger part in our world, and I think that understanding how your world works is important. Also, learning new stuff is fun!

There was definitely an element of curiosity, too. Two of the most interesting bits of the internet I watch and listen to, Idea Channel and Flash Forward, have had a go at this. In fact, that's... where I got the idea. If Idea Channel can generate interesting nonsense with five years of scripts, then surely I could get something with over ten years of blog posts, right?

I wanted to see what a network would think I was like. It's almost a bit of self-discovery. What patterns do I have that a network would be able to discover?

I primed the network with the phrase, 'This post was written by a neural network.', and used a 'temperature' of 0.5, where 0 is entirely unoriginal and 1 is entirely too original. I generated a couple of posts, and picked the one that made me go 'whoa' - the post which not only included correct paragraphing and punctuation, but which had generated an entirely valid image embed code. Broken, sure - the filename it links to doesn't exist - but everything else about the link was perfectly formatted.
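If you're wondering what that temperature knob actually does: the network produces a score for every possible next character, and the temperature controls how much those scores get sharpened or flattened before one gets picked. Here's a rough sketch of the idea in Python - not the actual code I ran, just the general shape of it. (At a temperature of exactly 0 you'd skip the division and just take the top score.)

    import numpy as np

    def sample_with_temperature(scores, temperature=0.5):
        # Low temperature: sharpen the scores towards the safest bet
        # (entirely unoriginal). High temperature: flatten them towards
        # randomness (entirely too original).
        scores = np.asarray(scores, dtype=np.float64) / temperature
        probs = np.exp(scores - scores.max())   # softmax, shifted for stability
        probs /= probs.sum()
        return np.random.choice(len(probs), p=probs)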

Reading this stuff is weirdly compelling, even though it's utterly meaningless. Some of it is surprisingly poetic. I felt bad editing it, like I was changing something that someone had worked really hard on. I can't bring myself to delete any of it either, even though it's the opposite of precious - this thing could literally generate more stuff than I have time to read in a lifetime. And somehow I keep wanting to read more, to pan for gems, wondering where that train of thought was going even though I know there's nothing there doing any thinking. Something about it is close enough to real to keep you looking for personality and meaning in structured nonsense, and I think that's completely fascinating.

What was all that about?

0 comments

Thursday, 12 January 2017

Posted at 06:16PM under Fun With Neural Networks.

What was all that about?

This post was written by a neural network.

[Image: A network of units, connected together.]

A neural network is basically a collection of little units, all chained together in kind of a grid. Or, well, a network. These units are very simple computer programs. They get signals from the units behind them, weight those signals according to some settings, and if they meet a threshold, they signal the next units along - which do the same thing, all the way through the network.

So, for any given input, you get some units activating and passing on signals, and some not. Eventually, you get a result out the other end which depends on the input you put in, but in a pretty complex and convoluted way.
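If code makes more sense to you than prose, one of those little units is only a few lines. This is a toy sketch of the thresholded unit described above - real networks mostly use smoother activation functions, but the idea is the same:

    def unit(inputs, weights, threshold):
        # Weight the incoming signals, and fire if they add up to enough.
        total = sum(w * x for w, x in zip(weights, inputs))
        return 1.0 if total >= threshold else 0.0

    def layer(inputs, units_params):
        # A layer is a row of units looking at the same inputs; a network
        # is layers chained together, each one feeding the next.
        return [unit(inputs, w, t) for (w, t) in units_params]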

[Image: A network of units, activating.]

You adjust a network by changing the weightings that the units give their incoming signals, and the thresholds that control whether or not they send signals. Different settings in different parts of the network mean that different bits of the network - different paths through it - will light up and transmit signals from end to end. And different signals at the end combine to produce a different result.

[Image: Units with different weightings and thresholds.]

You assess how close the results are to the ones you want by using a cost function. If the network is spitting out the wrong results, the cost will be higher. If the result is right, the cost will be lower.

Cost is the mathematical difference between what the network is doing and what we want it to do - an error value. We pass this error value back through the network, and adjust the network's internal weights and thresholds to try and make the error smaller next time. And so, over many iterations, the network gets better at doing what it's supposed to be doing.
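Here's a deliberately crude, hypothetical sketch of that loop. Real libraries use calculus to work out which way to nudge every setting at once - a trick called backpropagation - but the trial-and-error spirit is the same: nudge a setting, and keep the nudge if the cost goes down.

    def cost(network, examples):
        # The mathematical difference between what the network is doing
        # and what we want it to do, averaged over all the examples.
        return sum((network(x) - target) ** 2
                   for x, target in examples) / len(examples)

    def train_step(settings, build_network, examples, nudge=0.01):
        # Trial and error on a small scale: try nudging each setting
        # both ways, and keep whichever direction lowers the cost.
        for i in range(len(settings)):
            before = cost(build_network(settings), examples)
            settings[i] += nudge
            if cost(build_network(settings), examples) > before:
                settings[i] -= 2 * nudge        # try the other direction
                if cost(build_network(settings), examples) > before:
                    settings[i] += nudge        # neither helped, put it back
        return settings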

[Image: These values are too different. And also two different!]

After enough training, all of those little settings are perfectly tuned, through trial and error on a massive scale. You'll have a program that gives you exactly the results you trained it to.

The advantage that this has over programming a computer the traditional way, with explicit instructions, is that a neural network can learn pretty abstract processes or concepts or patterns. It can learn to do things that would be very difficult to describe to a computer explicitly, logically or mathematically - but as long as you can reliably check that the result is correct, you can train a network to do it.

The result coming from the network could be a number. You could train it to do maths - any maths, in theory. Or it could be a category (probably expressed as a number) so you could train it to sort things. Or it could be a word (again, probably also expressed as a number), so you could train it to 'describe' things. As long as you can check it against something, you can train a network to do it.

This makes neural networks really good at processing large amounts of real-world data, where yes, there are patterns - but programming computers to detect them is hard. Things like image recognition, or human language, or speech processing. Neural networks let computers be a bit better at the kind of things that brains are usually good at.

Still with me? Good.

Our little network, with its inputs and results, is all well and good if you want just one answer for one question at a time, but what if - like many complex problems - your problem depends on context? This is where recurrent neural networks come in.

Recurrent networks accept some level of input from themselves. The result of one run of the program can depend on the result of the previous run. And the weighting of that result as input is one of the adjustments we can make, so we can attempt to change the output of the loop as a whole by changing how much influence the previous result has.

It's like an infinite chain of programs stuck together, and while we're getting a result from each individual one, we're also feeding that back into another copy of the program.
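In code, that feedback is just one extra argument. Each run takes the new input plus whatever the last run remembered, and hands back a result plus a new memory. (A hypothetical sketch with made-up weight names, not the real thing.)

    import numpy as np

    def rnn_step(x, state, W_in, W_state, W_out):
        # Blend this run's input with the previous run's state...
        state = np.tanh(W_in @ x + W_state @ state)
        # ...and produce a result, plus a new state for the next copy.
        return W_out @ state, state

    def run(sequence, state, W_in, W_state, W_out):
        # The 'chain of programs': the same step over and over, each
        # copy handing its state along to the next one.
        outputs = []
        for x in sequence:
            y, state = rnn_step(x, state, W_in, W_state, W_out)
            outputs.append(y)
        return outputs, state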

[Image: A network of networks, connected together.]

So essentially, what this network does is output a single letter - the single most likely letter to come next, given the letter that came before it as input - and before that, and before that, and before that too. It knows which letters are likely because it was trained to minimise the end result's difference from the text on this blog.
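The generation loop itself is surprisingly small - something like this hypothetical sketch, where 'step' stands in for the trained network's single-step function, 'encode' turns a character into numbers, and 'decode' picks a character from the scores (say, with the temperature sampling from the other post). The state is what carries the context along.

    def generate(step, encode, decode, seed, state, length=500):
        # Feed each predicted character straight back in as the next input.
        text = seed
        for _ in range(length):
            scores, state = step(encode(text[-1]), state)
            text += decode(scores)   # e.g. sample_with_temperature(scores)
        return text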

For example, it might see that, in that text, full stops appear in two places - sometimes in web addresses, and sometimes at the end of sentences. If it was just looking at one input and using overall likelihood, it'd get this wrong a lot - but it doesn't.

It knows that, given the previous characters contained a space or something else non-address-y, what comes after this full stop probably isn't a 'c' (to be followed by an 'o' and an 'm'), but another space. And as well as outputting that space as a result, it also passes it along. The next neural network along will know that a space is likely to be followed by a capital letter, given there was a full stop two characters back. Basically, it doesn't just know how to write letters - it knows how to write letters in context. It knows that it's not in the middle of 'writing a web address', because statistically they don't have spaces.

At least I assume that's how it does it - one of the weird things about neural nets is that you're never quite sure how they arrive at the conclusion they did, only that they work.

But even though it understands context, this is still a pretty simple network. It's only two layers deep, with a few thousand units. Which is why it's produced writing that looks and acts and feels a lot like mine, but that doesn't mean anything. Intuiting actual meaning from context is probably about a bazillion layers deeper - Google or Facebook might get close, but not this dinky little thing I ran on my laptop.

Why would you do that?

Cheers to Morgan for advice, links and fact-checking.

0 comments

Thursday, 12 January 2017

Posted at 04:28PM under Fun With Neural Networks.

Neurally and a probably can for a many.

This post was written by a neural network. It was see the see the saying the I did find of puts of see the real for a difficies the people of the real person.

We were reason the really side in the week a next the contant and you sure it out the bunch of the traying I really a end and though, he was the same places that has all that the media pretty free put the part of two his modition which is a game and some thing the other and get the rest awesome.

The very decided to digning a lot of us the other care some of the computer the thing up the thought in the more how the side the print there all the days, in the such my better from come hours of the single had a huge in the same and less a break of the most offge the back has the Barton. I did and his guy here a long the reason, so the other probably old the same the most is the some from because the come of the world me to me created with the tame it. There was with a put the musay which is the flassical day because with a reserving. The first the talky seems the only on any test with a most graditions interesting it me.

And you had to do in a granging the day is a long the thing the same run of a realised the tame which is the subschology. I see an all the rang and meass we don't get being hards and its my did the sitter and hurred time thing the officult and a grive in in what the can some going the sand out of mays the The the parts of still have to like across the most and people of the guy to my say and then the same through the second as easy of the time it on the most of the packed and been at the carring is every good the fact the considelity interesting a why it was a little most like something when the some and see to pretty show the resally, and we said it was looks and people realised the Rone (elso for the back that decided the dishand and drink on the than having in the round in the weaders that I do that they all the time it.

In Christ thing Score 10, I have the dessing the starts like the conscan which is metres and sure of the real pricition when I want to be some way that like it was of the story find of the fact which is awesome on a particular history than I have not something the symit is a little really any accerned and see the most point where I'm need, and the dune, and there we see the talking now the most people in my ramen my bare that my fair between something a thought and decided to me the same be a side this which still cars with the dession is the last perfectly and doesn't the mounters and the way the cool you can shoppy way to the first doing and the best fleating and progranse much of the best they just probably it one.

They are. I think. I think it is so it dumptions. Un, 12. The cliend and the game is in first under at seaving in anywhere.

I put it was a slice to be mandless have to quite of the mummore is been some getting between the time clays that's do anything one weird, there kind of day as awesome with the sauding coolly something I can plot it.

The straight their tend the constrold up because the day for the camera some in the train the most with a stundt and thing in the holidays don't want that the middle of the best for the call the feel of a really a guide first. It was not the sause of the one time the current down the right on away the onder the control mame side of the probably cool.

Probably Australia place stuff with the friends and really don't think the since the really being something by the sand it, in the stand. I think the first set of the guys as a lot of the pale of the movie in the lot of Geoger movie is make draws the floor (in the only could be any guy of because the most and one particularly much over the carrian the crised that an and expect off the fact in some about the most least cool thing my just almost the find of me tanded a had [graip when you had more like those in in now they're through and in the whole need to do the mild of the most in its on the like the some that it's any standal in a bit maked with a really going to get a lot of the one in the restem in a mark of the seat.

And it and was in the way the change the can can funny in the might like the editing the day.


What was all that about?

3 comments

Thursday, 29 December 2016

Posted at 10:33AM under IRL.

Looking back on 2016

Get it? Looking back? Because selfie cameras... look... back?

Here are some things you will see in my selfies for this year.

You will see me testing a new phone - underwater.

[Image: underwater]

You will see me taking my gloriously generic pseudonymous profile picture at the end of Busselton Jetty.

[Image: avatar]

And you will see me drinking quite a lot of progressively more interesting beer.

[Image: untappd]

You will see me making fun of my new work uniform, and making fun of myself in Ikea.

[Images: uniform, ikea]

You'll see us going to watch our teams play, and going to King's Park on Valentine's Day, and going out in the rain on a whim.

[Images: football, valentines, rain]

You'll see me in Seabird, Serpentine, Bindoon, Prevelly, Lancelin and on Rottnest.

[Images: seabird, serpentine, bindoon, prevelly, lancelin, rottnest]

You might even occasionally see me taking an exceptionally poorly invigilated exam.

[Image: exam]

Here are some things that you won't see.

You won't see increasingly divisive politics take hold all around the world. You won't see innocent people locked up or brutally murdered. You won't see artists and scientists dropping like flies.

You won't see a thousand other terrible things, because we don't take selfies when we're sad. Call it optimism, call it selection bias, call it whatever you want. That's all I've got.

Here's to finding the bright parts of 2017.

2 comments

Friday, 02 December 2016

Posted at 02:48PM under Movies Wot I Have Seen.

The Medium Was The Message

These are some thoughts on Arrival. Warning: spoilers abound.

Cheers to Jess, Morgan and Haydn for helping to form many of the ideas in this review. Seriously - see this movie, and take some friends. You won't be disappointed.


If you haven't seen Arrival, and you don't think you're likely to, the premise is this: Aliens ("Heptapods") arrive on Earth, and we have no idea why they're here. Louise, a linguist, leads the translation effort involved in first contact, and as she learns their language she realises its non-linear nature is rewiring her brain to perceive time differently. By the end of the film, she's able to see into her own future and prevent a crisis. It turns out that teaching us this language is the Heptapods' entire purpose on Earth, because they need our help - thousands of years from now.

[Image: arrival]

1. Language

The most interesting thing about Arrival is how plausible it all is. You know, apart from the whole massive alien monoliths thing.

The Sapir-Whorf hypothesis, that language shapes the way we think, is a real thing. While linguists like to argue about how strong the effect is, there's a lot of evidence that it actually happens. Moreover, there's some evidence that specifically the metaphors we use to talk about time change the way we see it.

I'm less qualified to talk about the physics, but the fact that most of physics doesn't seem to care about time might go some way to explain what happens in the story, if not in reality.

This is straight up good science fiction. It takes concepts we're familiar with, science that we know, and spins them off into the unknown. Like all good SF, it asks "what if?".

The interesting thing is this: we see the effects of highly advanced physics all the time in science fiction. What we don't see so often is the effects of highly advanced culture - and the merging of the two. Why should creatures with the technology to cross whatever gulf separates them from us (time, space, or dimension) be limited by puny human concepts of what language can and can't be?

[Image: first contact]

2. Anatomy

Heptapod also makes sense from another perspective - their anatomy matches the properties of their language. The way that humans communicate is tied to the organs we use to do it. When we speak, we can only say one thing at a time, and we have to serialise our ideas before we transmit them. Sounds, words and sentences are all inextricably tied to time, and have to be experienced in correct order for them to make sense. When we sign, we have the same limitations. One sign at a time, deployed in order.

Our written language developed out of our spoken ones, because we speak first. There's specialised hardware and (maybe) software installed on humans to make speaking possible. There's no such specialisation for writing. So when we write, it's as a representation of the way we speak. It's serialised, sequential, and temporal.

Heptapods have no such restrictions, at least not with their writing. The equipment to write is built into their bodies, and the ability to perceive it doesn't depend on receiving it in order. We might imagine cuttlefish, with the chromatophores in their skin, developing a similar way of communicating, transmitting entire complex concepts at once without the need to do things one bit at a time.

Or hey - we do some of this as humans too. Not in our writing, which is so tied to speech, but in the purely visual forms of communication. Art or photography, maybe, but what comes closest is graphs. A graph can show you many data points, multiple variables, and the relationships between all of them at once, and it does it in a way that's highly conventional. Photos and paintings leave much interpretation up to the viewer, but a language (and a graph) has agreed-upon meanings and symbols (like words, or axes) that communicate much the same concept to everyone.

In fact, I think the experience of reading Heptapod would be a lot like reading highly conventionalised, very abstract graphs. Instead of showing data points and the relationships between them, they show concepts and the relationships between them. Whether or not that would give you the ability to see through time is probably something you should ask a mathematician.

[Image: heptapod language]

3. The Gift

And this is the really interesting part of Arrival for me. We leave Louise right as she's becoming conversant in Heptapod. She has the basics of the language down, but she isn't yet truly fluent. The events of the film are resolved, but their consequences aren't.

How much will her ability to see through time grow? What is the nature of her predictions? Can she act to change them, or are they set? Does she see probable timelines, or fixed ones? As she teaches the language to others (as she's shown to be doing), how do their gifts interact? Can she see other people's futures, or just her own? What about changing others' futures? How does human society change when prediction like this is commonplace? Economics? Society? Relationships?

Or, as I like to think, does becoming fluent in a language outside of time render all of those questions meaningless? Does she become so fluent in time that she navigates all that as deftly and unconsciously as we navigate the grammar of our native language, unaware of its rules and yet still following them perfectly?

Are we, ironically, only struggling with these questions because our language doesn't allow us to frame the world as hers now does? Is being bound to time an essential part of the human experience?

And once we don't have to struggle with those questions, to ponder what our past means and our future holds, are we still human? And if we're not, are we caterpillars not understanding how butterflies fly, or locals not comprehending a colonising force?

4. Arrival

This is why Arrival is such an amazing film, and one that you should see if you haven't already. The film is intense, and well shot, with a beautiful soundtrack - but the best part of the movie is the questions it leaves you with, and the literal hours of discussion that those questions spark. This is the mark of brilliant science fiction, and I wish we had more of it hitting traditional cinemas than we do.

1 comment
