Spent some time looking at Amazon's DynamoDB. According to this link, a beta tester loaded multiple terabytes of data at a sustained 250K writes per second for 3 days, with average read latency close to 2 ms and a 99th percentile of 6-8 ms.
If these numbers hold up, this is a game changing technology.
The DynamoDB range table type seemed a bit wonky until I realized it's just a one-to-many table: the primary key is the foreign key (of the 'one' table), with the ability to sort on one attribute. The other table type is just a simple hash.
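To make the distinction concrete, here's a minimal in-memory sketch of the two table shapes. The class names (`HashTable`, `RangeTable`) are my own illustration, not any DynamoDB API; the point is just the semantics: a hash table holds one item per key, while a range table holds many items per hash key, kept sorted by the range attribute.

```python
from bisect import insort

class HashTable:
    """Simple hash table type: one item per hash key."""
    def __init__(self):
        self.items = {}

    def put(self, hash_key, item):
        self.items[hash_key] = item

    def get(self, hash_key):
        return self.items.get(hash_key)

class RangeTable:
    """Hash + range table type: many items per hash key, sorted by
    range key. This is the one-to-many shape: the hash key plays the
    role of the foreign key of the 'one' side."""
    def __init__(self):
        # hash_key -> sorted list of (range_key, item)
        self.items = {}

    def put(self, hash_key, range_key, item):
        insort(self.items.setdefault(hash_key, []), (range_key, item))

    def query(self, hash_key):
        # All items for one hash key, already ordered by range key.
        return self.items.get(hash_key, [])
```

So a query by hash key hands back the whole 'many' side in sorted order, which is exactly what you want for things like a user's timestamped activity.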
The model is a bit limited, but still very useful. For many applications, a hybrid approach with standard SQL tables will still be needed. But lots of heavily used data can be put in DynamoDB, particularly data related to users. The range table is perfect for this type of data, since it's a one-to-many relationship (of users to things).
I think a killer application for DynamoDB is an object persistence layer. Think of an ORM without the R. It won't have SQL-like querying capability because of the limitations of the data model, but many things are doable: direct object pointers are straightforward, sets of strings and numbers can be translated to sets of objects, and the range table can be exploited in a number of ways.
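Here's a hypothetical sketch of the "direct object pointer" idea, assuming a plain key-value store standing in for a DynamoDB hash table. Everything here (`Store`, `save`, `load`, the `refs` parameter) is made up for illustration: the trick is that an attribute holding another object gets persisted as that object's key, and following the pointer is just another `get`.

```python
class Store:
    """Illustrative in-memory stand-in for a DynamoDB hash table."""
    def __init__(self):
        self.items = {}

    def put(self, key, attrs):
        self.items[key] = attrs

    def get(self, key):
        return self.items.get(key)

def save(store, key, obj, refs=()):
    """Persist obj's attributes as an item. Attributes named in `refs`
    hold other (already saved) objects and are stored as their keys --
    a direct object pointer instead of a relational join."""
    item = {}
    for name, value in vars(obj).items():
        if name.startswith("_"):
            continue                      # skip bookkeeping attributes
        item[name] = value._key if name in refs else value
    store.put(key, item)
    obj._key = key                        # remember where we live

def load(store, key, cls):
    """Rebuild an object of class cls from its item. Pointer attributes
    come back as keys; the caller follows them with another load."""
    obj = cls.__new__(cls)
    obj.__dict__.update(store.get(key))
    obj._key = key
    return obj
```

Loading a pointer attribute yields a key, so dereferencing it is one more `load` call; lazy loading, caching, and cycle handling are where a real persistence layer would earn its keep.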
In other words, many real world object relationships can be easily modeled with DynamoDB.