02 Nov 2014
So I've been investigating Docker recently, and it's pretty great.
One cool feature which has been particularly useful for me is mounting a directory on the host machine inside the Docker container.
It is great for a dev environment: you can pick up new changes without building a new image or restarting the container.
If you are running an app which auto-reloads, you can run a file in a mounted folder, edit the files locally (on the Docker host) and see the changes reflected in the running container. Very handy for scripting languages (like Ruby or PHP) as well as cross-platform languages (like Java).
I'll go through how I have been doing it for a Sinatra app running with Shotgun (auto-reloader for rackup).
First you need an image with both Sinatra and Shotgun.
I pushed one here, but if you prefer to make your own, it was basically just this (followed by a docker commit to save the result as an image):
docker run training/sinatra gem install shotgun
Now to run it so it auto-reloads from your local folder run a command like this:
docker run -d -p {port-on-host}:9393 -v {absolute-local-path}:/app icchan/shotgun-sinatra shotgun --host 0.0.0.0 /app/{your-app}.rb
So here's a breakdown:
-d
runs the container as a daemon (in the background)
-p {host-port-to-expose}:9393
exposes and publishes port 9393 (shotgun default port) to your host
-v {absolute-local-path}:/app
mounts the local folder to /app in the container
icchan/shotgun-sinatra
is the name of the shotgun image
shotgun --host 0.0.0.0 /app/{your-app}.rb
is the command to run in the container; the 0.0.0.0 makes it accessible from outside the container (the default is 127.0.0.1)
So if your app is in /home/ruby/app/hello.rb
on your host and you want to run it on 8080 you would run this command:
docker run -d -p 8080:9393 -v /home/ruby/app:/app icchan/shotgun-sinatra shotgun --host 0.0.0.0 /app/hello.rb
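Here hello.rb could be as simple as this sketch (any rackup-compatible Sinatra app will do; the route and message are just for illustration):

```ruby
# hello.rb - a minimal Sinatra app served by shotgun (a sketch)
require 'sinatra'

get '/' do
  'Hello from inside the container!'
end

run Sinatra::Application # shotgun/rackup treats this file as a rackup config
```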
It will now be accessible at port 8080 on your host. If you are using boot2docker it's probably http://192.168.59.103:8080.
Now edit the file /home/ruby/app/hello.rb
on your host machine, and the changes will automagically be reflected in the running app :)
18 Oct 2014
This week at work I had to build a pretty simple geospatial query API with Go and MongoDB, with some help from the mgo MongoDB driver.
MongoDB has changed a lot since I last used it, so I had to learn how to use it again.
In short it now uses this thing called GeoJSON which looks like this:
"location" : {
"type" : "Point",
"coordinates" : [ 151.20699, -33.867487 ]
}
It can actually represent a lot of other things, like lines and polygons, but we are only interested in single points today.
Enough background, let's do this.
This guide was written using Go v1.3 and MongoDB v2.6.5, and assumes you have Go installed and a MongoDB instance running somewhere.
To do a geospatial query in MongoDB using the longitude and latitude of places on Earth, you'll probably want to use a "2dsphere" index.
So let's insert some documents into MongoDB and set a "2dsphere" index; type this in your mongo console:
db.shops.insert({ "name" : "Shop in Sydney", "location" : { "type" : "Point", "coordinates" : [ 151.20699, -33.867487 ] } })
db.shops.insert({ "name" : "Studio Alta", "location" : { "type" : "Point", "coordinates" : [ 139.7011064, 35.692474 ] } })
db.shops.insert({ "name" : "ビックロ", "location" : { "type" : "Point", "coordinates" : [ 139.70328368, 35.69146649 ] } })
db.shops.insert({ "name" : "Keio Plaza Hotel", "location" : { "type" : "Point", "coordinates" : [ 139.69489306, 35.68999278 ] } })
db.shops.insert({ "name" : "明治神宮", "location" : { "type" : "Point", "coordinates" : [ 139.69936833, 35.67612138 ] } })
db.shops.insert({ "name" : "Hachiko", "location" : { "type" : "Point", "coordinates" : [ 139.7005894, 35.65905387 ] } })
db.shops.insert({ "name" : "Haneda Airport", "location" : { "type" : "Point", "coordinates" : [ 139.78569388, 35.54958159 ] } })
db.shops.ensureIndex({location:"2dsphere"})
This is just a bunch of places around Tokyo with GeoJSON coordinates. The last line is the important one as it sets the geospatial index.
Try out a geospatial query in the mongo console like this:
db.shops.find({ location: { $nearSphere: { $geometry: { type: "Point", coordinates: [139.701642, 35.690647], }, $maxDistance : 50000 } } } )
This queries for the shops near a point in Shinjuku, Tokyo, up to 50km away, sorted by distance. You should see all the shops except for "Shop in Sydney" (which is much more than 50km away).
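Note that $maxDistance for a 2dsphere query is given in metres. As a quick sanity check on the distances involved, here is a haversine sketch in Go (the helper function and the Earth-radius constant are mine, not part of the query API):

```go
package main

import (
	"fmt"
	"math"
)

// haversine returns the approximate great-circle distance in metres
// between two (longitude, latitude) pairs, matching GeoJSON's
// [longitude, latitude] ordering.
func haversine(lon1, lat1, lon2, lat2 float64) float64 {
	const earthRadius = 6371000.0 // mean Earth radius in metres
	toRad := func(d float64) float64 { return d * math.Pi / 180 }
	dLat := toRad(lat2 - lat1)
	dLon := toRad(lon2 - lon1)
	a := math.Sin(dLat/2)*math.Sin(dLat/2) +
		math.Cos(toRad(lat1))*math.Cos(toRad(lat2))*math.Sin(dLon/2)*math.Sin(dLon/2)
	return 2 * earthRadius * math.Asin(math.Sqrt(a))
}

func main() {
	// Shinjuku query point vs. two of the inserted documents
	fmt.Printf("Shinjuku -> Sydney: %.0f m\n",
		haversine(139.701642, 35.690647, 151.20699, -33.867487))
	fmt.Printf("Shinjuku -> Studio Alta: %.0f m\n",
		haversine(139.701642, 35.690647, 139.7011064, 35.692474))
}
```

Sydney comes out in the thousands of kilometres, which is why it falls outside the 50000 metre radius while the Tokyo points stay inside.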
Let's write some code. If you don't have mgo yet, you can get it with this:
go get gopkg.in/mgo.v2
If you'd rather read code than words, you can view the full source here:
https://gist.github.com/icchan/bd42095afd8305594778
First create some structs to hold the data:
type ShopLocation struct {
	ID       bson.ObjectId `bson:"_id,omitempty" json:"shopid"`
	Name     string        `bson:"name" json:"name"`
	Location GeoJson       `bson:"location" json:"location"`
}

type GeoJson struct {
	Type        string    `json:"-"`
	Coordinates []float64 `json:"coordinates"`
}
Then get a mongodb session however you normally do it. Probably something like this:
cluster := "localhost" // mongodb host

// connect to mongo
session, err := mgo.Dial(cluster)
if err != nil {
	log.Fatal("could not connect to db: ", err)
}
defer session.Close()
Now do your query. Note the query is basically the same as in the console, but built with bson.M maps in Go.
// search criteria
long := 139.701642
lat := 35.690647
scope := 3000 // max distance in metres

var results []ShopLocation // to hold the results

// query the database
c := session.DB("test").C("shops")
err = c.Find(bson.M{
	"location": bson.M{
		"$nearSphere": bson.M{
			"$geometry": bson.M{
				"type":        "Point",
				"coordinates": []float64{long, lat},
			},
			"$maxDistance": scope,
		},
	},
}).All(&results)
if err != nil {
	log.Fatal("query failed: ", err)
}
If you run the gist from above, you should see an output like this:
[
{
"shopid": "544215c13ef9cb393418ea25",
"name": "ビックロ",
"location": {
"coordinates": [
139.70328368,
35.69146649
]
}
},
{
"shopid": "544215c13ef9cb393418ea24",
"name": "Studio Alta",
"location": {
"coordinates": [
139.7011064,
35.692474
]
}
},
{
"shopid": "544215c13ef9cb393418ea26",
"name": "Keio Plaza Hotel",
"location": {
"coordinates": [
139.69489306,
35.68999278
]
}
},
{
"shopid": "544215c13ef9cb393418ea27",
"name": "明治神宮",
"location": {
"coordinates": [
139.69936833,
35.67612138
]
}
}
]
22 Apr 2011
Recently I've had to deal with Japanese character validation a fair bit, and it's quite a bit trickier than the usual check that an input only contains letters, numbers and underscores.
A pretty standard white list validation would be something like:
preg_match("/^[a-zA-Z0-9]+$/",$input)
Which says all the characters from the start to the end are either letters or numbers. Of course this doesn't work for Japanese.
The first thing you need to do is add the /u modifier at the end to enable Unicode matching. Next you need to put the Japanese character ranges inside the square brackets [].
To do this we use the form \x{####}, where #### is the Unicode code point of the character. You can look the codes up in a Unicode chart.
So if you want hiragana you would use something like: \x{3041}-\x{3096}
for katakana you would use: \x{30a1}-\x{30fc}
For common kanji use: \x{4e00}-\x{9faf}
For uncommon kanji use: \x{3400}-\x{4dbf}
So finally, if we want to whitelist letters, numbers, hiragana, katakana and common kanji, we could do something like:
preg_match("/^[a-zA-Z0-9\x{3041}-\x{3096}\x{30a1}-\x{30fc}\x{4e00}-\x{9faf}]+$/u",$input)
You can also add underscores or dashes or whatever suits your needs.
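Putting it all together, here is a small runnable sketch (the function name is mine, just for illustration):

```php
<?php
// Whitelist check for ASCII alphanumerics plus hiragana, katakana and
// common kanji. The /u modifier enables UTF-8 (Unicode) matching.
function isValidJapaneseInput($input) {
    return preg_match(
        '/^[a-zA-Z0-9\x{3041}-\x{3096}\x{30a1}-\x{30fc}\x{4e00}-\x{9faf}]+$/u',
        $input
    ) === 1;
}

var_dump(isValidJapaneseInput('Tokyo東京123')); // true
var_dump(isValidJapaneseInput('hello world')); // false: space is not in the whitelist
```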
21 Apr 2011
So at my new job I've been tasked with migrating our awfully designed Postgres database to MongoDB, while at the same time migrating our PHP/CodeIgniter REST service to Java/Spring MVC.
Our data is a deep/wide object graph with a complex and variable set of properties, so it is actually well suited to NoSQL or a document database or whatever you wanna call it.
Since I'm migrating a system and not creating a new one, I had a few constraints. The first one I'm going to tackle is that we need to have auto-incremented integer primary keys for our entities. MongoDB uses ObjectIds by default, so I had to find another way.
I found the solution here, and it was all pretty easy. If you are too lazy to read it, it basically says to keep a separate collection to store your sequences (kinda like Postgres does automatically with its sequences).
I tested this out in the admin console and it was all great. But the Java driver is not quite as straightforward as the console; it's not just JSON, it has lots of new objects like BSONCallback, BasicDBObject and MongoOptions which don't make much sense to n00bs like me...
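In console terms, the counters pattern looks roughly like this (a sketch; the collection and sequence names are illustrative):

```javascript
// atomically increment and fetch the next value for the "shops" sequence
db.seq.findAndModify({
    query: { _id: "shops" },
    update: { $inc: { seq: 1 } },
    new: true,    // return the document after the increment
    upsert: true  // create the counter document if it doesn't exist yet
})
```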
It takes a while to get your head around. Since I haven't been able to find a good example anywhere online on how to do this using the Java driver, I'll post my code here. I hope it helps someone out there.
/**
 * Get the next unique ID for a named sequence.
 * @param db Mongo database to work with
 * @param seq_name The name of your sequence (I name mine after my collections)
 * @return The next ID
 */
public static String getNextId(DB db, String seq_name) {
    String sequence_collection = "seq"; // the name of the sequence collection
    String sequence_field = "seq"; // the name of the field which holds the sequence

    DBCollection seq = db.getCollection(sequence_collection); // get the collection (this will create it if needed)

    // this object represents your "query", it's analogous to a WHERE clause in SQL
    DBObject query = new BasicDBObject();
    query.put("_id", seq_name); // where _id = the input sequence name

    // this object represents the "update", or the SET blah=blah in SQL
    DBObject change = new BasicDBObject(sequence_field, 1);
    DBObject update = new BasicDBObject("$inc", change); // $inc is the mongodb command for increment

    // atomically update the sequence field and return the new value
    DBObject res = seq.findAndModify(query, new BasicDBObject(), new BasicDBObject(), false, update, true, true);

    return res.get(sequence_field).toString();
}
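Using it when inserting a new document would look something like this (a sketch; the "shops" collection name and shop name are just for illustration):

```java
// insert a shop with an auto-incremented _id from the "shops" sequence
DBObject shop = new BasicDBObject();
shop.put("_id", getNextId(db, "shops"));
shop.put("name", "My First Shop");
db.getCollection("shops").insert(shop);
```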