Write Templates Like A Node.js Pro: Handlebars Tutorial

I’ve written about how I struggled with Jade, but I had no choice except to master it. However, before beginning to understand Jade, I admired Handlebars greatly, mostly for its simplicity and similarity to plain HTML.

If you want to write templates for Node.js apps, then consider Handlebars. This short tutorial will get you started on the path of becoming a pro. And if you haven’t even heard about Handlebars, then you’re missing out big time!

Here’s the outline of this post:

  • Handlebars syntax
  • Handlebars standalone usage
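To give a taste of the standalone usage listed above, here is a minimal hedged sketch (it assumes the handlebars module is installed via npm install handlebars; the template string and data are made up for illustration):

var Handlebars = require('handlebars');

// compile a template string into a function, then render it with a data object
var template = Handlebars.compile('<h1>{{title}}</h1><p>{{body}}</p>');
var html = template({ title: 'Hello', body: 'Handlebars looks just like plain HTML.' });
console.log(html); // <h1>Hello</h1><p>Handlebars looks just like plain HTML.</p>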


NodeConf 2013

I’m just back from the NodeConf 2013 summer camp at Walker Creek Ranch in Petaluma, which is in Marin County, California, just a half-hour north of San Francisco.



Nested Objects in Mongoose


There is a certain magic in ORMs like Mongoose. I learned it the hard way (as usual!) when I was trying to iterate over a nested object’s properties. For example, here is a schema with a nested object features defined like this:

var Schema = require('mongoose').Schema;

var User = module.exports = new Schema({
  features: {
    realtime_updates: {
      type: Boolean
    },
    storylock: {
      type: Boolean
    },
    custom_embed_style: {
      type: Boolean
    },
    private_stories: {
      type: Boolean
    },
    headerless_embed: {
      type: Boolean
    }
  }
});

Let’s say I want to overwrite an object features_enabled with these properties:

var features_enabled = {}; // plain object to copy the feature flags into
if (this.features) {
  for (var k in this.features) {
    features_enabled[k] = this.features[k];
  }
}
console.log(features_enabled);
return features_enabled;

Not so fast: I was getting a lot of internal properties specific to Mongoose. Instead, we need to use toObject(), e.g.:

if (this.features) {
  var features = this.features.toObject(); // plain object, no Mongoose internals
  for (var k in features) {
    console.log('!', k);
    features_enabled[k] = features[k];
  }
}

Remember rule number one: the computer is always right. If we think it’s wrong — look up rule number one. :-)

Node.js OAuth1.0 and OAuth2.0: Twitter API v1.1 Examples

Recently we had to work on a modification to accommodate Twitter API v1.1. The main difference between Twitter API v1.1 and the soon-to-be-deprecated Twitter API v1.0 is that most of the REST API endpoints now require user or application context. In other words, each call needs to be performed via OAuth 1.0A or OAuth 2.0 authentication.


Node.js OAuth

At Storify we run everything on Node.js, so it was natural that we used the oauth module by Ciaran Jessup: NPM and GitHub. It’s mature and supports all the needed functionality, but it lacks examples and interface documentation.

Here are the examples of calling Twitter API v1.1, and a list of methods. I hope that nobody will have to dig through the oauth module source code anymore!

OAuth 1.0

Let’s start with good old OAuth 1.0A. You’ll need four values to make this type of request to Twitter API v1.1 (or any other service):

  1. Your Twitter application key, a.k.a., consumer key
  2. Your Twitter secret key
  3. User token for your app
  4. User secret for your app

All four of them can be obtained for your own apps at dev.twitter.com. In case the user is not yourself, you’ll need to perform 3-legged OAuth, use Sign in with Twitter, or something else.

Next we create an oauth object with these parameters and call the get() function to fetch a secured resource. Behind the scenes, the get() function constructs unique values for the request’s Authorization header. The method signs the URL, timestamp, application details and other information into a signature, so the same header won’t work for another URL or after a specific time window.

var OAuth = require('oauth');
var oauth = new OAuth.OAuth(
  'https://api.twitter.com/oauth/request_token',
  'https://api.twitter.com/oauth/access_token',
  'your Twitter application consumer key',
  'your Twitter application secret',
  '1.0A',
  null,
  'HMAC-SHA1'
);
oauth.get(
  'https://api.twitter.com/1.1/trends/place.json?id=23424977',
  'your user token for this app',
  //you can get it at dev.twitter.com for your own apps
  'your user secret for this app',
  //you can get it at dev.twitter.com for your own apps
  function (e, data, res){
    if (e) console.error(e);
    console.log(require('util').inspect(data));
  });

OAuth Echo

OAuth Echo is similar to OAuth 1.0. If you’re a Delegator (a service to which requests to the Service Provider are delegated by the Consumer), all you need to do is pass the value of the x-verify-credentials-authorization header to the Service Provider in the Authorization header. Twitter has a good diagram of OAuth Echo.
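On the Delegator side the pass-through can be just a few lines. This is a hedged sketch using Express and superagent (the route, port and response handling are hypothetical; the point is only the header forwarding):

var express = require('express');
var request = require('superagent');
var app = express();

app.post('/delegated/endpoint', function (req, res) {
  // the Consumer tells us which Service Provider endpoint to verify against
  var providerUrl = req.header('x-auth-service-provider');
  // and gives us a ready-made OAuth Echo Authorization header
  var echoAuth = req.header('x-verify-credentials-authorization');
  request
    .get(providerUrl)
    .set('Authorization', echoAuth) // pass it through as-is
    .end(function (providerRes) {
      // if the Service Provider accepts the credentials, continue with our own logic
      res.json(providerRes.body);
    });
});

app.listen(3000);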

There is an OAuthEcho object which inherits most of its methods from the normal OAuth class. If you want to write Consumer code (or functional tests; in our case Storify is the Delegator) and you need the x-verify-credentials-authorization/Authorization header values, there is an authHeader method. If we look at it, we can easily reconstruct the headers with internal methods of the oauth module such as _prepareParameters() and _buildAuthorizationHeaders(). Here is a function that will give us the required values based on a URL (remember that the URL is a part of the Authorization header):

function getEchoAuth(url) {
  //helper to construct echo/oauth headers from URL
  var oauth = new OAuth.OAuth(
    'https://api.twitter.com/oauth/request_token',
    'https://api.twitter.com/oauth/access_token',
    'AAAAAAAAAAAAAAAAAAAA', //test app token
    'BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB', //test app secret
    '1.0A',
    null,
    'HMAC-SHA1');
  var orderedParams = oauth._prepareParameters(
    '1111111111-AAAAAA', //test user token
    'AAAAAAAAAAAAAAAAAAAAAAA', //test user secret
    'GET',
    url
  );
  return oauth._buildAuthorizationHeaders(orderedParams);
}

From your Consumer code you can make a request with superagent or another HTTP client library (e.g., the Node.js core http module’s http.request):

var request = require('superagent');

request.post('your delegator api url')
  .send({...}) 	
  //your json data
  .set(
    'x-auth-service-provider',
    'https://api.twitter.com/1.1/account/verify_credentials.json')
  .set(
    'x-verify-credentials-authorization',
    getEchoAuth("https://api.twitter.com/1.1/account/verify_credentials.json"))
  .end(function(res){console.log(res.body)});

OAuth2

OAuth 2.0 is a breeze to use compared to the other authentication methods. Some argue that it’s not as secure, so make sure you use SSL and HTTPS for all requests.

var OAuth2 = OAuth.OAuth2;
var twitterConsumerKey = 'your key';
var twitterConsumerSecret = 'your secret';
var oauth2 = new OAuth2(
  twitterConsumerKey,
  twitterConsumerSecret,
  'https://api.twitter.com/',
  null,
  'oauth2/token',
  null);
oauth2.getOAuthAccessToken(
  '',
  {'grant_type': 'client_credentials'},
  function (e, access_token, refresh_token, results){
    console.log('bearer: ', access_token);
    oauth2.get('protected url',
      access_token, function (e, data, res) {
        if (e) return callback(e, null);
        if (res.statusCode !== 200)
          return callback(new Error(
            'OAuth2 request failed: ' +
            res.statusCode), null);
        try {
          data = JSON.parse(data);
        }
        catch (e) {
          return callback(e, null);
        }
        return callback(e, data);
      });
  });

Please note the JSON.parse() function: the oauth module returns a string, not a JavaScript object.

Consumers of OAuth 2.0 don’t need to fetch the bearer/access token for every request. It’s okay to do it once and save the value in the database. Therefore, we can make requests to protected resources (i.e., Twitter API v1.1) with only one secret value. For more information check out Twitter application-only auth.
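As a hedged sketch of that idea, here is one way to cache the bearer token in memory and reuse it. It builds on the oauth2 object from the example above; the search URL and the storage strategy are assumptions (a real app would likely persist the token in a database):

var cachedBearerToken = null; // in-memory cache; could be a database record instead

function withBearerToken(callback) {
  if (cachedBearerToken) return callback(null, cachedBearerToken);
  oauth2.getOAuthAccessToken('', {'grant_type': 'client_credentials'},
    function (e, accessToken) {
      if (e) return callback(e);
      cachedBearerToken = accessToken; // fetched once, reused afterwards
      callback(null, cachedBearerToken);
    });
}

// usage: every call reuses the same bearer token
withBearerToken(function (e, token) {
  if (e) return console.error(e);
  oauth2.get('https://api.twitter.com/1.1/search/tweets.json?q=nodejs', token,
    function (e, data) {
      if (e) return console.error(e);
      console.log(JSON.parse(data)); // the oauth module returns a string
    });
});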

Node.js oauth API

Node.js oauth OAuth

oauth.OAuth()

Parameters:

  • requestUrl
  • accessUrl
  • consumerKey
  • consumerSecret
  • version
  • authorize_callback
  • signatureMethod
  • nonceSize
  • customHeaders

Node.js oauth OAuthEcho

oauth.OAuthEcho()

Parameters:

  • realm
  • verify_credentials
  • consumerKey
  • consumerSecret
  • version
  • signatureMethod
  • nonceSize
  • customHeaders

OAuthEcho shares the same methods as OAuth.

Node.js oauth Methods

Secure HTTP request methods for OAuth and OAuthEcho classes:

OAuth.get()

Parameters:

  • url
  • oauth_token
  • oauth_token_secret
  • callback

OAuth.delete()

Parameters:

  • url
  • oauth_token
  • oauth_token_secret
  • callback

OAuth.put()

Parameters:

  • url
  • oauth_token
  • oauth_token_secret
  • post_body
  • post_content_type
  • callback

OAuth.post()

Parameters:

  • url
  • oauth_token
  • oauth_token_secret
  • post_body
  • post_content_type
  • callback
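To show how the parameters above map to a real call, here is a hedged example of posting a status update; it reuses the oauth object and placeholder credentials from the OAuth 1.0 example, and the endpoint requires an app with write access:

oauth.post(
  'https://api.twitter.com/1.1/statuses/update.json', // url
  'your user token for this app',                      // oauth_token
  'your user secret for this app',                     // oauth_token_secret
  { status: 'Hello from node-oauth!' },                // post_body
  'application/x-www-form-urlencoded',                 // post_content_type
  function (e, data, res) {                            // callback
    if (e) console.error(e);
    console.log(data);
  });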

https://github.com/ciaranj/node-oauth/blob/master/lib/oauth.js

Node.js oauth OAuth2

OAuth2 Class

OAuth2()

Parameters:

  • clientId
  • clientSecret
  • baseSite
  • authorizePath
  • accessTokenPath
  • customHeaders

OAuth2.getOAuthAccessToken()

Parameters:

  • code
  • params
  • callback

OAuth2.get()

Parameters:

  • url
  • access_token
  • callback

https://github.com/ciaranj/node-oauth/blob/master/lib/oauth2.js

The authors of node.js oauth did a great job, but currently there are 32 open pull requests (mine is one of them) and it makes me sad. Please let them know that we care about improving the Node.js ecosystem of modules and its developer community!

UPDATE: Pull request was successfully merged!

Useful Twitter API v1.1 Resources

Collected here because they are vast and not always easy to find.

Tools

Intro to Express.js: Simple REST API app with Monk and MongoDB


Why?

After looking at Google Analytics stats I’ve realized that there is a demand for short Node.js tutorials and quick start guides. This is an introduction to probably the most popular (as of April 2013) Node.js framework, Express.js.

Express.js — Node.js framework

mongoui

This app is the start of the mongoui project, a phpMyAdmin counterpart for MongoDB written in Node.js. The goal is to provide a module with a nice web admin user interface. It will be something like what Parse.com, Firebase.com, MongoHQ or MongoLab have, but without tying it to any particular service. Why do we have to type db.users.findOne({'_id':ObjectId('...')}) any time we want to look up user information? The alternative, the MongoHub Mac app, is nice (and free) but clunky to use and not web based.

REST API app with Express.js and Monk

Ruby enthusiasts like to compare Express to the Sinatra framework. It’s similarly flexible in how developers can build their apps. Application routes are set up in a similar manner, i.e., app.get('/products/:id', showProduct);. Currently Express.js is at version 3.1. In addition to Express we’ll use the Monk module.

We’ll use the Node Package Manager (npm), which usually comes with a Node.js installation. If you don’t have it already, you can get it at npmjs.org.

Create a new folder and NPM configuration file, package.json, in it with the following content:

{
  "name": "mongoui",
  "version": "0.0.1",
  "engines": {
    "node": ">= v0.6"
  },
  "dependencies": {
    "mongodb":"1.2.14",
    "monk": "0.7.1",
    "express": "3.1.0"
  }
}

Now run npm install to download and install the modules into the node_modules folder. If everything went okay you’ll see a bunch of folders inside node_modules. All the code for our application will be in one file, index.js, to keep it simple:

var mongo = require('mongodb');
var express = require('express');
var monk = require('monk');
var db =  monk('localhost:27017/test');
var app = new express();

app.use(express.static(__dirname + '/public'));
app.get('/',function(req,res){
  db.driver.admin.listDatabases(function(e,dbs){
      res.json(dbs);
  });
});
app.get('/collections',function(req,res){
  db.driver.collectionNames(function(e,names){
    res.json(names);
  })
});
app.get('/collections/:name',function(req,res){
  var collection = db.get(req.params.name);
  collection.find({},{limit:20},function(e,docs){
    res.json(docs);
  })
});
app.listen(3000)

Let’s break down the code piece by piece. Module declaration:

var mongo = require('mongodb');
var express = require('express');
var monk = require('monk');

Database and Express application instantiation:

var db =  monk('localhost:27017/test');
var app = new express();

Tell the Express application to load and serve static files (if there are any) from the public folder:

app.use(express.static(__dirname + '/public'));

Home page, a.k.a. root route, set up:

app.get('/',function(req,res){
  db.driver.admin.listDatabases(function(e,dbs){
      res.json(dbs);
  });
});

The get() function takes just two parameters: a string and a function. The string can have slashes and colons, for example product/:id. The function must have two parameters, request and response. The request has all the information such as query string parameters, session and headers, and the response is an object with which we output the results; in this case we do it by calling the res.json() function. db.driver.admin.listDatabases(), as you might guess, gives us a list of databases in an async manner.

Two other routes are set up in a similar manner with get() function:

app.get('/collections',function(req,res){
  db.driver.collectionNames(function(e,names){
    res.json(names);
  })
});
app.get('/collections/:name',function(req,res){
  var collection = db.get(req.params.name);
  collection.find({},{limit:20},function(e,docs){
    res.json(docs);
  })
});

Express conveniently supports other HTTP verbs like POST and PUT. To set up a POST route we write this:

app.post('product/:id',function(req,res) {...});

Express also has support for middleware. Middleware is just a request handler function with three parameters: request, response, and next. For example:

app.post('product/:id', authenticateUser, validateProduct, addProduct);

function authenticateUser(req,res, next) {
  //check req.session for authentication
  next();
}

function validateProduct (req, res, next) {
   //validate submitted data
   next();
}

function addProduct (req, res) {
  //save data to database
}

authenticateUser and validateProduct are middleware. They are usually put into a separate file (or files) in big projects.

Another way to set up middleware in an Express application is to use the use() function. For example, earlier we did this for static assets:

app.use(express.static(__dirname + '/public'));

We can also do it for error handlers:

app.use(errorHandler);
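For completeness, this is a hedged sketch of what such an errorHandler might look like (the status code and response shape are arbitrary); Express recognizes error-handling middleware by its four arguments:

function errorHandler(err, req, res, next) {
  console.error(err.stack);              // log the error for debugging
  res.json(500, { error: err.message }); // Express 3.x style: status first, then body
}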

Assuming you have MongoDB installed, this app will connect to it (localhost:27017) and display collection names and items in the collections. To start the mongo server:

$ mongod

To run the app (keep the mongod terminal window open):

$ node .

or

$ node index.js

To see the app working, open http://localhost:3000 in Chrome with JSONViewer extension (to render JSON nicely).

Tom Hanks’ The Polar Express

Test-Driven Development in Node.js With Mocha

Don’t waste time writing tests for throwaway scripts, but please adopt the habit of Test-Driven Development for the main code base. With a little time spent in the beginning, you and your team will save time later and have confidence when rolling out new releases. Test-Driven Development is a really, really good thing.

Who needs Test-Driven Development?

Imagine that you need to implement a complex feature on top of an existing interface, e.g., a ‘like’ button on a comment. Without tests you’ll have to manually create a user, log in, create a post, create a different user, log in with the different user and like the post. Tiresome? What if you need to do it 10 or 20 times to find and fix some nasty bug? And what if your feature breaks existing functionality, but you notice it 6 months after the release because there was no test?

Mocha: simple, flexible, fun


Quick Start Guide

Follow this quick guide to set up your Test-Driven Development process in Node.js with Mocha.

Install Mocha globally by executing this command:

$ sudo npm install -g mocha

We’ll also use two libraries, Superagent and Expect.js by LearnBoost. To install them, run these npm commands in your project folder:

$ npm install superagent
$ npm install expect.js   

Open a new file with .js extension and type:

var request = require('superagent');
var expect = require('expect.js');

So far we’ve included the two libraries. The structure of the test suite is going to look like this (note that each it() case gets a short title):

describe('Suite one', function(){
  it('test one', function(done){
  ...
  });
  it('test two', function(done){
  ...
  });
});
describe('Suite two', function(){
  it('test one', function(done){
  ...
  });
});

Inside of this closure we can write a request to our server, which should be running at localhost:8080:

...
it('responds to POST', function(done){
  request.post('localhost:8080').end(function(res){
    //TODO check that response is okay
  });
});
...

Expect will give us handy functions to check any condition we can think of:

...
expect(res).to.exist;
expect(res.status).to.equal(200);
expect(res.body).to.contain('world');
...

Lastly, we need to add a done() call to notify Mocha that the asynchronous test has finished its work. The full code of our first test looks like this:

var request = require('superagent');
var expect = require('expect.js');
  
describe('Suite one', function(){
 it('responds to POST', function(done){
   request.post('localhost:8080').end(function(res){
    expect(res).to.exist;
    expect(res.status).to.equal(200);
    expect(res.body).to.contain('world');
    done();
   });
  });
});
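For this test to pass, something must be listening on port 8080 and answering POST / with a body that contains 'world'. Here is a minimal hedged sketch of such a server using Express (any server and response shape that satisfies the assertions would do):

var express = require('express');
var app = express();

app.post('/', function (req, res) {
  // respond with 200 and a JSON array so that res.body contains 'world'
  res.json(['hello', 'world']);
});

app.listen(8080);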

If we want to get fancy, we can add before and beforeEach hooks which will, according to their names, execute once before the test (or suite) or each time before the test (or suite):

before(function(){
  //TODO seed the database
});
describe('suite one ',function(){
  beforeEach(function(){
    //todo log in test user
  });
  it('test one', function(done){
  ...
  });
});

Note that before and beforeEach can be placed inside or outside of the describe block.

To run our test simply execute:

$ mocha test.js

To use a different reporter:

$ mocha test.js -R list
$ mocha test.js -R spec

Asynchronicity in Node.js


Non-Blocking I/O

One of the biggest advantages of using Node.js over Python or Ruby is that Node has a non-blocking I/O mechanism. To illustrate this, let me use an example of a line in a Starbucks coffee shop. Let’s pretend that each person standing in line for a drink is a task, and everything behind the counter — cashier, register, barista — is a server or server application. When we order a cup of regular drip coffee, like Pike, or hot tea, like Earl Grey, the barista makes it. The whole line waits while that drink is made, and the person is charged the appropriate amount.

Asynchronicity in Node.js

Of course, we know that these kinds of drinks are easy to make; just pour the liquid and it’s done. But what about those fancy choco-mocha-frappe-latte-soy-decafs? What if everybody in line decides to order these time-consuming drinks? The line will be held up by each order, and it will grow longer and longer. The manager of the coffee shop will have to add more registers and put more baristas to work (or even stand behind the register him/herself). This is not good, right? But this is how virtually all server-side technologies work, except Node. Node is like a real Starbucks. When you order something, the barista yells the order to the other employee, and you leave the register. Another person gives their order while you wait for your state-of-the-art eye-opener in a paper cup. The line moves, the processes are executed asynchronously and without blocking the queue by waiting.

This is why Node.js blows everything else away (except maybe low-level C/C++) in terms of performance and scalability. With Node, you just don’t need that many CPUs and servers to handle the load.

Asynchronous Way of Coding

Asynchronicity requires a different way of thinking for programmers familiar with Python, PHP, C or Ruby. It’s easy to introduce a bug unintentionally by forgetting to end the execution of the code with a proper return expression.

Here is a simple example illustrating this scenario:

var test = function (callback) {
  return callback();  
  console.log('test') //shouldn't be printed
}

var test2 = function(callback){
  callback();
  console.log('test2') //printed 3rd
}

test(function(){
  console.log('callback1') //printed first
  test2(function(){
  console.log('callback2') //printed 2nd
  })
});

Because test() uses return callback(), the string test is not printed; test2() just calls callback() without a return, so the string test2 is still printed:

callback1
callback2
test2

For fun I’ve added a setTimeout() delay for the callback2 string, and now the order has changed:

var test = function (callback) {
  return callback();  
  console.log('test') //shouldn't be printed
}

var test2 = function(callback){
  callback();
  console.log('test2') //printed 2nd
}

test(function(){
  console.log('callback1') //printed first
  test2(function(){
    setTimeout(function(){
      console.log('callback2') //printed 3rd
    },100)
  })
});

Prints:

callback1
test2
callback2

The last example illustrates that the two functions are independent of each other and run in parallel. The faster function will finish sooner than the slower one. Going back to our Starbucks examples, you might get your drink faster than the other person who was in front of you in the line. Better for people, and better for programs! :-)

Decreasing 64-bit Tweet ID in JavaScript

JavaScript is only able to handle integers up to 53 bits in size. Here is a script to decrease a tweet ID, which is a 64-bit number, in JavaScript without libraries or recursion, for use with max_id or since_id in the Twitter API.

As some of you might know, JavaScript is only able to handle integers up to 53 bits in size. The post Working with large integers in JavaScript (which is a part of the Numbers series) does a great job of explaining general concepts of dealing with large numbers in JS.

64-bit Tweet ID is “rounded” in JS

I had to do some research on the topic when I was re-writing some JavaScript code responsible for handling Twitter search in the Storify editor: we had tweet duplicates in the results! In the article Working with Timelines, the official Twitter documentation says:

Environments where a Tweet ID cannot be represented as an integer with 64 bits of precision (such as JavaScript) should skip this step.

So true, because the id and id_str fields in a Twitter API response were different. Apparently, the JavaScript engine just “rounds” inappropriately large numbers. :-( The task was complicated by the fact that I needed to subtract 1 from the last tweet’s ID to prevent its reappearance in the second search response. After the subtraction I could easily pass the value to the max_id parameter of the Twitter API.
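A quick way to see the rounding, using the same IDs as in the test logs further below (values above 2^53 collapse onto the nearest representable double):

// two different 64-bit tweet IDs become the same JavaScript number
console.log(290904187124985850 === 290904187124985851); // true -- precision is lost
// as strings they keep every digit, which is why Twitter provides id_str
console.log('290904187124985850' === '290904187124985851'); // false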

I’ve come across different solutions, but decided to write my own function which is simple to understand and not heavy on resources. Here is a script to decrease tweet ID which is a 64-bit number in JavaScript without libraries or recursion, to use with max_id or since_id in Twitter API:

function decStrNum (n) {
    n = n.toString();
    var result=n;
    var i=n.length-1;
    while (i>-1) {
      if (n[i]==="0") {
        result=result.substring(0,i)+"9"+result.substring(i+1);
        i --;
      }
      else {
        result=result.substring(0,i)+(parseInt(n[i],10)-1).toString()+result.substring(i+1);
        return result;
      }
    }
    return result;
}

To check if it works, you can run these logs:

console.log("290904187124985850");
console.log(decStrNum("290904187124985850"));
console.log("290904187124985851");
console.log(decStrNum("290904187124985851"));
console.log("290904187124985800");
console.log(decStrNum("290904187124985800"));
console.log("000000000000000001");
console.log(decStrNum("000000000000000001"));

An alternative solution, which I found in a StackOverflow question, was suggested by Bob Lauer; it involves recursion and IMHO is more complicated:

function decrementHugeNumberBy1(n) {
    // make sure s is a string, as we can't do math on numbers over a certain size
    n = n.toString();
    var allButLast = n.substr(0, n.length - 1);
    var lastNumber = n.substr(n.length - 1);

    if (lastNumber === "0") {
        return decrementHugeNumberBy1(allButLast) + "9";
    }
    else {      
        var finalResult = allButLast + (parseInt(lastNumber, 10) - 1).toString();
        return trimLeft(finalResult, "0");
    }
}

function trimLeft(s, c) {
    var i = 0;
    while (i < s.length && s[i] === c) {
        i++;
    }

    return s.substring(i);
}

Now, if you’re the type of person who likes to shoot sparrows with a howitzer, there are full-blown libraries to handle operations on large numbers in JavaScript; just to name a few: BigInteger, js-numbers and javascript-bignum.

MongoDB migration with Node and Monk


Recently one of our top users complained that their Storify account was inaccessible. We checked the production database, and it appeared that the account might have been compromised and maliciously deleted by somebody using the user’s account credentials. Thanks to the great MongoHQ service, we had a backup database in less than 15 minutes.
There were two options to proceed with the migration:

  1. Mongo shell script
  2. Node.js program

Because Storify user account deletion involves deletion of all related objects — identities, relationships (followers, subscriptions), likes, stories — we’ve decided to proceed with the latter option. It worked perfectly, and here is a simplified version which you can use as a boilerplate for MongoDB migration (also at gist.github.com/4516139).

Restoring MongoDB Records

Let’s load all the modules we need: Monk, Progress, Async, and MongoDB:

var async = require('async');
var ProgressBar = require('progress');
var monk = require('monk');
var ObjectId=require('mongodb').ObjectID;

By the way, Monk, made by LearnBoost, is a tiny layer that provides simple yet substantial usability improvements for MongoDB usage within Node.js.

Monk takes a connection string in the following format:

username:password@dbhost:port/database

So we can create the following objects:

var dest = monk('localhost:27017/storify_localhost');
var backup = monk('localhost:27017/storify_backup');

We need to know the object ID which we want to restore:

var userId = ObjectId(YOUR-OBJECT-ID); 

This is a handy restore function which we can reuse to restore objects from related collections by specifying a query (for more on MongoDB queries, see the post Querying 20M-Record MongoDB Collection). To call it, just pass the name of the collection as a string, e.g., "stories", and a query which associates objects from this collection with your main object, e.g., {userId: user.id}. The progress bar is there to show us nice visuals in the terminal.

var restore = function(collection, query, callback){
  console.info('restoring from ' + collection);
  var q = query;
  backup.get(collection).count(q, function(e, n) {
    console.log('found '+n+' '+collection);
    if (e) console.error(e);
    var bar = new ProgressBar('[:bar] :current/:total :percent :etas', { total: n-1, width: 40 })
    var tick = function(e) {
      if (e) {
        console.error(e);
        bar.tick();
      }
      else {
        bar.tick();
      }
      if (bar.complete) {
        console.log();
        console.log('restoring '+collection+' is completed');
        callback();                
      }
    };
    if (n>0){
      console.log('adding '+ n+ ' '+collection);
      backup.get(collection).find(q, { stream: true }).each(function(element) {
        dest.get(collection).insert(element, tick);
      });        
    } else {
      callback();
    }
  });
}

Now we can use async to call the restore function mentioned above:

async.series({
  restoreUser: function(callback){   // import user element
    backup.get('users').find({_id:userId}, { stream: true, limit: 1 }).each(function(user) {
      dest.get('users').insert(user, function(e){
        if (e) {
          console.log(e);
        }
        else {
          console.log('restored user: '+ user.username);
        }
        callback();
      });
    });
  },

  restoreIdentity: function(callback){  
    restore('identities',{
      userid:userId
    }, callback);
  },

  restoreStories: function(callback){
    restore('stories', {authorid:userId}, callback);
  }

  }, function(e) {
  console.log();
  console.log('restoring is completed!');
  process.exit(1);
});

The full code is available at gist.github.com/4516139 and here:

var async = require('async');
var ProgressBar = require('progress');
var monk = require('monk');
var ms = require('ms');
var ObjectId=require('mongodb').ObjectID;

var dest = monk('localhost:27017/storify_localhost');
var backup = monk('localhost:27017/storify_backup');

var userId = ObjectId(YOUR-OBJECT-ID); // monk should have auto casting but we need it for queries

var restore = function(collection, query, callback){
  console.info('restoring from ' + collection);
  var q = query;
  backup.get(collection).count(q, function(e, n) {
    console.log('found '+n+' '+collection);
    if (e) console.error(e);
    var bar = new ProgressBar('[:bar] :current/:total :percent :etas', { total: n-1, width: 40 })
    var tick = function(e) {
      if (e) {
        console.error(e);
        bar.tick();
      }
      else {
        bar.tick();
      }
      if (bar.complete) {
        console.log();
        console.log('restoring '+collection+' is completed');
        callback();                
      }
    };
    if (n>0){
      console.log('adding '+ n+ ' '+collection);
      backup.get(collection).find(q, { stream: true }).each(function(element) {
        dest.get(collection).insert(element, tick);
      });        
    } else {
      callback();
    }
  });
}

async.series({
  restoreUser: function(callback){   // import user element
    backup.get('users').find({_id:userId}, { stream: true, limit: 1 }).each(function(user) {
      dest.get('users').insert(user, function(e){
        if (e) {
          console.log(e);
        }
        else {
          console.log('restored user: '+ user.username);
        }
        callback();
      });
    });
  },

  restoreIdentity: function(callback){  
    restore('identities',{
      userid:userId
    }, callback);
  },

  restoreStories: function(callback){
    restore('stories', {authorid:userId}, callback);
  }

  }, function(e) {
  console.log();
  console.log('restoring is completed!');
  process.exit(1);
});
           

To launch it, run npm install/update and change hard-coded database values.

Sample of Rapid Prototyping with JS


Rapid Prototyping with JS is a hands-on book which introduces you to rapid software prototyping using the latest cutting-edge web and mobile technologies including NodeJS, MongoDB, BackboneJS, Twitter Bootstrap, LESS, jQuery, Parse.com, Heroku and others.

Rapid Prototyping with JS

Here is a free sample, first chapter — Introduction, of Rapid Prototyping with JS. You can also get a free PDF from LeanPub and explore code examples at github.com/azat-co/rpjs. To buy a full version in PDF, Mobi/Kindle and ePub/iPad formats go to leanpub.com/rapid-prototyping-with-js.

Introduction

Rapid Prototyping with JS is a hands-on book which introduces you to rapid software prototyping using the latest cutting-edge web and mobile technologies including Node.js, MongoDB, Twitter Bootstrap, LESS, jQuery, Parse.com, Heroku and others.

Who This Book is For

The book is designed for advanced-beginner and intermediate-level web and mobile developers: somebody who has just started programming, as well as somebody who is an expert in other languages like Ruby on Rails, PHP or Java and wants to learn JavaScript and Node.js.

Rapid Prototyping with JS, as you can tell from the name, is about taking your idea to a functional prototype in the form of a web or a mobile application as fast as possible. This thinking adheres to the Lean Startup methodology; therefore, this book would be more valuable to startup founders, but big companies’ employees might also find it useful, especially if they plan to add new skills to their resume.

Prerequisite

Mac OS X or UNIX/Linux systems are highly recommended for this book’s examples and for web development in general, although it’s still possible to hack your way on a Windows-based system.

Some cloud services require users’ credit/debit card information even for free accounts.

What to Expect

Expect a lot of coding and not much theory. All the theory we cover is directly related to practical aspects and is essential for a better understanding of the technologies and the specific approaches to dealing with them, e.g., JSONP and cross-domain calls.

In addition to coding examples, the book covers virtually all setup and deployment step-by-step.

You’ll learn by building Message Board web/mobile applications, starting with the front-end components. There are a few versions of these applications, but by the end we’ll put the front end and back end together and deploy to a production environment. The Message Board application contains all the components typical for a basic web app, and it will give you enough confidence to continue developing on your own, apply for a job/promotion or build a startup!

This is a digital version of the book, so most of the links are hidden just like on any other web page, e.g., jQuery instead of http://jquery.com. The content of the book has local hyperlinks which allow you to jump to any section.

All the source code for examples used in this book is available in the book as well as in a public GitHub repository github.com/azat-co/rpjs. You can also download files as a ZIP archive or use Git to pull them. More on how to install and use Git will be covered later in the book. The source code files, folder structure and deployment files are supposed to work locally and/or remotely on PaaS solutions, i.e., Windows Azure and Heroku, with minor or no modifications.

Notation

This is what source code blocks look like:

var object = {};
object.name = "Bob";

Terminal commands have a similar look but start with a dollar sign, $:

$ git push origin heroku
$ cd /etc/
$ ls 

Inline filenames, path/folder names, quotes and special words/names are italicized while command names, e.g., mongod, and emphasized words, e.g., Note, are bold.

Web Basics

Overview

The bigger picture of web and mobile application development consists of the following steps:

  1. User types a URL or follows a link in her browser (aka client);
  2. Browser makes HTTP request to the server;
  3. Server processes the request, and if there’re any parameters in a query string and/or body of the request takes them into account;
  4. Server updates/gets/transforms data in the database;
  5. Server responds with HTTP response containing data in HTML, JSON or other formats;
  6. Browser receives HTTP response;
  7. Browser renders HTTP response to the user in HTML or any other format, e.g., JPEG, XML, JSON.

Mobile applications act in the same manner as regular websites, only instead of a browser there might be a native app. Other minor differences include: data transfer limitation due to carrier bandwidth, smaller screens, and the more efficient use of the local storage.

There are a few approaches to mobile development, each with its own advantages and disadvantages:

  • Native iOS, Android and Blackberry apps built with Objective-C and Java;
  • Native apps built with JavaScript in Appcelerator and then compiled into native Objective-C or Java;
  • Mobile websites tailored for smaller screens with responsive design, CSS frameworks like Twitter Bootstrap or Foundation, regular CSS or different templates;
  • HTML5 apps which consist of HTML, CSS and JavaScript, and are usually built with frameworks like Sencha Touch, Trigger.io or JO, and then wrapped into a native app with PhoneGap.

Hyper Text Markup Language

Hyper Text Markup Language, or HTML, is not a programming language in itself. It is a set of markup tags which describe the content and present it in a structured and formatted way. HTML tags consist of a tag name inside of angle brackets (<>). In most cases tags surround the content, with the end tag having a forward slash before the tag name.

In this example each line is an HTML element:

<h2>Overview of HTML</h2>
<div>HTML is a ...</div>
<link rel="stylesheet" type="text/css" href="style.css" />

The HTML document itself is an element of the html tag, and all other elements are children of that html tag:

<!DOCTYPE html>
<html lang="en">
  <head>
    <link rel="stylesheet" type="text/css" href="style.css" />
  </head>
  <body>
    <h2>Overview of HTML</h2>
    <p>HTML is a ...</p>
  </body>
</html>

There are different flavors and versions of HTML, e.g., DHTML, XHTML 1.0, XHTML 1.1, XHTML 2, HTML 4, HTML 5. This article does a good job of explaining the differences — Misunderstanding Markup: XHTML 2/HTML 5 Comic Strip.

More information is available at Wikipedia and w3schools.

Cascading Style Sheets

Cascading Style Sheets, or CSS, is a way to format and present content. An HTML document can include several stylesheets with the link tag, as in previous examples, or with a style tag:

<style>
  body {
  padding-top: 60px; /* 60px to make some space */
  }
</style>

Each HTML element can have id and class attributes:

<div id="main" class="large">Lorem ipsum dolor sit amet,  Duis sit amet neque eu.</div>

In CSS we access elements by their id, class, tag name and, in some edge cases, by parent-child relationships or element attribute values:

p {
  color:#999999;
}
div#main {
  padding-bottom:2em;
  padding-top:3em;
}
.large {
  font-size:14pt;
}
body > div {
  display:none;         
}
input[name="email"] {
  width:150px;
}

More information for further reading is available at Wikipedia and w3schools.

CSS3 is an upgrade to CSS which includes new ways of doing things such as rounded corners, borders and gradients, which were possible in regular CSS only with the help of PNG/GIF images and by using other tricks.

For more information refer to CSS3.info, w3schools and the CSS3 vs CSS comparison article on Smashing.

JavaScript

JavaScript was started in 1995 at Netscape as LiveScript. It has the same relationship with Java as a hamster has with ham. :)
It is used for both client- and server-side development, as well as in desktop applications.

There is a script tag to use JavaScript in the HTML document:

<script type="text/javascript" language="javascript">
  alert("Hello world!");
  //simple alert dialog window
</script>

Usually it’s a good idea to separate JavaScript code from HTML; in this example we include the app.js file:

<script src="js/app.js" type="text/javascript" language="javascript"></script>

Here are the main types of JavaScript objects/classes:

  • Array object, e.g., var arr = ["apple", "orange", "kiwi"];
  • Boolean primitive object, e.g., var bool = true;
  • Date object, e.g., var d = new Date();
  • Math object, e.g., var x = Math.floor(3.4890);
  • Number primitive object, e.g., var num = 1;
  • String primitive object, e.g., var str = "some string";
  • RegExp object, e.g., var pattern = /[A-Z]+/;
  • Global properties and functions, e.g., NaN
  • Browser objects, e.g., window.location = 'http://google.com';
  • DOM objects, e.g., var table = document.createElement('table');

A full JavaScript and DOM objects and classes reference with examples is available at w3schools.

Typical syntax for function declaration:

function Sum(a,b) {
  var sum = a+b;
  return sum;
}
console.log(Sum(1,2));

Functions in JavaScript are first-class citizens due to the functional programming nature of the language. Therefore, functions can be used like any other variables/objects; for example, functions can be passed to other functions as arguments:

var f = function (str1){
  return function(str2){
    return str1 + ' ' + str2;
  };
};
var a = f('hello');
var b = f('goodbye');
console.log(a('Catty'));
console.log(b('Doggy'));

JavaScript has loose/weak typing, as opposed to the strong typing in languages like C and Java, which makes JavaScript a better programming language for prototyping.
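A tiny illustration of what loose typing means in practice (the variable name is arbitrary):

var answer = 42;        // a number
answer = 'forty-two';   // now a string -- no type declarations, no compiler errors
console.log(1 + '1');   // '11' -- values are coerced at runtime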

More information about browser-run JavaScript is available at Wikipedia and w3schools.

Agile Methodologies

Agile software development methodology evolved due to the fact that traditional methods, like Waterfall, weren’t good enough in situations of high unpredictability, i.e., when the solution is unknown. Agile methodology includes Scrum/Sprint, Test-Driven Development, Continuous Deployment, Paired Programming and other practical techniques many of which were borrowed from Extreme Programming.

Scrum

In regard to the management, Agile methodology uses Scrum approach. More about Scrum can be read at:

The Scrum methodology is a sequence of short cycles, and each cycle is called a sprint. One sprint usually lasts from one to two weeks. A sprint starts and ends with a sprint planning meeting where new tasks can be assigned to team members. New tasks cannot be added to a sprint in progress; they can only be added at the sprint meetings.

An essential part of the Scrum methodology is the daily scrum meeting, hence the name. Each scrum is a 5–15 minute meeting, often conducted in the hallways. At scrum meetings each team member answers three questions:

  1. What have you done since yesterday?
  2. What are you going to do today?
  3. Do you need anything from other team members?

Flexibility makes Agile an improvement over Waterfall methodology, especially in situations of high uncertainty, i.e., startups.

Advantage of the Scrum methodology: it is effective where it is hard to plan ahead of time, and also in situations where a feedback loop is used as the main decision-making authority.

Test-Driven Development

Test-Driven Development, or TDD, consists of the following steps:

  1. Write failing automated test cases for new feature/task or enhancement by using assertions that are either true or false.
  2. Write code to successfully pass the test cases.
  3. Refactor code if needed, and add functionality while keeping the test cases passed.
  4. Repeat until the task is complete.

Advantages of Test-Driven Development:

  • fewer bugs/defects,
  • more efficient codebase,
  • provides programmers with confidence that code works and doesn’t break old functionality.

Continuous Deployment

Continuous Deployment, or CD, is a set of techniques to rapidly deliver new features, bug fixes, and enhancements to customers. CD includes automated testing and automated deployment. By utilizing Continuous Deployment the manual overhead is decreased and the feedback loop time is minimized. Basically, the faster a developer can get feedback from the customers, the sooner the product can pivot, which leads to more advantages over the competition. Many startups deploy multiple times in a single day, in comparison to the 6–12 month release cycle which is still typical for corporations and big companies.

One of the most popular solutions for CD is Continuous Integration server Jenkins.

Advantages of Continuous Deployment approach: decreases feedback loop time and manual labor overhead.

Pair Programming

Pair Programming is a technique where two developers work together on one machine. One of the developers is the driver and the other is the observer. The driver writes the code and the observer watches, assists, and makes suggestions. Then they switch roles. The driver has a more tactical role of focusing on the current task. In contrast, the observer has a more strategic role, overseeing “the bigger picture” and looking for ways to improve the codebase and make it more efficient.

Advantages of Paired Programming:

  • Pairing contributes to a shorter and more efficient codebase, and introduces fewer bugs and defects.
  • As an added bonus, knowledge is passed along between programmers as they work together. However, situations of conflict between developers are possible.

Node.js

Node.js is an event-driven, asynchronous-I/O server-side technology for building scalable and efficient web servers. Node.js is built on Google’s V8 JavaScript engine.

The purpose and use of Node.js is similar to Twisted for Python and EventMachine for Ruby. The JavaScript implementation of Node was the third one after attempts at using the Ruby and C++ programming languages.

Node.js is not in itself a framework like Ruby on Rails; it’s more comparable to the pair of PHP+Apache. Here are some Node.js frameworks: Express, Meteor, Tower.js, RailwayJS, Geddy, Derby.

Advantages of using Node.js:

  • Developers are likely to be familiar with JavaScript due to its status as the de facto standard of application development for web and mobile.
  • Using one language for front-end and back-end development speeds up the coding process. A developer’s brain doesn’t have to switch between different syntaxes, and learning methods and classes goes faster.
  • With Node.js, you can prototype quickly and go to market to do your customer development and customer acquisition early. This is an important competitive advantage over companies which use less agile technologies, e.g., PHP and MySQL.
  • Node.js is built to support real-time applications by utilizing WebSockets.

For more information go to Wikipedia, Nodejs.org, and articles on ReadWrite and O’Reilly.

NoSQL and MongoDB

MongoDB, from huMONGOus, is a high-performance non-relational database for huge amounts of data. The NoSQL concept appeared when traditional Relational Database Management Systems, or RDBMSs, were unable to meet the challenges of huge amounts of data.

Advantages of using MongoDB:

  • Scalable due to distributed nature: multiple servers and data centers could have redundant data.
  • High-performance: MongoDB is very effective for storing and retrieving data, not the relationship between elements.
  • Key-value store is ideal for prototyping because it doesn’t require one to know the schema and there is no need for a fixed data model.

Cloud Computing

Cloud computing consists of:

  • Infrastructure as a Service (IaaS), e.g., Rackspace, Amazon Web Services;
  • Platform as a Service (PaaS), e.g., Heroku, Windows Azure;
  • Software as a Service (SaaS), e.g., Google Apps, Salesforce.com.

Cloud application platforms provide:

  • scalability, e.g., spawn new instances in a matter of minutes;
  • easy deployment, e.g., to push to Heroku you can just use $ git push;
  • pay-as-you-go plan: add or remove memory and disk space based on demands;
  • usually there is no need to install and configure databases, app servers, packages, etc.;
  • security and support.

PaaS are ideal for prototyping, building minimal viable products (MVP) and for early stage startups in general.

Here is the list of most popular PaaS solutions:

HTTP Requests and Responses

Each HTTP Request and Response consists of the following components:

  1. Header: information about encoding, length of the body, origin, content type, etc.;
  2. Body: content, usually parameters or data which is passed to the server or sent back to a client;

In addition, HTTP Request contains:

  • Method: There are several methods; the most common are GET, POST, PUT, DELETE.
  • URL: host, port, path;
  • Query string, i.e., everything after a question mark in the URL.
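As a hedged sketch of how these pieces map to code, here is a plain GET request with Node’s core http module (the host, path and query string are made up; the callback prints the response status, headers and body):

var http = require('http');

http.get({ host: 'www.example.com', port: 80, path: '/messages.json?limit=10' },
  function (res) {
    console.log(res.statusCode); // e.g., 200
    console.log(res.headers);    // header: content type, length, etc.
    var body = '';
    res.on('data', function (chunk) { body += chunk; }); // body arrives in chunks
    res.on('end', function () { console.log(body); });
  });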

RESTful API

RESTful (REpresentational State Transfer) API became popular due to the demand in distributed systems where each transaction needs to include enough information about the state of the client. In a sense this standard is stateless because no information about the clients’ state is stored on the server, thus making it possible for each request to be served by a different system.

Distinct characteristics of RESTful API:

  • Has better scalability support due to the fact that different components can be independently deployed to different servers;
  • Replaced Simple Object Access Protocol (SOAP) because of the simpler verb and noun structure;
  • Utilizes HTTP methods: GET, POST, DELETE, PUT, OPTIONS etc.

Here is an example of simple Create, Read, Update and Delete (CRUD) REST API for Message Collection:

Method URL Meaning
GET /messages.json Return list of messages in JSON format
PUT /messages.json Update/replace all messages and return status/error in JSON
POST /messages.json Create new message and return its id in JSON format
GET /messages/{id}.json Return message with id {id} in JSON format
PUT /messages/{id}.json Update/replace message with id {id}; if message {id} doesn’t exist, create it
DELETE /messages/{id}.json Delete message with id {id}, return status/error in JSON format
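To make the table concrete, here is a hedged sketch of these routes in Express (the handler bodies are placeholders; a real app would read from and write to a database):

var express = require('express');
var app = express();

app.get('/messages.json', function (req, res) { /* return the list of messages */ });
app.put('/messages.json', function (req, res) { /* update/replace all messages */ });
app.post('/messages.json', function (req, res) { /* create a message, return its id */ });
app.get('/messages/:id.json', function (req, res) { /* return message req.params.id */ });
app.put('/messages/:id.json', function (req, res) { /* update or create message req.params.id */ });
app.delete('/messages/:id.json', function (req, res) { /* delete message req.params.id */ });

app.listen(3000);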

REST is not a protocol; it is an architecture in the sense that it’s more flexible than SOAP, which is a protocol. Therefore, REST API URLs could look like /messages/list.html or /messages/list.xml in case we want to support these formats.

PUT and DELETE are idempotent methods, which means that if the server receives two or more similar requests, the end result will be the same.

GET is nullipotent and POST is not idempotent and might affect state and cause side-effects.

Further reading on REST API at Wikipedia and A Brief Introduction to REST article.

Wintersmith — Node.js static site generator

This past weekend was a very productive one for me, because I started to work on, and released, my book’s one-page website — rapidprototypingwithjs.com. I used Wintersmith to learn something new and to ship fast. Wintersmith is a Node.js static site generator. It greatly impressed me with its flexibility and ease of development. In addition, I could stick to my favorite tools such as Markdown, Jade and Underscore.


Wintersmith is a Node.js static site generator

Why Static Site Generators

Here is a good article on why using a static site generator is a good idea in general, An Introduction to Static Site Generators. It basically boils down to a few main things:

Templates

You can use a template engine such as Jade. Jade uses whitespace to structure nested elements, and its syntax is similar to Ruby on Rails’ Haml markup.

Markdown

I’ve copied Markdown text from my book’s Introduction chapter and used it without any modifications. Wintersmith comes with the marked parser by default. More on why Markdown is great is in my old post, Markdown Goodness.

Simple Deployment

Everything is HTML, CSS and JavaScript, so you just upload the files with an FTP client, e.g., Transmit by Panic or Cyberduck.

Basic Hosting

Because any static web server will work well, there is no need for Heroku or Nodejitsu PaaS solutions, or even PHP/MySQL hosting.

Performance

There are no database calls, no server-side API calls, no CPU/RAM overhead.

Flexibility

Wintersmith allows for different plugins for content and templates, and you can even write your own plugin.

Getting Started with Wintersmith

There is a quick getting started guide on github.com/jnordberg/wintersmith.

To install Wintersmith globally, run NPM with -g and sudo:

$ sudo npm install wintersmith -g

Then, to use the default blog template, run:

$ wintersmith new <path>

or for empty site:

$ wintersmith new <path> --template basic

or use a shortcut:

$ wintersmith new <path> -T basic

Similar to Ruby on Rails scaffolding, Wintersmith will generate a basic skeleton with contents and templates folders. To preview the website, run these commands:

$ cd <path>
$ wintersmith preview
$ open http://localhost:8080

Most of the changes will be updated automatically in the preview mode, except for the config.json file.

Images, CSS, JavaScript and other files go into the contents folder.
The Wintersmith generator has the following logic:

  1. looks for *.md files in the contents folder,
  2. reads metadata such as the template name,
  3. processes the *.jade templates per the metadata in the *.md files.

When you’re done with your static site, just run:

$ wintersmith build

Other Static Site Generators

Here are some of the other Node.js static site generators:

A more detailed overview of these static site generators is available in the post Node Based Static Site Generators.

For other languages and frameworks like Rails and PHP take a look at Static Site Generators by GitHub Watcher Count and the “mother of all site generator lists”.

Querying 20M-Record MongoDB Collection

Storify saves a lot of metadata about social elements: tweets, Facebook status updates, blog posts, news articles, etc. MongoDB is great for storing such unstructured data, but last week I had to fix some inconsistency in the 20-million-record Elements collection.


The script was simple: find elements, check that there are no dependencies, and delete the orphan elements; nevertheless, it was timing out or just becoming unresponsive. After a few hours of running different modifications, I came up with a working solution.
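The gist of that kind of cleanup, as a hedged sketch for the mongo shell (the collection and field names here are simplified stand-ins, not the real Storify schema):

// fix.js -- run with: mongo fix.js --shell
// walk over candidate elements, check for references, remove the orphans
db.elements.find({ type: 'link' }).limit(1000).forEach(function (element) {
  var dependencies = db.stories.count({ 'elements.id': element._id });
  if (dependencies === 0) {
    db.elements.remove({ _id: element._id }); // orphan -- safe to delete
  }
});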

Here are some of the suggestions when dealing with big collections on Node.js + MongoDB stack:

Befriend Shell

The interactive shell, or mongo, is a good place to start. To launch it, just type mongo in your terminal window:

$ mongo

Assuming you have the correct paths set up during your MongoDB installation, the command will start the shell and present an angle bracket prompt:

>

Use JS files

To execute a JavaScript file in the Mongo shell, run:

$ mongo fix.js --shell

Queries look the same:

db.elements.find({...}).limit(10).forEach(printjson);

To output results use:

print();

or

printjson();

To connect to a database:

db = connect("<host>:<port>/<dbname>")
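
Putting these pieces together, a hypothetical fix.js could connect, run a small query, and print the results (the database name and query here are made up for illustration):

// fix.js: run with `mongo fix.js --shell`
db = connect("localhost:27017/mydb"); // hypothetical host and database
db.elements.find({type: 'link'}).limit(10).forEach(function (element) {
  printjson(element); // print each matching document as JSON
});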

Break Down

Separate your query into a few scripts with smaller queries. You can output each script’s results to a file (as JSON or CSV) and then inspect the output to see whether the script is actually doing what it is supposed to do.

To execute a JavaScript file (fix.js) and output the results into another file (fix.txt) instead of the screen, use:

$ mongo fix.js > fix.txt --shell

or

$ mongo --quiet fix.js > fix.txt --shell
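
For instance, a smaller, read-only script (call it fix_part1.js; the name and query are hypothetical) could print just the _id values of candidate documents, so you can review fix.txt before deleting anything:

// fix_part1.js: inspect candidates only, do not delete anything yet
db.elements.find({type: 'link', 'source.href': {$exists: true}})
  .limit(100)
  .forEach(function (e) {
    print(e._id.str); // one id per line, easy to count or diff later
  });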

Check count()

Simply run count() to see the number of elements in the collection:

 db.collection.count();

or on a cursor:

 db.collection.find({…}).count();

Use limit()

You can apply the limit() function to your cursor, without modifying anything else in the script, to test the output without waiting too long for the whole result set.

For example:

 db.collection.find({…}).limit(10).forEach(function () {…});

or

 db.collection.find({…}).limit(1).forEach(function () {…});

is better than using:

 db.collection.findOne({…})

because findOne() returns a single document, while find() with limit() still returns a cursor.

Hit Index

Adding hint() lets you manually force the query to use a particular index:

 db.elements.find({…}).hint({active:1, status:1, slug:1});

Make sure the indexes actually exist by creating them with ensureIndex():

 db.collection.ensureIndex({…})
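
To verify which indexes the collection already has before hinting at one, you can list them:

 db.elements.getIndexes();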

Narrow Down

Use additional criteria such as $ne, $where, $in, e.g.:

db.elements.find({$and: [
    {type: 'link'},
    {'source.href': {$exists: true}},
    {'date.created': {$gt: new Date("November 30 2012")}},
    {'date.created': {$lt: new Date("December 2 2012")}},
    {$where: function () {
      if (this.meta && this.data && this.data.link) {
        return this.meta.title != this.data.link.title;
      }
      return false;
    }}
  ]}).forEach(function (e) {
    print(e._id.str);
  });

My First Week At Storify

Last week I joined Storify — a destination for curated social media news. Storify helps you sort through the noise to find the voices online that matter. To find more about Storify take a look at the guided tour.

Storify co-founder Burt and I met a couple months ago for the first time and I’m glad that we did. There were three main reasons for me to come on board: great team, awesome product and company vision, and cool tech stack that I’m passionate about: Node.js+Express+MongoDB.

Storify on Nodejs.org

The first week at Storify exceeded my expectations! So far there have been four team lunches, one birthday party, and two (!) break-ins. In addition, I worked on the front page on my second day and had a chance to SSH into production servers.

A few words about the office: besides free snacks and espresso and being close to everything, it hosts two other startups, Buffer and HomeLight. The funny thing is that I discovered and fell in love with Buffer just a few weeks ago, and now I’ve met Leo and sit next to their brilliant team!

By the way, Storify is hiring bright minds: Operations Engineer and Front-End Engineer. If you want to work on interesting things, check out the full job descriptions.

Pilot Rapid Prototyping with JavaScript and NodeJS Class

Traditional Computer Science education sucks big time when it comes to modern agile technologies like Ruby on Rails, Django, NodeJS, and NoSQL databases. Last time I checked, the most that was offered were classes in Web Design I, Web Design II and Photoshop Basics. WTF?! Don’t get me wrong. I have a Master’s degree in Information Systems Technology and value fundamentals, but I was never taught anything up-to-date. There was some ASP, some C++, some SQL, but most of my learning I had to do on my own. Sure, there is tons of information online and in books, but not everybody has the time, dedication, focus and self-discipline to master a new technical skill this way. Reading a book or watching a screencast is just not enough. The best learning comes from 25% books, 25% peer-to-peer communication and discussion, 25% student-to-teacher relationship; the last 25% is the time and practice on your own.

I saw a huge need for effective technical training and decided to validate my idea. I already had plenty of teaching experience: in college I wrote my first textbook, which was published and used in the curriculum for my classmates a year later, and I had also taught yoga classes. I needed a pilot class, so I approached the startup accelerator and fund StartupMonthly and offered to develop and teach the “Rapid Prototyping with JavaScript and NodeJS” training.

I chose JavaScript and NodeJS because students can use the same language for both front-end and back-end development. Their brains don’t have to switch contexts, which saves time and speeds up the learning process. NodeJS is becoming more and more popular due to its real-time support, and I’m very passionate about this technology. The training runs over a long weekend, starting on Friday night with an optional Q&A session on setting up your environment. Then we have two full days on Saturday and Sunday, making the course 16 hours total. This way, people who have full-time jobs don’t have to take time off to attend. The class is very hands-on and, as much as possible, in line with the principles of Flipped Teaching.

Rapid Prototyping with JavaScript and NodeJS - Day 1

The goal was not to make a profit, so we priced the training very aggressively, at a half to a third of our competitors’ market price, in order to attract students. The results were amazing! The goal was to sell at least 10 seats, and we had 15 people in our first class! Big thanks to Yuri Rabinovich, the killer StartupMonthly team, and its vast network of people interested in technology :)

Rapid Prototyping with JavaScript and NodeJS - Day 2

Then the hard work began. In the true spirit of lean startup methodology (hey, this is what we teach, right?), the manual had only the bare minimum of information and was tailored toward intermediate web and JavaScript developers. The majority were doing well, but I couldn’t say that for everyone. This was good feedback for me, and it helped me improve the manual by including many simple steps and additional terminal commands for deployment and Git.

“Optimize, but not over optimize”

Overall, students were tired but happy with the number of new technologies they’d tried. It was sort of a Chinese buffet of programming: you don’t have to try everything, you just pick what you want and indulge in it :) Here is the list of topics to give you an idea:

  • Agile, Continuous Deployment, TDD, Pair Programming
  • Basic front-end technologies: JavaScript, HTML, CSS
  • NodeJS and its advantages. Event driven programming.
  • MongoDB, document-store and key-value concepts.
  • JSON, structure and examples.
  • Cloud computing. Cloud platforms: Windows Azure, Heroku.
  • Structure of HTTP Request and Response: headers, body, methods
  • RESTful API, examples and advantages.
  • Overview of HTML: structure, tags and syntax. Inclusion of CSS, JavaScript files/tags.
  • jQuery: AJAX, cross-domain calls and JSONP
  • Twitter Bootstrap: grid layout, form components, icons
  • LESS: mixins, variables and compilation.
  • BackboneJS: structure, events, view, sub-views, models, collections and event listeners and event binding.
  • Parse.com: plain REST API calls with jQuery ajax function and JavaScript SDK with Backbone compatible library.
  • Generating SSH keys; configuring Git, GitHub, Heroku and Windows Azure for deployment.
  • Installation and basic configuration of NodeJS and MongoDB in local environment.
  • Deployment of NodeJS and MongoDB and static/front-end applications to PaaS cloud services like Windows Azure and Heroku with Git.
  • Building sample applications with NodeJS, jQuery, BackboneJS, Twitter Bootstrap, MongoDB, Parse.com and other tools/technologies. Deploying it to cloud services.
  • Building your own idea/prototype and presenting it. Deploying it to cloud services.
  • Practicing Paired Programming and Test-Driven Development techniques.

Next Billion-Dollar Idea

By the end of the weekend, we had three teams with two to three people each. The teams built, or started to build, applications based on their own ideas. One of them was a remake of Reddit with better UX/UI, and another was a service for angry ex-girlfriends to post (mostly negative, I suspect) feedback on their ex-boyfriends :)

Here are some testimonials from the students:

“Thanks Yuri and all of you folks. It was a great session – very educative, and it certainly helped me brush up on my Javascript skills. Look forward to seeing/working with you in the future” – Sam Sur.

“Thanks for putting this workshop together this weekend… what we did with Bootstrap + Parse was really quick & awesome” – Mariya Yao.

“Thanks a lot to all and special thanks to Azat and Yuri.
I enjoyed it a lot and felt motivated to work hard to know these technologies” – Shelly Arora.

Q&A Session

Next weekend, August 10–12, 2012, I’m teaching the second class of “Rapid Prototyping with JavaScript and NodeJS”. I’m excited to share my experience and passion with another 10–20 smart people and make a small dent in technical education!

“Advanced Prototyping with JavaScript and NodeJS” and “Mobile Prototyping with JavaScript” trainings are coming on the weekend of August 25–26, 2012. We have other cities, like Los Angeles and New York, in the pipeline and (knock on wood) the future of the “Rapid Prototyping” series looks very promising.