Triage vs. Planning

A discussion came up at work about the distinction between triage and planning meetings. My take is that triage is reactive whereas planning is active.

Let me illustrate this with examples. Imagine a customer-facing app like a WordPress CMS. Users use the CMS, encounter bugs, and curse. They sometimes report the bugs. An engineering team or a product manager then triages the incoming bugs and issues to sort out what needs an urgent fix and what can be deferred. Bugs tend to be urgent but not always important (at least not important for the majority of users).

On the other hand, there is important work. The CMS has a roadmap to add a paid feature that should increase company revenue and make the next year profitable. The paid feature needs to be implemented; it’s the top priority. Its implementation must be planned actively, separately from and before any bugs. If it is not planned, bugs can take up all the time.

Thus, my suggestion is to plan, and to plan the top priority first. Then plan the triaged work on bugs and tech debt. Focus on the important first, not the urgent.

Breaking Into IT and Tech as a Beginner

I got an email from a person frustrated that he can’t get an entry-level job in IT/tech. He knows PHP, HTML, CSS and MySQL, but he is tired of all the companies rejecting him and requiring a “perfect” expert (as he put it). It’s true that there are not that many entry-level jobs in tech. It’s hard to break into tech. Most companies only interview senior engineers with at least five (5) years of industry experience.

Continue reading “Breaking Into IT and Tech as a Beginner”

7 Tech Jobs Which Don’t Require Coding

Technology is the fastest-growing sector in the job market. Software, cloud and automation are replacing the traditional jobs of factory workers, secretaries and service workers. Software and technology companies are the most highly valued by the stock market and investors. The founders of these companies are among the richest people in the world. Startup founders and nerds are the new role models for kids.

But what if you are not a coding prodigy like Mark Zuckerberg or Bill Gates, who started coding in their teens? What if you don’t really enjoy coding that much, or maybe you are more of a people person or a liberal arts type? Are you barred from the tech industry? Most people don’t know that there are plenty of jobs in the tech industry which do not require coding.

Of course, you cannot be a clueless pumpkin and know nothing about tech. You still have to be technically literate and know what a database or an API is, but you’ll mainly be leveraging your existing skills from another industry, not starting from scratch learning to code. (Learning to code when you are in your 50s is still possible. I saw it happen at Hack Reactor where I taught. But let’s admit it: on average, the wits become duller with age, not sharper.)

Here are seven (7) such jobs which do not require coding or deep technical expertise but can be interesting, fulfilling, and well-paid.

  1. Program Manager
  2. Product Manager/Owner
  3. Scrum Master
  4. Designer
  5. User Researcher
  6. Recruiter
  7. Tech Writer

Let me give you some brief insight into each of them.

Continue reading “7 Tech Jobs Which Don’t Require Coding”

DocuSign for BlackBerry 10 in a Few Hours

Last week DocuSign engineering had a second internal hackathon (a nice collage resulting from the first one) and I built DocuSign for BlackBerry 10 in just a few hours (well, sort of).

Some of you might be wondering what BlackBerry is. It’s a mobile operating system that is actually in the process of surpassing Windows and Kindle, based on market share, according to this reputable study. Oh, and by the way, BB also has many loyal aficionados thanks to its years of being the sole mobile provider to enterprises.

Luckily, DocuSign already had an app for Android and, based on my research, which consisted of looking at this page for a minute, and talking with our mobile dev manager, I figured out that I could just port the DocuSign for Android app to the BlackBerry OS. Porting is just a fancy word for re-writing something in a new language, or for a new platform, without changing functionality very much. It typically involves re-compiling, changing APIs, updating code and re-packaging.

It’s worth noting that RIM (the company behind BlackBerry) provides many other options for building BlackBerry 10 apps including:

  • Adobe AIR
  • Native
  • HTML5

This is probably done to boost the number of available offerings within their marketplace (BlackBerry World) and jump-start their development ecosystem, which is lagging far behind those of iOS and Android.

I have worked with Java and J2EE before. In addition, I was an Android guy for a long time prior to getting my first Apple product (a MacBook Air): I remember that my very first smartphone (running buggy Android 1.6) had constant “Force Close” errors. However, up until this hackathon, all of my forays into mobile dev land consisted only of using HTML5 with the Jo and Sencha Touch frameworks. Awesome challenge! Or so I thought.

The goal was to use Android code, re-package it, and install it in the simulator (with or without changes). I decided to go with Android Studio over Eclipse, and downloaded the necessary tools from the download page.

The complete tutorials are available at Runtime for Android apps.

After downloading, literally, a few gigabytes of SDKs and packages, I was stuck with our code-base in Android Studio due to some Java exceptions regarding Gradle, so I resorted to using command-line tools.

These are the commands from the BlackBerry toolchain that I ended up using:

  • apk2barVerifier: to verify apk (Android) files for compatibility with the bar (BlackBerry) format
  • apk2bar: to re-package apk to bar
  • blackberry-deploy: to upload and install the app on a BlackBerry

It’s worth noting that for distributing BlackBerry 10 apps to BlackBerry World, apps must be signed with a special token (tutorial). Obviously, I skipped this step for the hackathon.

Lo and behold, everything worked on the BlackBerry with zero code changes (except where DocuSign tries to charge via Google Play). The end results were quite pleasing. Thank you, BlackBerry, for making it easier for us developers. I guess I can now say that I develop native apps (yeah, right). :-)

DocuSign for BlackBerry 10: Homescreen
DocuSign for BlackBerry 10: Sign a Document
DocuSign for BlackBerry 10: Main Menu
DocuSign for BlackBerry 10: Document View

In addition, I also found this neat but scammy-looking service called APK Downloader that allows us to install Android apps from the Google Play market directly onto the latest BlackBerry 10 systems. Simply enter the name of the app as a Java package, e.g., com.docusign.ink, (link). The real hack! I could have used it from the beginning and saved myself a few hours. Therefore, it’s vital to conduct proper up-front research prior to embarking on a project! ☺

My First Week at DocuSign

For those of you unfamiliar with DocuSign, it’s an industry leader in sending, signing and managing documents in the cloud. Unlike its competitors (EchoSign, HelloSign and RightSignature), DocuSign is more enterprise-oriented, the oldest (founded in 2003), and the most advanced in terms of security and number of features. Continue reading “My First Week at DocuSign”

First Six Months with Storify

Time goes fast! It’s been six months since I joined Storify in December 2012. Many cool things have happened, including a bunch of new releases, a company retreat and a hackweek.

Continue reading “First Six Months with Storify”

Node.js OAuth1.0 and OAuth2.0: Twitter API v1.1 Examples

Recently we had to work on a modification to accommodate Twitter API v1.1. The main difference between Twitter API v1.1 and the soon-to-be-deprecated Twitter API v1.0 is that most of the REST API endpoints now require user or application context. In other words, each call needs to be performed via OAuth 1.0A or OAuth 2.0 authentication.

Node.js OAuth

At Storify we run everything on Node.js, so it was natural that we used the oauth module by Ciaran Jessup: NPM and GitHub. It’s mature and supports all the needed functionality, but it lacks any kind of examples and/or interface documentation.

Here are the examples of calling Twitter API v1.1, and a list of methods. I hope that nobody will have to dig through the oauth module source code anymore!

OAuth 1.0

Let’s start with good old OAuth 1.0A. You’ll need four values to make this type of request to Twitter API v1.1 (or any other service):

  1. Your Twitter application key, a.k.a., consumer key
  2. Your Twitter secret key
  3. User token for your app
  4. User secret for your app

All four of them can be obtained for your own apps at dev.twitter.com. In case the user is not yourself, you’ll need to perform 3-legged OAuth, or Sign in with Twitter, or something else.

Next we create an oauth object with parameters, and call the get() function to fetch a secured resource. Behind the scenes, the get() function constructs unique values for the request’s Authorization header. The method hashes the URL, timestamp, application credentials and other information into a signature, so the same header won’t work for another URL or after a specific time window.

var OAuth = require('oauth');
var oauth = new OAuth.OAuth(
  'https://api.twitter.com/oauth/request_token',
  'https://api.twitter.com/oauth/access_token',
  'your Twitter application consumer key',
  'your Twitter application secret',
  '1.0A',
  null,
  'HMAC-SHA1'
);
oauth.get(
  'https://api.twitter.com/1.1/trends/place.json?id=23424977',
  'your user token for this app',
  // you can get it at dev.twitter.com for your own apps
  'your user secret for this app',
  // you can get it at dev.twitter.com for your own apps
  function (e, data, res) {
    if (e) console.error(e);
    console.log(require('util').inspect(data));
  });

OAuth Echo

OAuth Echo is similar to OAuth 1.0. If you’re a Delegator (a service to which requests to the Service Provider are delegated by the Consumer), all you need to do is pass the value of the x-verify-credentials-authorization header to the Service Provider in the Authorization header. Twitter has a good graphic on OAuth Echo.
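
To make the Delegator side concrete, here is a minimal sketch. It assumes Express for routing and superagent as the HTTP client; the route name and response handling are hypothetical, not part of the oauth module:

var express = require('express');
var request = require('superagent');

var app = express();

// Hypothetical delegated endpoint: the Consumer calls us with the two
// OAuth Echo headers, and we replay the signed Authorization header
// against the Service Provider URL it specified.
app.post('/import/tweet', function (req, res) {
  var providerUrl = req.headers['x-auth-service-provider'];
  var echoAuth = req.headers['x-verify-credentials-authorization'];

  request
    .get(providerUrl)
    .set('Authorization', echoAuth) // pass the signed header through untouched
    .end(function (providerRes) {
      res.json(providerRes.body); // e.g., the verified Twitter account
    });
});

app.listen(3000);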

There is an OAuthEcho object which inherits most of its methods from the normal OAuth class. In case you want to write Consumer code (or for functional tests; in our case Storify is the Delegator) and you need the x-verify-credentials-authorization/Authorization header values, there is an authHeader method. If we look at it, we can easily reconstruct the headers with the internal methods of the oauth module such as _prepareParameters() and _buildAuthorizationHeaders(). Here is a function that will give us the required values based on a URL (remember that the URL is part of the Authorization header):

var OAuth = require('oauth');

// helper to construct echo/oauth headers from a URL
function getEchoAuth(url) {
  var oauth = new OAuth.OAuth(
    'https://api.twitter.com/oauth/request_token',
    'https://api.twitter.com/oauth/access_token',
    'AAAAAAAAAAAAAAAAAAAA',                    // test app token
    'BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB',  // test app secret
    '1.0A',
    null,
    'HMAC-SHA1');
  var orderedParams = oauth._prepareParameters(
    '1111111111-AAAAAA',       // test user token
    'AAAAAAAAAAAAAAAAAAAAAAA', // test user secret
    'GET',
    url);
  return oauth._buildAuthorizationHeaders(orderedParams);
}

From your consumer code you can make a request with superagent or another http client library (e.g., the node.js core http module’s http.request):

var request = require('superagent');

request.post('your delegator api url')
  .send({...}) 	
  //your json data
  .set(
    'x-auth-service-provider',
    'https://api.twitter.com/1.1/account/verify_credentials.json')
  .set(
    'x-verify-credentials-authorization',
    getEchoAuth("https://api.twitter.com/1.1/account/verify_credentials.json"))
  .end(function(res){console.log(res.body)});

OAuth2

OAuth 2.0 is a breeze to use compared to the other authentication methods. Some argue that it’s not as secure, so make sure that you use SSL and HTTPS for all requests.

var OAuth2 = OAuth.OAuth2;
var twitterConsumerKey = 'your key';
var twitterConsumerSecret = 'your secret';

// completion handler for the protected-resource request
var callback = function (e, data) {
  if (e) return console.error(e);
  console.log(data);
};

var oauth2 = new OAuth2(
  twitterConsumerKey,
  twitterConsumerSecret,
  'https://api.twitter.com/',
  null,
  'oauth2/token',
  null);

oauth2.getOAuthAccessToken(
  '',
  {'grant_type': 'client_credentials'},
  function (e, access_token, refresh_token, results) {
    console.log('bearer: ', access_token);
    oauth2.get('protected url', access_token, function (e, data, res) {
      if (e) return callback(e, null);
      if (res.statusCode !== 200)
        return callback(new Error(
          'OAuth2 request failed: ' + res.statusCode), null);
      try {
        data = JSON.parse(data);
      } catch (e) {
        return callback(e, null);
      }
      return callback(null, data);
    });
  });

Please note the JSON.parse() call: the oauth module returns a string, not a JavaScript object.

Consumers of OAuth 2.0 don’t need to fetch the bearer/access token for every request. It’s okay to do it once and save the value in the database. Therefore, we can make requests to protected resources (i.e., Twitter API v1.1) with only one secret password. For more information check out Twitter application-only auth.
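
For example, here is a rough sketch of fetching the token once and reusing it. The cache variable and helper name are just for illustration (a database write would work the same way), and it assumes the oauth2 object from the snippet above:

var cachedBearerToken = null; // could just as well live in your database

function withBearerToken(callback) {
  if (cachedBearerToken) return callback(null, cachedBearerToken);
  oauth2.getOAuthAccessToken(
    '',
    {'grant_type': 'client_credentials'},
    function (e, access_token) {
      if (e) return callback(e);
      cachedBearerToken = access_token; // fetched once, reused afterwards
      callback(null, cachedBearerToken);
    });
}

// every request reuses the same bearer token
withBearerToken(function (e, token) {
  if (e) return console.error(e);
  oauth2.get('protected url', token, function (e, data) {
    if (e) return console.error(e);
    console.log(JSON.parse(data));
  });
});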

Node.js oauth API

Node.js oauth OAuth

oauth.OAuth()

Parameters:

  • requestUrl
  • accessUrl
  • consumerKey
  • consumerSecret
  • version
  • authorize_callback
  • signatureMethod
  • nonceSize
  • customHeaders

Node.js oauth OAuthEcho

oauth.OAuthEcho()

Parameters:

  • realm
  • verify_credentials
  • consumerKey
  • consumerSecret
  • version
  • signatureMethod
  • nonceSize
  • customHeaders

OAuthEcho shares the same methods as OAuth.
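
For instance, based on the parameter list above, constructing and using an OAuthEcho instance might look like this sketch (placeholder credentials; realm and verify_credentials point at the Service Provider, here Twitter):

var OAuthEcho = require('oauth').OAuthEcho;

var oauthEcho = new OAuthEcho(
  'https://api.twitter.com/',                                    // realm
  'https://api.twitter.com/1.1/account/verify_credentials.json', // verify_credentials
  'your Twitter application consumer key',
  'your Twitter application secret',
  '1.0A',
  'HMAC-SHA1');

// same request methods as OAuth:
oauthEcho.get(
  'https://api.twitter.com/1.1/account/verify_credentials.json',
  'your user token for this app',
  'your user secret for this app',
  function (e, data, res) {
    if (e) return console.error(e);
    console.log(JSON.parse(data));
  });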

Node.js oauth Methods

Secure HTTP request methods for OAuth and OAuthEcho classes:

OAuth.get()

Parameters:

  • url
  • oauth_token
  • oauth_token_secret
  • callback

OAuth.delete()

Parameters:

  • url
  • oauth_token
  • oauth_token_secret
  • callback

OAuth.put()

Parameters:

  • url
  • oauth_token
  • oauth_token_secret
  • post_body
  • post_content_type
  • callback

OAuth.post()

Parameters:

  • url
  • oauth_token
  • oauth_token_secret
  • post_body
  • post_content_type
  • callback
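
To illustrate, posting a tweet with these parameters could look like the sketch below. It reuses the oauth object from the OAuth 1.0 example; passing an object as post_body should get form-encoded by the module, and null for post_content_type lets it pick the default:

// Sketch: update the user's status via Twitter API v1.1
oauth.post(
  'https://api.twitter.com/1.1/statuses/update.json',
  'your user token for this app',
  'your user secret for this app',
  {status: 'Testing the node.js oauth module'}, // post_body
  null,                                         // post_content_type
  function (e, data, res) {
    if (e) return console.error(e);
    console.log(JSON.parse(data));
  });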

https://github.com/ciaranj/node-oauth/blob/master/lib/oauth.js

Node.js oauth OAuth2

OAuth2 Class

OAuth2()

Parameters:

  • clientId
  • clientSecret
  • baseSite
  • authorizePath
  • accessTokenPath
  • customHeaders

OAuth2.getOAuthAccessToken()

Parameters:

  • code
  • params
  • callback

OAuth2.get()

Parameters:

  • url
  • access_token
  • callback

https://github.com/ciaranj/node-oauth/blob/master/lib/oauth2.js

The authors of node.js oauth did a great job, but currently there are 32 open pull requests (mine is one of them) and it makes me sad. Please let them know that we care about improving the Node.js ecosystem of modules and the developer community!

UPDATE: Pull request was successfully merged!

Useful Twitter API v1.1 Resources

Just because they are vast and not always easy to find.

Tools

MongoDB migration with Node and Monk

Recently one of our top users complained that their Storify account was inaccessible. We checked the production database, and it appeared that the account might have been compromised and maliciously deleted by somebody using the user’s account credentials. Thanks to the great MongoHQ service, we had a backup database in less than 15 minutes.
There were two options to proceed with the migration:

  1. Mongo shell script
  2. Node.js program

Because Storify user account deletion involves deletion of all related objects — identities, relationships (followers, subscriptions), likes, stories — we’ve decided to proceed with the latter option. It worked perfectly, and here is a simplified version which you can use as a boilerplate for MongoDB migration (also at gist.github.com/4516139).

Restoring MongoDB Records

Let’s load all the modules we need: Monk, Progress, Async, and MongoDB:

var async = require('async');
var ProgressBar = require('progress');
var monk = require('monk');
var ObjectId=require('mongodb').ObjectID;

By the way, Monk, made by LearnBoost, is a tiny layer that provides simple yet substantial usability improvements for MongoDB usage within Node.js.

Monk takes a connection string in the following format:

username:password@dbhost:port/database
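
For a remote, password-protected database that might look something like this (the host and credentials here are made up):

var remote = monk('storify_user:s3cret@dbhost.example.com:27017/storify_backup');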

So we can create the following objects:

var dest = monk('localhost:27017/storify_localhost');
var backup = monk('localhost:27017/storify_backup');

We need to know the object ID of the user we want to restore:

var userId = ObjectId('YOUR-OBJECT-ID');

This is a handy restore function which we can reuse to restore objects from related collections by specifying a query (for more on MongoDB queries, see the post Querying 20M-Record MongoDB Collection). To call it, just pass the name of the collection as a string, e.g., "stories", and a query which associates objects from this collection with your main object, e.g., {userId: user.id}. The progress bar is needed to show us nice visuals in the terminal.

var restore = function(collection, query, callback){
  console.info('restoring from ' + collection);
  var q = query;
  backup.get(collection).count(q, function(e, n) {
    console.log('found '+n+' '+collection);
    if (e) console.error(e);
    var bar = new ProgressBar('[:bar] :current/:total :percent :etas', { total: n-1, width: 40 })
    var tick = function(e) {
      if (e) {
        console.error(e);
        bar.tick();
      }
      else {
        bar.tick();
      }
      if (bar.complete) {
        console.log();
        console.log('restoring '+collection+' is completed');
        callback();                
      }
    };
    if (n>0){
      console.log('adding '+ n+ ' '+collection);
      backup.get(collection).find(q, { stream: true }).each(function(element) {
        dest.get(collection).insert(element, tick);
      });        
    } else {
      callback();
    }
  });
}

Now we can use async to call the restore function mentioned above:

async.series({
  restoreUser: function(callback){   // import user element
    backup.get('users').find({_id:userId}, { stream: true, limit: 1 }).each(function(user) {
      dest.get('users').insert(user, function(e){
        if (e) {
          console.log(e);
        }
        else {
          console.log('restored user: '+ user.username);
        }
        callback();
      });
    });
  },

  restoreIdentity: function(callback){  
    restore('identities',{
      userid:userId
    }, callback);
  },

  restoreStories: function(callback){
    restore('stories', {authorid:userId}, callback);
  }

  }, function(e) {
  console.log();
  console.log('restoring is completed!');
  process.exit(1);
});

The full code is available at gist.github.com/4516139 and here:

var async = require('async');
var ProgressBar = require('progress');
var monk = require('monk');
var ms = require('ms');
var ObjectId=require('mongodb').ObjectID;

var dest = monk('localhost:27017/storify_localhost');
var backup = monk('localhost:27017/storify_backup');

var userId = ObjectId('YOUR-OBJECT-ID'); // monk should have auto casting but we need it for queries

var restore = function(collection, query, callback){
  console.info('restoring from ' + collection);
  var q = query;
  backup.get(collection).count(q, function(e, n) {
    console.log('found '+n+' '+collection);
    if (e) console.error(e);
    var bar = new ProgressBar('[:bar] :current/:total :percent :etas', { total: n-1, width: 40 })
    var tick = function(e) {
      if (e) {
        console.error(e);
        bar.tick();
      }
      else {
        bar.tick();
      }
      if (bar.complete) {
        console.log();
        console.log('restoring '+collection+' is completed');
        callback();                
      }
    };
    if (n>0){
      console.log('adding '+ n+ ' '+collection);
      backup.get(collection).find(q, { stream: true }).each(function(element) {
        dest.get(collection).insert(element, tick);
      });        
    } else {
      callback();
    }
  });
}

async.series({
  restoreUser: function(callback){   // import user element
    backup.get('users').find({_id:userId}, { stream: true, limit: 1 }).each(function(user) {
      dest.get('users').insert(user, function(e){
        if (e) {
          console.log(e);
        }
        else {
          console.log('restored user: '+ user.username);
        }
        callback();
      });
    });
  },

  restoreIdentity: function(callback){  
    restore('identities',{
      userid:userId
    }, callback);
  },

  restoreStories: function(callback){
    restore('stories', {authorid:userId}, callback);
  }

  }, function(e) {
  console.log();
  console.log('restoring is completed!');
  process.exit(1);
});
           

To launch it, run npm install/update and change hard-coded database values.

Querying 20M-Record MongoDB Collection

Storify saves a lot of metadata about social elements: tweets, Facebook status updates, blog posts, news articles, etc. MongoDB is great for storing such unstructured data, but last week I had to fix some inconsistency in the 20-million-record Elements collection.

The script was simple: find elements, check that they have no dependencies, and delete the orphan elements. Nevertheless, it was timing out or just becoming unresponsive. After a few hours of running different modifications, I came up with a working solution.

Here are some suggestions for dealing with big collections on a Node.js + MongoDB stack:

Befriend Shell

The interactive shell, or mongo, is a good place to start. To launch it, just type mongo in your terminal window:

$ mongo

Assuming you set up the correct paths during your MongoDB installation, the command will start the shell and present an angle bracket prompt:

>

Use JS files

To execute a JavaScript file in the Mongo shell, run:

$ mongo fix.js --shell

Queries look the same:

db.elements.find({...}).limit(10).forEach(printjson);

To output results use:

print();

or

printjson();

To connect to a database:

db = connect("<host>:<port>/<dbname>")

Break Down

Separate your query into a few scripts with smaller queries. You can output each script’s results to a file (as JSON or CSV) and then look at the output to see if the script is doing what it is actually supposed to do.
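
For example, a first-pass fix.js might just print a small sample of the documents the bigger script would touch, so you can eyeball them before deleting anything (the query criteria here are illustrative, not Storify’s actual schema):

// fix.js — dry run: print a sample of candidate orphan elements
db = connect('localhost:27017/storify');

db.elements
  .find({type: 'link', storyId: {$exists: false}}) // hypothetical "orphan" criteria
  .limit(10)
  .forEach(printjson);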

To execute JavaScript file (fix.js) and output results into another file (fix.txt) instead of the screen, use:

$ mongo fix.js > fix.txt --shell

or

$ mongo --quiet fix.js > fix.txt --shell

Check count()

Simply run count() to see the number of elements in the collection:

 db.collection.count();

or a cursor:

 db.collection.find({…}).count();

Use limit()

You can apply limit() function to your cursor without modifying anything else in a script to test the output without spending too much time waiting for the whole result.

For example:

 db.find({…}).limit(10).forEach(function() {…});

or

 db.find({…}).limit(1).forEach(function() {…});

is better than using:

 db.findOne({…})

because findOne() returns a single document, while find() with limit() still returns a cursor.

Hit Index

hint() allows you to manually use a particular index:

 db.elements.find({…}).hint({active:1, status:1, slug:1});

Make sure the indexes actually exist by running ensureIndex():

 db.collection.ensureIndex({…})

Narrow Down

Use additional criteria such as $ne, $where, $in, e.g.:

db.elements.find({$and: [
    {type: 'link'},
    {'source.href': {$exists: true}},
    {'date.created': {$gt: new Date("November 30 2012")}},
    {'date.created': {$lt: new Date("December 2 2012")}},
    {$where: function () {
      if (this.meta && this.data && this.data.link) {
        return this.meta.title != this.data.link.title;
      } else {
        return false;
      }
    }}
  ]}).forEach(function (e) {
    print(e._id.str);
  });

My First Week At Storify

Last week I joined Storify — a destination for curated social media news. Storify helps you sort through the noise to find the voices online that matter. To find out more about Storify, take a look at the guided tour.

Storify co-founder Burt and I met a couple of months ago for the first time, and I’m glad that we did. There were three main reasons for me to come on board: a great team, an awesome product and company vision, and a cool tech stack that I’m passionate about: Node.js + Express + MongoDB.

Storify on Nodejs.org

The first week at Storify exceeded my expectations! So far there have been: 4 team lunches, one birthday party, and two (!) break-ins. In addition, I worked on the front page on my second day and had a chance to SSH to the production servers.

A few words about the office: besides free snacks and espresso and being close to everything, there are two other startups here, Buffer and HomeLight. The funny thing is that I discovered and fell in love with Buffer just a few weeks ago, and now I’ve met Leo and sit next to their brilliant team!

By the way, Storify is hiring bright minds: Operations Engineer and Front-End Engineer. If you want to work on interesting things, check out the full job descriptions.