This essay was inspired by Kyle Simpson’s series of books, You Don’t Know JavaScript. They are a good start with JavaScript fundamentals. Node is mostly JavaScript except for a few differences which I’ll highlight in this essay. The code is in the You Don’t Know Node GitHub repository under the code folder.
Why care about Node? Node is JavaScript and JavaScript is almost everywhere! What if the world could be a better place if more developers mastered Node? Better apps equal a better life!
This is a kitchen sink of subjectively the most interesting core features. The key takeaways of this essay are:
- Event loop: Brush-up on the core concept which enables non-blocking I/O
- Global and process: How to access more info
- Event emitters: Crash course in the event-based pattern
- Streams and buffers: Effective way to work with data
- Clusters: Fork processes like a pro
- Handling async errors: AsyncWrap, Domain and uncaughtException
- C++ addons: Contributing to the core and writing your own C++ addons
Event Loop
We can start with the event loop, which is at the core of Node. It allows the processing of other tasks while I/O calls are in progress. Think Nginx vs. Apache. It allows Node to be very fast and efficient, because blocking I/O is expensive!
Take a look at this basic example of a delayed println in Java:
System.out.println("Step: 1");
System.out.println("Step: 2");
Thread.sleep(1000);
System.out.println("Step: 3");
It’s comparable (but not really) to this Node code:
console.log('Step: 1')
setTimeout(function () {
  console.log('Step: 3')
}, 1000)
console.log('Step: 2')
It’s not quite the same though. You need to start thinking in an asynchronous way. The output of the Node script is 1, 2, 3, but if we had more statements after “Step: 2”, they would have been executed before the callback of setTimeout. Look at this snippet:
console.log('Step: 1')
setTimeout(function () {
  console.log('Step: 3')
  console.log('Step 5')
}, 1000)
console.log('Step: 2')
console.log('Step 4')
It produces 1, 2, 4, 3, 5. That’s because setTimeout puts its callback in a future cycle of the event loop.
Think about the event loop as an ever-spinning loop, like a for or a while loop. It stops only if there is nothing to execute, either now or in the future.
The event loop allows systems to be more effective because now you can do more things while you wait for your expensive input/output task to finish.
This is in contrast to today’s more common concurrency model where OS threads are employed. Thread-based networking is relatively inefficient and very difficult to use. Furthermore, users of Node are free from worries of dead-locking the process — there are no locks.
A quick side note: it’s still possible to write blocking code in Node.js. Consider this simple but blocking Node.js code:
console.log('Step: 1')
var start = Date.now()
for (var i = 1; i < 1000000000; i++) {
  // This will take 100-1000ms depending on your machine
}
var end = Date.now()
console.log('Step: 2')
console.log(end - start)
Of course, most of the time, we don’t have empty loops in our code. Spotting synchronous and thus blocking code might be harder when using other people’s modules. For example, the core fs (file system) module comes with two sets of methods. Each pair performs the same function but in a different way. There are blocking fs Node.js methods which have the word Sync in their names:
var fs = require('fs')
var contents = fs.readFileSync('accounts.txt','utf8')
console.log(contents)
console.log('Hello Ruby\n')
var contents = fs.readFileSync('ips.txt','utf8')
console.log(contents)
console.log('Hello Node!')
Results are very predictable, even to people new to Node/JavaScript:
data1->Hello Ruby->data2->Hello Node!
Things change when we switch to asynchronous methods. This is non-blocking Node.js code:
var fs = require('fs');
fs.readFile('accounts.txt', 'utf8', function (err, contents) {
  console.log(contents);
});
console.log('Hello Python\n');
fs.readFile('ips.txt', 'utf8', function (err, contents) {
  console.log(contents);
});
console.log('Hello Node!');
It prints the file contents last, because reading the files takes some time and the results arrive in the callbacks. The event loop gets to them when the file reading is over:
Hello Python->Hello Node->data1->data2
So the event loop and non-blocking I/O are very powerful, but you need to code asynchronously, which is not how most of us learned to code in school.
Global
When switching to Node.js from browser JavaScript or another programming language, these questions arise:
- Where to store passwords?
- How to create global variables (there’s no window in Node)?
- How to access CLI input, OS, platform, memory usage, versions, etc.?
There’s a global object. It has certain properties. Some of them are as follows:
- global.process: Process, system, and environment information (you can access CLI input, environment variables with passwords, memory usage, etc.)
- global.__filename: File name and path to the currently running script where this statement is
- global.__dirname: Absolute path to the folder of the currently running script
- global.module: Object to export code, making this file a module
- global.require(): Method to import modules, JSON files, and folders
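To see some of these in action, here’s a tiny script (a minimal sketch; the file name and example paths are made up):
// globals-demo.js (hypothetical file name)
console.log(__filename) // absolute path including the file name, e.g., /app/globals-demo.js
console.log(__dirname)  // absolute path to this file's folder, e.g., /app
var os = require('os')  // require() imports core modules, npm modules, and JSON files
console.log(os.platform()) // e.g., 'darwin' or 'linux'
module.exports = { answer: 42 } // module.exports makes this file a module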
Then, we’ve got the usual suspects, familiar from browser JavaScript:
- global.console
- global.setInterval()
- global.setTimeout()
Each of the global properties can be accessed with the capitalized name GLOBAL or without the namespace at all, e.g., process instead of global.process.
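You can quickly verify this equivalence (a two-line sketch):
console.log(process === global.process)       // true
console.log(setTimeout === global.setTimeout) // true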
Process
The process object has a lot of info, so it deserves its own section. I’ll list only some of the properties:
- process.pid: Process ID of this Node instance
- process.versions: Various versions of Node, V8 and other components
- process.arch: Architecture of the system
- process.argv: CLI arguments
- process.env: Environment variables
Some of the methods are as follows:
- process.uptime(): Get uptime
- process.memoryUsage(): Get memory usage
- process.cwd(): Get the current working directory. Not to be confused with __dirname, which doesn’t depend on the location from which the process has been started.
- process.exit(): Exit the current process. You can pass a code like 0 or 1.
- process.on(): Attach an event listener, e.g., on('uncaughtException')
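Here’s a short script that exercises a few of these (a sketch; run it with something like node process-demo.js foo bar, where the file name is hypothetical):
console.log('pid:', process.pid)
console.log('node version:', process.versions.node)
console.log('arch:', process.arch)
console.log('args:', process.argv.slice(2)) // ['foo', 'bar']
console.log('cwd:', process.cwd())
console.log('uptime:', process.uptime(), 'seconds')
process.exit(0) // 0 signals success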
Tough question: who likes and understands callbacks?
Some people love callbacks too much, so they created http://callbackhell.com. If you are not familiar with this term yet, here’s an illustration:
fs.readdir(source, function (err, files) {
  if (err) {
    console.log('Error finding files: ' + err)
  } else {
    files.forEach(function (filename, fileIndex) {
      console.log(filename)
      gm(source + filename).size(function (err, values) {
        if (err) {
          console.log('Error identifying file size: ' + err)
        } else {
          console.log(filename + ' : ' + values)
          aspect = (values.width / values.height)
          widths.forEach(function (width, widthIndex) {
            height = Math.round(width / aspect)
            console.log('resizing ' + filename + ' to ' + height + 'x' + height)
            this.resize(width, height).write(dest + 'w' + width + '_' + filename, function (err) {
              if (err) console.log('Error writing file: ' + err)
            })
          }.bind(this))
        }
      })
    })
  }
})
Callback hell is hard to read, and it’s prone to errors. How do we modularize and organize asynchronous code beyond callbacks, which don’t scale well as a project grows?
Event Emitters
To help with callback hell, or the pyramid of doom, there are Event Emitters. They allow you to implement your asynchronous code with events.
Simply put, an event emitter is something that triggers an event to which anyone can listen. In Node.js, an event can be described as a string with a corresponding callback.
Event Emitters serve these purposes:
- Event handling in Node uses the observer pattern
- An event, or subject, keeps track of all functions that are associated with it
- These associated functions, known as observers, are executed when the given event is triggered
To use Event Emitters, import the module and instantiate the object:
var events = require('events')
var emitter = new events.EventEmitter()
After that, you can attach event listeners and trigger/emit events:
emitter.on('knock', function () {
  console.log('Who\'s there?')
})
emitter.on('knock', function () {
  console.log('Go away!')
})
emitter.emit('knock')
Let’s make something more useful with EventEmitter by inheriting from it. Imagine that you are tasked with implementing a class to perform monthly, weekly and daily email jobs. The class needs to be flexible enough for developers to customize the final output. In other words, whoever consumes this class needs to be able to put in some custom logic when the job is over.
The diagram below explains how we inherit from the events module to create Job and then use the done event listener to customize the behavior of the Job class:
The class Job will retain its properties, but will get events as well. All we need to do is trigger the done event when the process is over:
// job.js
var util = require('util')
var Job = function Job() {
  var job = this
  // ...
  job.process = function () {
    // ...
    job.emit('done', { completedOn: new Date() })
  }
}
util.inherits(Job, require('events').EventEmitter)
module.exports = Job
Now, our goal is to customize the behavior of Job at the end of the task. Because it emits done, we can attach an event listener:
// weekly.js
var Job = require('./job.js')
var job = new Job()
job.on('done', function (details) {
  console.log('Job was completed at', details.completedOn)
  job.removeAllListeners()
})
job.process()
There are more features to emitters:
- emitter.listeners(eventName): List all event listeners for a given event
- emitter.once(eventName, listener): Attach an event listener which fires just one time
- emitter.removeListener(eventName, listener): Remove an event listener
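For instance, once() is handy when a handler must fire at most one time (a self-contained sketch):
var events = require('events')
var e = new events.EventEmitter()
e.once('knock', function () {
  console.log('Who\'s there?')
})
e.emit('knock') // prints: Who's there?
e.emit('knock') // prints nothing: the listener removed itself after one call
console.log(e.listeners('knock')) // [] - no listeners left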
The event pattern is used all over Node and especially in its core modules. For this reason, mastering events will give you a great bang for your time.
Streams
There are a few problems when working with large data in Node. The speed can be slow, and the buffer limit is ~1GB. Also, how do you work with a resource that is continuous and was never designed to end? To overcome these issues, use streams.
Node streams are abstractions for continuous chunking of data. In other words, there’s no need to wait for the entire resource to load. Take a look at the diagram below showing the standard buffered approach:
We have to wait for the entire buffer to load before we can start processing and/or output. Now, contrast it with the next diagram depicting streams. In it, we can process data and/or output it right away, from the first chunk:
You have four types of Streams in Node:
- Readable: You can read from them
- Writable: You can write to them
- Duplex: You can read and write
- Transform: You use them to transform data
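Readable and writable streams get full examples below. Duplex and Transform are less common, so here’s a quick Transform sketch that upper-cases whatever flows through it (assuming Node 4+ with simplified stream construction):
var Transform = require('stream').Transform
var upperCaser = new Transform({
  transform: function (chunk, encoding, callback) {
    callback(null, chunk.toString().toUpperCase()) // pass the transformed chunk downstream
  }
})
process.stdin.pipe(upperCaser).pipe(process.stdout) // echoes stdin back in upper case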
Streams are virtually everywhere in Node. The most used stream implementations are:
- HTTP requests and responses
- Standard input/output
- File reads and writes
Streams inherit from the Event Emitter object to provide the observer pattern, i.e., events. Remember them? We’ll use those events when working with streams.
Readable Stream Example
An example of a readable stream would be process.stdin, the standard input stream. It contains data going into an application. Input typically comes from the keyboard used to start the process.
To read data from stdin, use the data and end events. The data event’s callback will have chunk as its argument:
process.stdin.resume()
process.stdin.setEncoding('utf8')
process.stdin.on('data', function (chunk) {
  console.log('chunk: ', chunk)
})
process.stdin.on('end', function () {
  console.log('--- END ---')
})
So chunk is the input fed into the program. Depending on the size of the input, this event can trigger multiple times. An end event is necessary to signal the conclusion of the input stream.
Note: stdin is paused by default, and must be resumed before data can be read from it.
Readable streams also have a read() interface which works synchronously. It returns chunk or null when the stream has ended. We can use this behavior and put null !== (chunk = readable.read()) into the while condition:
var readable = getReadableStreamSomehow()
readable.on('readable', () => {
  var chunk
  while (null !== (chunk = readable.read())) {
    console.log('got %d bytes of data', chunk.length)
  }
})
Ideally, we want to write asynchronous code in Node as much as possible to avoid blocking the thread. However, data chunks are small, so we don’t worry about blocking the thread with a synchronous readable.read().
Writable Stream Example
An example of a writable stream is process.stdout. The standard output stream contains data going out of an application. Developers can write to the stream with the write operation.
process.stdout.write('A simple message\n')
Data written to standard output is visible on the command line just like when we use console.log().
Pipe
Node provides developers with an alternative to events: the pipe() method. This example reads from a file, compresses it with GZip, and writes the compressed data to a new file:
var fs = require('fs')
var zlib = require('zlib')
var r = fs.createReadStream('file.txt')
var z = zlib.createGzip()
var w = fs.createWriteStream('file.txt.gz')
r.pipe(z).pipe(w)
Readable.pipe() takes a writable stream and returns the destination, therefore we can chain pipe() methods one after another.
So you have a choice between events and pipes when you use streams.
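To illustrate that choice, here’s the same GZip example written with events instead of pipe() (a rough sketch; note that pipe() also handles backpressure for you, which this version ignores):
var fs = require('fs')
var zlib = require('zlib')
var r = fs.createReadStream('file.txt')
var z = zlib.createGzip()
var w = fs.createWriteStream('file.txt.gz')
r.on('data', function (chunk) { z.write(chunk) }) // feed raw chunks to gzip
r.on('end', function () { z.end() })
z.on('data', function (chunk) { w.write(chunk) }) // write compressed chunks out
z.on('end', function () { w.end() })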
HTTP Streams
Most of us use Node to build web apps, either traditional (think server) or RESTful APIs (think client). So what about an HTTP request? Can we stream it? The answer is a resounding yes.
Request and response are readable and writable streams respectively, and they inherit from event emitters. We can attach a data event listener. In its callback, we’ll receive chunk, and we can transform it right away without waiting for the entire response. In this example, I’m concatenating the body and parsing it in the callback of the end event:
const http = require('http')
var server = http.createServer((req, res) => {
  var body = ''
  req.setEncoding('utf8')
  req.on('data', (chunk) => {
    body += chunk
  })
  req.on('end', () => {
    var data = JSON.parse(body)
    res.write(typeof data)
    res.end()
  })
})
server.listen(1337)
Note: () => {} is ES6 syntax for fat arrow functions, while const is a new operator. If you’re not familiar with ES6/ES2015 features and syntax yet, refer to the article Top 10 ES6 Features Every Busy JavaScript Developer Must Know.
Now let’s make our server a bit closer to a real-life example by using Express.js. In this next example, I have a huge image (~8MB) and two sets of Express routes: /stream and /non-stream.
server-stream.js:
app.get('/non-stream', function (req, res) {
  fs.readFile(largeImagePath, function (error, data) {
    res.end(data)
  })
})
app.get('/stream', function (req, res) {
  var stream = fs.createReadStream(largeImagePath)
  stream.pipe(res)
})
I also have an alternative implementation with events in /stream2 and a synchronous implementation in /non-stream2. They do the same thing when it comes to streaming or non-streaming, but with a different syntax and style. The synchronous method in this case happens to be more performant, because we are only sending one request, not concurrent requests.
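For reference, the event-based /stream2 route might look roughly like this (a sketch of the idea, not the exact code from the repository):
app.get('/stream2', function (req, res) {
  var stream = fs.createReadStream(largeImagePath)
  stream.on('data', function (chunk) {
    res.write(chunk) // send each chunk as soon as it arrives
  })
  stream.on('end', function () {
    res.end()
  })
})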
To launch the example, run in your terminal:
$ node server-stream
Then open http://localhost:3000/stream and http://localhost:3000/non-stream in Chrome. The Network tab in DevTools will show you the headers. Compare X-Response-Time. In my case, it was an order of magnitude lower for /stream and /stream2: 300ms vs. 3–5s.
Your result will vary, but the idea is that with streams, users/clients will start getting data earlier. Node streams are really powerful! There are some good resources for mastering streams and becoming the go-to streams expert on your team: the Stream Handbook (https://github.com/substack/stream-handbook) and stream-adventure, which you can install with npm:
$ sudo npm install -g stream-adventure
$ stream-adventure
Buffers
What data type can we use for binary data? If you remember, browser JavaScript doesn’t have a binary data type, but Node does. It’s called Buffer. It’s a global object, so we don’t need to import it as a module.
To create a binary data type, use one of the following statements:
Buffer.alloc(size)
Buffer.from(array)
Buffer.from(buffer)
Buffer.from(str[, encoding])
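A few quick examples of these constructors (assuming a Node version with the Buffer.from/Buffer.alloc API shown above):
var b1 = Buffer.from('abc')          // <Buffer 61 62 63>, 'utf8' is the default encoding
var b2 = Buffer.from([97, 98, 99])   // <Buffer 61 62 63>
var b3 = Buffer.from(b2)             // a copy of b2
var b4 = Buffer.alloc(3)             // <Buffer 00 00 00>, zero-filled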
The official Buffer docs list all the methods and encodings. The most popular encoding is utf8.
A typical buffer will look like some gibberish, so we must convert it to a string with toString() to have a human-readable format. The following for loop will create a buffer filled with the alphabet:
let buf = Buffer.alloc(26)
for (var i = 0; i < 26; i++) {
  buf[i] = i + 97 // 97 is ASCII 'a'
}
The buffer will look like an array of numbers if we don’t convert it to a string:
console.log(buf) // <Buffer 61 62 63 64 65 66 67 68 69 6a 6b 6c 6d 6e 6f 70 71 72 73 74 75 76 77 78 79 7a>
And we can use toString to convert the buffer to a string:
buf.toString('utf8') // outputs: abcdefghijklmnopqrstuvwxyz
buf.toString('ascii') // outputs: abcdefghijklmnopqrstuvwxyz
The method takes starting and ending positions if we need just a substring:
buf.toString('ascii', 0, 5) // outputs: abcde
buf.toString('utf8', 0, 5) // outputs: abcde
buf.toString(undefined, 0, 5) // encoding defaults to 'utf8', outputs abcde
Remember fs? By default, the data value is a buffer, too:
fs.readFile('/etc/passwd', function (err, data) {
  if (err) return console.error(err)
  console.log(data)
})
data is a buffer when working with files.
Clusters
You might often hear an argument from Node skeptics that it’s single-threaded, therefore it won’t scale. There’s a core module, cluster (meaning you don’t need to install it; it’s part of the platform), which allows you to utilize all the CPU power of each machine. This will allow you to scale Node programs vertically.
The code is very easy. We need to import the module, create one master and multiple workers. Typically we create as many processes as the number of CPUs we have. It’s not a rule set in stone. You can have as many new processes as you want, but at a certain point the law of diminishing returns kicks in and you won’t get any performance improvement.
The code for the master and worker is in the same file. A worker can listen on the same port and send a message (via events) to the master. The master can listen to the events and restart workers as needed. The way to write code for the master is to check cluster.isMaster, and for the worker it is cluster.isWorker. Most of the server code will reside in the worker (the isWorker branch).
// cluster.js
var cluster = require('cluster')
var numCPUs = require('os').cpus().length
if (cluster.isMaster) {
  for (var i = 0; i < numCPUs; i++) {
    cluster.fork()
  }
} else if (cluster.isWorker) {
  // your server code
}
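For example, the worker branch could run a minimal HTTP server that reports its PID, which is roughly what the full cluster.js example does (a sketch; port 3000 assumed):
// inside the isWorker branch:
require('http').createServer(function (req, res) {
  res.end('worker ' + process.pid + ' handled this request\n')
}).listen(3000)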
In the cluster.js example, my server outputs process IDs, so you see that different workers handle different requests. It’s like a load balancer, but it’s not a true load balancer, because the load won’t be distributed evenly. You might see way more requests falling on just one process (the PID will be the same).
To see different workers serving different requests, use loadtest, which is a Node-based stress (or load) testing tool:
- Install loadtest with npm: $ npm install -g loadtest
- Run code/cluster.js with node ($ node cluster.js); leave the server running
- Run load testing with $ loadtest http://localhost:3000 -t 20 -c 10 in a new window
- Analyze the results both on the server terminal and the loadtest terminal
- Press control+c on the server terminal when the testing is over. You should see different PIDs. Write down the number of requests served.
The -t 20 -c 10 in the loadtest command means there will be 10 concurrent requests and the maximum time is 20 seconds.
cluster is part of the core and that’s pretty much its only advantage. When you are ready to deploy to production, you might want to use a more advanced process manager:
- strong-cluster-control (https://github.com/strongloop/strong-cluster-control), or $ slc run: good choice
- pm2 (https://github.com/Unitech/pm2): good choice
pm2
Let’s cover the pm2 tool, which is one of the best ways to scale your Node application vertically while also getting some production-level performance and features.
In a nutshell, pm2 has these advantages:
- Load-balancer and other features
- 0s reload downtime, i.e., forever alive
- Good test coverage
You can find pm2 docs at https://github.com/Unitech/pm2 and http://pm2.keymetrics.io.
Take a look at this Express server (server.js) as the pm2 example. There’s no isMaster() boilerplate code, which is good because you don’t need to modify your source code like we did with cluster. All we do in this server is log the pid and keep stats on it.
var express = require('express')
var port = 3000
global.stats = {}
console.log('worker (%s) is now listening to http://localhost:%s',
  process.pid, port)
var app = express()
app.get('*', function (req, res) {
  if (!global.stats[process.pid]) global.stats[process.pid] = 1
  else global.stats[process.pid] += 1
  var l = 'cluster ' + process.pid + ' responded \n'
  console.log(l, global.stats)
  res.status(200).send(l)
})
app.listen(port)
To launch this pm2 example, use pm2 start server.js. You can pass the number of instances/processes to spawn (-i 0 means as many as there are CPUs, which is 4 in my case) and the option to log to a file (-l log.txt):
$ pm2 start server.js -i 0 -l ./log.txt
Another nice thing about pm2 is that it runs in the background. To see what’s currently running, execute:
$ pm2 list
Then, utilize loadtest as we did in the core cluster example. In a new window, run these commands:
$ loadtest http://localhost:3000 -t 20 -c 10
Your results might vary, but I get more or less evenly distributed results in log.txt:
cluster 67415 responded
{ '67415': 4078 }
cluster 67430 responded
{ '67430': 4155 }
cluster 67404 responded
{ '67404': 4075 }
cluster 67403 responded
{ '67403': 4054 }
Spawn vs Fork vs Exec
Since we’ve used fork() in the cluster.js example to create new instances of Node servers, it’s worth mentioning that there are three ways to launch an external process from within a Node.js one. They are spawn(), fork() and exec(), and all three come from the core child_process module. The differences can be summed up in the following list:
- require('child_process').spawn(): Used for large data, supports streams, can be used with any command, and doesn’t create a new V8 instance
- require('child_process').fork(): Creates a new V8 instance, instantiates multiple workers, and works only with Node.js scripts (the node command)
- require('child_process').exec(): Uses a buffer, which makes it unsuitable for large data or streaming; works in an async manner to deliver all the data at once in the callback; and can be used with any command, not just node
Let’s take a look at this spawn example in which we execute node program.js, but the command can start bash, Python, Ruby or any other command or script. If you need to pass additional arguments to the command, simply put them in the array which is a parameter of spawn(). The data comes as a stream in the data event handler:
var spawn = require('child_process').spawn
var p = spawn('node', ['program.js']) // additional arguments go in the array
p.stdout.on('data', function (data) {
  console.log('stdout: ' + data)
})
From the perspective of the node program.js command, data is its standard output; i.e., the terminal output from node program.js.
The syntax for fork() is strikingly similar to the spawn() method with one exception: there is no command, because fork() assumes all processes are Node.js scripts:
var fork = require('child_process').fork
// silent: true pipes the child's stdio back to the parent instead of inheriting it
var p = fork('program.js', { silent: true })
p.stdout.on('data', function (data) {
  console.log('stdout: ' + data)
})
The last item on our agenda in this section is exec(). It’s slightly different because it doesn’t use the event pattern but a single callback. In it, you have error, standard output and standard error parameters:
var exec = require('child_process').exec
var p = exec('node program.js', function (error, stdout, stderr) {
  if (error) console.log(error.code)
})
The difference between error and stderr is that the former comes from exec() (e.g., permission denied to program.js), while the latter comes from the error output of the command you’re running (e.g., a database connection failure within program.js).
Handling Async Errors
Speaking of errors, in Node.js, as in almost all programming languages, we have try/catch which we use to handle errors. For synchronous errors, try/catch works fine:
try {
  throw new Error('Fail!')
} catch (e) {
  console.log('Custom Error: ' + e.message)
}
Modules and functions throw errors which we catch later. This works in Java and synchronous Node. However, the best Node.js practice is to write asynchronous code so we don’t block the thread.
The event loop is the mechanism which enables the system to delegate and schedule code which needs to be executed in the future, when expensive input/output tasks are finished. The problem arises with asynchronous errors, because the system loses the context of the error.
For example, setTimeout() works asynchronously by scheduling the callback in the future. It’s similar to an asynchronous function which makes an HTTP request, reads from a database or writes to a file:
try {
  setTimeout(function () {
    throw new Error('Fail!')
  }, Math.round(Math.random() * 100))
} catch (e) {
  console.log('Custom Error: ' + e.message)
}
The try/catch is no longer in scope when the callback is executed, so the application crashes. Of course, if you put another try/catch in the callback, it will catch the error, but that’s not a good solution. Those pesky async errors are harder to handle and debug. try/catch is not good enough for asynchronous code.
So async errors crash our apps. How do we deal with them? You’ve already seen that there’s an error argument in most callbacks. Developers need to check for it and bubble it up (pass it up the callback chain or output an error message to the user) in each callback:
if (error) return callback(error)
// or
if (error) return console.error(error)
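Put together, a bubbling chain might look like this (a sketch; the function and file names are made up):
var fs = require('fs')
function readConfig(path, callback) {
  fs.readFile(path, 'utf8', function (error, data) {
    if (error) return callback(error) // bubble the error up, don't throw
    callback(null, data)
  })
}
readConfig('config.json', function (error, data) {
  if (error) return console.error(error) // top of the chain: report it
  console.log('config loaded:', data)
})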
Other best practices for handling async errors are as follows:
- Listen to all “on error” events
- Listen to uncaughtException
- Use domain (softly deprecated) or AsyncWrap
- Log, log, log & trace
- Notify (optional)
- Exit & restart the process
on('error')
Listen to all on('error') events which are emitted by most of the core Node.js objects, and especially http. Also, anything that inherits from or creates an instance of Express.js, LoopBack, Sails, Hapi, etc. will emit error, because these frameworks extend http.
server.on('error', function (err) {
  console.error(err)
  process.exit(1)
})
uncaughtException
Always listen to uncaughtException on the process object! uncaughtException is a very crude mechanism for exception handling. An unhandled exception means your application (and by extension Node.js itself) is in an undefined state. Blindly resuming means anything could happen.
process.on('uncaughtException', function (err) {
  console.error('uncaughtException: ', err.message)
  console.error(err.stack)
  process.exit(1)
})
or
process.addListener('uncaughtException', function (err) {
  console.error('uncaughtException: ', err.message)
  console.error(err.stack)
  process.exit(1)
})
Domain
Domain has nothing to do with the web domains that you see in the browser. domain is a Node.js core module that handles asynchronous errors by saving the context in which the asynchronous code is implemented. A basic usage of domain is to instantiate it and put your crashy code inside the run() callback:
var domain = require('domain').create()
domain.on('error', function (error) {
  console.log(error)
})
domain.run(function () {
  throw new Error('Failed!')
})
domain is softly deprecated as of Node 4.0, which means the Node core team will most likely separate domain from the platform, but there are no alternatives in core as of now. Also, because domain has strong support and usage, it will live on as a separate npm module, so you can easily switch from the core module to the npm module. This means domain is here to stay.
Let’s make the error asynchronous by using the same setTimeout():
// domain-async.js:
var d = require('domain').create()
d.on('error', function (e) {
  console.log('Custom Error: ' + e)
})
d.run(function () {
  setTimeout(function () {
    throw new Error('Failed!')
  }, Math.round(Math.random() * 100))
})
The code won’t crash! We’ll see a nice “Custom Error” message from the domain’s error event handler, not a typical Node stack trace.
C++ Addons
The reason why Node became popular with hardware, IoT and robotics is its ability to play nicely with low-level C/C++ code. So how do we write a C/C++ binding for your IoT, hardware, drone, smart devices, etc.?
This is the last core feature of this essay. Most Node beginners don’t even think you can write your own C++ addons! In fact, it’s so easy that we’ll do it from scratch right now.
First, create the hello.cc file, which has some boilerplate imports at the beginning. Then, we define a method which returns a string and export that method.
#include <node.h>
namespace demo {
using v8::FunctionCallbackInfo;
using v8::HandleScope;
using v8::Isolate;
using v8::Local;
using v8::Object;
using v8::String;
using v8::Value;
void Method(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();
  args.GetReturnValue().Set(String::NewFromUtf8(isolate, "capital one")); // String
}
void init(Local<Object> exports) {
  NODE_SET_METHOD(exports, "hello", Method); // Exporting
}
NODE_MODULE(addon, init)
}
Even if you are not an expert in C++, it’s easy to spot what is happening here, because the syntax is not that foreign to JavaScript. The string is capital one:
args.GetReturnValue().Set(String::NewFromUtf8(isolate, "capital one"));
And the exported name is hello:
void init(Local<Object> exports) {
  NODE_SET_METHOD(exports, "hello", Method);
}
Once hello.cc is ready, we need to do a few more things. One of them is to create binding.gyp, which has the source code file name and the name of the addon:
{
  "targets": [
    {
      "target_name": "addon",
      "sources": [ "hello.cc" ]
    }
  ]
}
Save binding.gyp in the same folder as hello.cc and install node-gyp:
$ npm install -g node-gyp
Once you’ve got node-gyp, run the configure and build commands in the same folder in which you have hello.cc and binding.gyp:
$ node-gyp configure
$ node-gyp build
The commands will create a build folder. Check for the compiled .node file in build/Release/.
Lastly, create the Node.js script hello.js and include your C++ addon:
var addon = require('./build/Release/addon')
console.log(addon.hello()) // 'capital one'
To run the script and see our string capital one, simply use:
$ node hello.js
There are more C++ addons examples at https://github.com/nodejs/node-addon-examples.
Summary
The code to play with is on GitHub. If you liked this post, leave a comment below. If you are interested in Node.js patterns like observer, callback and Node conventions, take a look at my essay Node Patterns: From Callbacks to Observer.
I know it’s been a long read, so here’s a 30-second summary:
- Event loop: Mechanism behind Node’s non-blocking I/O
- Global and process: Global objects and system information
- Event Emitters: Observer pattern of Node.js
- Streams: Large data pattern
- Buffers: Binary data type
- Clusters: Vertical scaling
- Domain: Asynchronous error handling
- C++ Addons: Low-level addons
Most of Node is JavaScript except for some core features which mostly deal with system access, globals, external processes and low-level code. If you understand these concepts (feel free to save this article and re-read it a few more times), you’ll be on a quick and short path to mastering Node.js.