Find Me on Medium December 24, 2016

I will continue blogging on Medium. I will remove this blog soon and revamp this site.

Understanding node's possible eventemitter leak error message April 25, 2015

In node.js and io.js, you'll eventually see this error message:

(node) warning: possible EventEmitter memory leak detected. 11 a listeners added. Use emitter.setMaxListeners() to increase limit.

When would a leak actually occur?

A leak occurs when you continuously add event handlers without removing them. This typically happens when you reuse a single emitter instance many times. Let's make a function that returns the next value in a stream:

function next(stream) {
  // if the stream has data buffered, return that
  {
    let data = stream.read()
    if (data) return Promise.resolve(data)
  }

  // if the stream has already ended, return nothing
  if (!stream.readable) return Promise.resolve(null)

  // wait for data
  return new Promise(function (resolve, reject) {
    stream.once('readable', () => resolve(stream.read()))
    stream.on('error', reject)
    stream.on('end', resolve)
  })
}

Every time you call next(stream), you add a handler on readable, error, and end. On the 11th next(stream) call, you'll get the error message:

(node) warning: possible EventEmitter memory leak detected. 11 error listeners added. Use emitter.setMaxListeners() to increase limit.

You've continuously added handlers to error and end, but have not removed them, even if data was successfully read and those handlers are no longer relevant.
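To see it in action, imagine calling next(stream) several times in a row before any data arrives (the file name here is hypothetical). Each call adds a fresh set of listeners, and the 11th trips the warning:

var fs = require('fs')
var stream = fs.createReadStream('some-file.txt') // hypothetical file

// no data has arrived yet, so every call falls through to the promise
// and adds another set of 'readable', 'error', and 'end' listeners
for (var i = 1; i <= 11; i++) next(stream)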

Cleaning up your event handlers

The correct way to clean up your handlers is to make sure that, by the time the promise settles, a net of zero event handlers has been added:

return new Promise(function (resolve, reject) {
  stream.on('readable', onreadable)
  stream.on('error', onerror)
  stream.on('end', onend)

  // define all functions in scope
  // so they can be referenced by cleanup and vice-versa
  function onreadable() {
    cleanup()
    resolve(stream.read())
  }

  function onerror(err) {
    cleanup()
    reject(err)
  }

  function onend() {
    cleanup()
    resolve(null)
  }

  function cleanup() {
    // remove all event listeners created in this promise
    stream.removeListener('readable', onreadable)
    stream.removeListener('error', onerror)
    stream.removeListener('end', onend)
  }
})

With this method, there will be no event emitter leak: after every promise settles, the net change in event handlers is zero.

Concurrent handlers

What if you want multiple listeners on the same emitter? For example, you may have a lot of functions listening to the same emitter:

doThis1(stream)
doThis2(stream)
doThis3(stream)
doThis4(stream)
doThis5(stream)
doThis6(stream)
doThis7(stream)
doThis8(stream)
doThis9(stream)
doThis10(stream)
doThis11(stream)
doThis12(stream)
doThis13(stream)

If all the functions above add handlers to the data event, you're going to get the same leak error message, but you know there isn't an actual leak. At this point, you should set the maximum number of listeners accordingly:

return new Promise(function (resolve, reject) {
  // increase the maximum number of listeners by 1
  // while this promise is in progress
  stream.setMaxListeners(stream.getMaxListeners() + 1)
  stream.on('readable', onreadable)
  stream.on('error', onerror)
  stream.on('end', onend)

  function onreadable() {
    cleanup()
    resolve(stream.read())
  }

  function onerror(err) {
    cleanup()
    reject(err)
  }

  function onend() {
    cleanup()
    resolve(null)
  }

  function cleanup() {
    stream.removeListener('readable', onreadable)
    stream.removeListener('error', onerror)
    stream.removeListener('end', onend)
    // this promise is done, so we lower the maximum number of listeners
    stream.setMaxListeners(stream.getMaxListeners() - 1)
  }
})

This allows you to acknowledge the limit and keep your event handling under control, while still letting node.js print a warning if an actual leak occurs.

Help write better code!

If you simply call .setMaxListeners(0), then you may be unknowingly leaking. If you see any code (especially open source) that uses .setMaxListeners(0), make a pull request to fix it! Don't take shortcuts!
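For comparison, here's a sketch of the shortcut versus the bookkeeping approach (emitter stands in for whatever EventEmitter you're managing):

// don't do this: a limit of 0 means "unlimited",
// so node can never warn you about a real leak
emitter.setMaxListeners(0)

// do this instead: raise the limit by exactly the number of
// listeners you're adding, and lower it again when you clean up
emitter.setMaxListeners(emitter.getMaxListeners() + 1)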

Wildcard Routing is an Anti-Pattern April 21, 2015

Probably the most cringe-worthy pattern I see when creating apps is wildcard routing. With the next version of Express changing how wildcards are supported, many people have already begun complaining.

const error = require('http-errors')

app.use('/thing/:id/*', function (req, res, next) {
  Thing.get(req.params.id).then(function (thing) {
    if (!thing) throw error(404)
    req.thing = thing
  }).then(next, next)
})

What if the route doesn't exist?

What if a user GETs /thing/:id/lakjsdflkajsdlkfjasdf? What would actually happen is:

  1. Match /thing/:id/*
  2. Fetch the thing from the database and set req.thing
  3. Look for the next match
  4. Return 404 (because there are no matches)

Your app has executed an extra database call when it didn't have to. Ideally, it would have simply:

  1. Match all routes
  2. Return 404 because none match

Except that isn't possible with wildcard routing, because the wildcard route matches anyway.

What if it's more than just a database call?

If you add a multipart parser that downloads files to disk like this:

app.use(bodyParser.multipart())

If an attacker simply posts a bunch of files to ANY route, your server will quickly be flooded with files and run out of disk space. This is one of the main reasons the multipart parser was removed from Connect and Express. Because how multipart should be implemented is contentious, it's better not to provide a parser at all.

Only define routes that are actually served

Your routes should really just look like:

app.route('/things')
.get(/* handlers */)
.post(/* handlers */)

app.route('/thing/:id([0-9a-f]{24})/comments')
.get(/* handlers */)
.post(/* handlers */)

The router itself knows whether a path 404s and resolves the routes accordingly. Unfortunately, this is annoying to do as your app will quickly look like this:

app.route('/things')
.post(authenticate(), bodyParser.multipart(), function (req, res, next) {
  // only download files after authenticating
})

app.route('/things/:id')
.patch(authenticateUser, getThing, bodyParser.json(), function (req, res, next) {
  // update the thing
})
.get(getThing, function (req, res, next) {
  res.json(req.thing)
})

The Koa way

It becomes very complex and ugly to route properly in Express because of the extensive use of callbacks. Having every route nest 10 callbacks simply isn't an option. But with Koa or any future async/await framework, things will be much easier when your code looks like this:

const parse = require('co-body')

// NOTE: koa doesn't actually provide a router
app.post('/things', function* (next) {
  let user = yield authenticate(this)
  this.assert(user, 401, 'You must be signed in!')
  this.assert(user.permission.post, 403, 'You cannot post!')
  let body = yield parse(this)
  this.assert(body, 400, 'Invalid body!')
  this.assert(body.name, 422, 'Invalid name!')
  let thing = yield Thing.create(body)
  this.body = thing
})

Conclusion

Try to avoid using wildcard routing. If you're using Express, prepare for some possible breaking changes in 5.x. If you're using a trie-based router, don't even bother with wildcards.

Generator and Promise Tricks July 18, 2014

When ditching callbacks for promises or generators, life suddenly becomes much easier. You don't have to worry as much about error handling. There are no pyramids of doom. At first you'll be confused by how promises and generators work, but eventually you'll be able to use them with expertise!

.map(function* () {})

Some people, including me until recently, see function* () {} as a magical function. They don't correlate function* () {} with regular functions. However, function* () {} is in fact a regular function! The only real difference between function* () {} and function () {} is that the former returns a generator. That means you can pass function* () {} basically anywhere a regular function can go; just make sure you realize that a generator is returned.

For example, to execute in parallel, you might do something like this:

co(function* () {
  var values = [] // some values
  yield values.map(function (x) {
    return somethingAsync(x)
  })
})()

function* somethingAsync(x) {
  // do something async with x, then return the result
  return x
}

When you can just do:

co(function* () {
  var values = []
  yield values.map(somethingAsync)
})()

Similarly, you can return promises!

co(function* () {
  var values = []
  var mappedValues = yield values.map(toPromise)
})()

function toPromise(x) {
  return new Promise(function (resolve, reject) {
    resolve(x)
  })
}

Wrapping Generators

As such, you can wrap generator functions in regular functions. This:

function* wrappedFn(x) {
  return yield* somethingAsync(x)
}

can really just be:

function wrappedFn(x) {
  return somethingAsync(x)
}

And in both cases, you'll be able to do yield* wrappedFn().
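For example, inside co (and assuming somethingAsync only yields co-compatible values), either version above can be consumed the same way:

co(function* () {
  // delegate into the generator returned by either wrappedFn
  var result = yield* wrappedFn(42)
  console.log(result)
})()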

.then = (resolve, reject) =>

One thing I like about the promise specification is that you can literally convert anything into a promise. As far as consumers are concerned, a promise is just any object with a .then() method (a "thenable").

For example, suppose you have an object stream, and you want .then() to exhaust the stream and return all the objects of the stream.

var toArray = require('stream-to-array')
stream.then = function (resolve, reject) {
  return toArray(this).then(resolve, reject)
}

Now you can simply do stream.then() or even yield stream!

co(function* () {
  var docs = yield stream
})()

Keep in mind that you need to pass resolve and reject to the real .then() method, which you're just proxying to.
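If you find yourself doing this a lot, the pattern is easy to wrap in a tiny helper (the names here are hypothetical):

// attach a .then() that proxies to a real promise produced by `toPromise`
function makeYieldable(obj, toPromise) {
  obj.then = function (resolve, reject) {
    return toPromise(obj).then(resolve, reject)
  }
  return obj
}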

Even better, you can create an iterator out of this!

function Iterator() {
  this.i = 0
}

Iterator.prototype.then = function (resolve, reject) {
  return Promise.resolve(this.i++).then(resolve, reject)
}

co(function* () {
  var iterator = new Iterator()
  var i
  while ((i = yield iterator) < 100) {
    // do something
  }
})()

Basically, just tack on a .then() method to any object you'd like, and you've created a 'yieldable'!

Semver has failed us June 19, 2014

As a maintainer of many popular packages such as express, I, along with the other maintainers, have come to realize that semantic versioning simply does not work in practice. It has caused a lot of bikeshedding, creating non-constructive discussions in the repository. It generally makes development more difficult.

Recently, some people have put forward ideas about how to improve versioning. I first proposed getting rid of versions < 1 altogether, as 0.x.x modules are inherently unstable due to their lack of specification. Damon Oehlman proposed slimver, a stricter variant of semantic versioning.

I'm proposing ferver, which changes the semantics of semver to be more practical with breaking changes. You can read more about it on the GitHub page, but first, let's talk about the failures of semver.

Patches break

One of the major issues with semver is that patches can break. A feature could be "fixed", but a consumer could have relied on that very bug, and patching it would break their app. A common example is when a library does not behave according to a specification and a patch changes it to conform.

An example is Express 3.4.3. A bug with redirects was fixed, but some users were relying on that bug. Thus, even though it was a patch (3.4.2 -> 3.4.3), it broke some people's apps. A user asked us to at least bump the minor version, but if we strictly adhere to semver, we can't, because it's a patch, not a new feature.

I greatly sympathize with this user, and this particular case is essentially the first time I realized, "semver doesn't work".

0.x.x is anarchy

Versions < 1.0.0 have no defined semantics according to semver. These are considered libraries "in development", and developers can version however they see fit. Thus, developers bump the minor or patch numbers however they like. It's anarchy.

The problem isn't that versions less than 1.0.0 are allowed. Nope, it makes sense for libraries to be able to make breaking changes before declaring a stable 1.0.0. The problem is that there are no semantics to 0.y.z versions. Consumers simply don't know how to depend on these modules using version ranges without introducing a significant amount of risk into their apps.

Pinned dependencies

The current solution to the above problems is to pin all dependencies. However, this is absolutely stupid and annoying to me. If you're pinning versions, you're reducing the package manager to a glorified curl.

It makes maintenance very difficult and annoying. Duplicate dependencies are bad, especially in frontend development where file size matters, which is one reason frontend developers prefer Bower's flatter dependency directory over npm's nested one. When every library pins, you're going to have a lot of duplicate dependencies, even if they're the same version! Not everyone has the time to update every patch and make a new release.

Even if you control the dependencies, some people like pinning them. To me, this is absolutely silly, but it is necessary because patches can break. For example, if you look at the 2.x branch of Connect, you'll see that all the commits are just dependency updates. However, none of them break backwards compatibility because Connect adheres to semver. These updates should not be necessary and should be available just by typing npm update.

Slower development

If you look at Express' current issues, you'll see that half of them are planned for the next major version, 5.0.0:

Express Issues

The problem is that fixing these relatively minor issues would be backwards incompatible, so they are nevertheless issues that consumers have to live with until v5.0.0.

For example, "Update path regexp functionality" would break routes for the very few people who write really weird routes, but it introduces many new features and provides better semantics. It brings a lot of benefits to most developers while introducing risk for only a few.

Ideally, these changes should happen as fast as possible, but in a way that tells consumers, "Hey, this is new and improved, but it might break your app. Proceed with caution." There's no way to say that with semver except with major version releases.

Prereleases

Semver does not have a good scheme for prereleases. People append all sorts of weird strings to their versions: 1.0.0-beta1, 1.0.0-3.2.3.1. Who knows what these mean. It only makes libraries more difficult to consume and confuses consumers.

Since semver is liberal with these suffixes, package managers like npm have trouble dealing with them. For example, if you use 1.5.0-beta1 of a library and the latest release is 1.4.0, npm outdated will mark 1.5.0-beta1 as outdated. Yeah, I don't think that's outdated.

A good versioning system would allow for prereleases and beta builds while still cohering to x.y.z versioning. It would also be able to allow consumers to distinguish between prereleases and releases semantically.

The fear of x.0.0

Many developers never release v1.0.0 of their projects. With semver, this is really annoying because you have to pin to reduce the risk of breaking changes.

But others absolutely hate when libraries update the major version. They see it as a sign of "instability", but according to semver, these backwards-incompatible changes could be something as insignificant as returning null instead of undefined, which wouldn't break most people's apps.

The problem here is that people don't associate major versions with "breaking changes". They associate it with the character, purpose, and philosophy of a library. 0.x.x means "We don't know what we're doing". 1.0.0 means "We think we know the direction of this library.". 2.0.0 means "We're changing directions a little bit". 3.0.0 means "We're changing directions a little bit, again".

Semver simply has the wrong semantics. Not every breaking change is a fundamental difference in a library's character. People are okay if you break things here and there, but it must be easy for them to know when you break something. With semver, the only way to signal that is a major version bump.

Solving semver

There are two ways to solve semver: bump major versions often, or use a different versioning scheme.

Currently, I release most new modules I write as 1.0.0 and liberally bump major versions. For example, koa-session is already at 2.0.0, but koa hasn't even reached 1.0.0. Hell, co has already reached 3.0.6 and ES6 isn't even finalized.

This is why I proposed having semver drop versions < 1. Who cares if you're at version 36 like Chrome? I just want to know if something will break! But this is not suitable for most people since, due to semver's semantics, they correlate a lot of major version bumps with the library being "unstable".

The other solution is to just use a different versioning scheme. This is what ferver is: versioning based on whether a change is breaking. Please don't use it, though. It's only a thought.

Why You Should and Shouldn't Use Koa May 5, 2014

There's a new node.js framework in town, and its name is Koa. It's the spiritual successor to Connect and Express, written by the same author, TJ Holowaychuk. It has a very similar middleware system, but is completely incompatible with any other node.js framework.

Koa is bleeding edge and has not yet reached version 1.0, but many people, including TJ and myself, have already ditched Express for Koa. TJ himself has stepped back from maintaining Connect and Express and has instead delegated maintenance to a team, myself included. Don't worry about using Connect or Express; they will still be maintained!

So why should you and shouldn't you ditch Express for Koa like TJ and I have?

Why you should

Superior, callback-less control flow

Thanks to Koa's underlying generator engine co, there's no more callback hell. Of course, this is assuming you write your libraries using generators, promises, or thunks.

But co's control flow handling isn't just about eliminating callbacks. You can also execute multiple asynchronous tasks in parallel and in series without calling any helper functions.

app.use(function* () {
  yield [fn1, fn2, fn3]
})

Bam! You've just executed three asynchronous functions in parallel. You've eliminated the need for any other control flow library such as async, and you don't have to require() anything.
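The series case mentioned above is just as clean: yield each task one after another. A minimal sketch, assuming fn1, fn2, and fn3 are yieldables (thunks, promises, or generators):

app.use(function* () {
  // in series: each yield waits for the previous task to finish
  var a = yield fn1
  var b = yield fn2
  var c = yield fn3
})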

Superior middleware error handling

Thanks to co, you can simply use try/catch instead of node's if (err) callback(err) type error handling. You can see this in the error handling examples in Koa:

app.use(function* (next) {
  try {
    yield* next
  } catch (err) {
    console.error('an error occurred! writing a response')
    this.response.status = 500
    this.response.body = err.message
  }
})

Instead of adding an error handling middleware via app.use(function (err, req, res, next) {}) which barely works correctly, you can finally simply use try/catch. All errors will be caught, unless you throw errors on different ticks like so:

app.use(function* () {
  setImmediate(function () {
    throw new Error('this will not be caught by koa '
      + 'and will crash your process')
  })
})

Don't do that! Write your code in generators, promises, or unnested callbacks and you'll be fine.
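If you really do need something like setImmediate, wrap it in a yieldable (a thunk in this sketch) so the error surfaces inside the generator where try/catch can see it:

app.use(function* () {
  yield function (done) {
    setImmediate(function () {
      // passing the error to the callback lets co propagate it,
      // so an upstream try/catch middleware will catch it
      done(new Error('this WILL be caught by koa'))
    })
  }
})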

Superior stream handling

Suppose you want to stream a file to the response with gzip. In vanilla node.js, it'll look like this:

var http = require('http')
var fs = require('fs')
var zlib = require('zlib')

http.createServer(function (req, res) {
  // set the content headers
  fs.createReadStream('filename.txt')
  .pipe(zlib.createGzip())
  .pipe(res)
})

However, you haven't handled any errors. It should look more like this:

http.createServer(function (req, res) {
  // set the content headers
  fs.createReadStream('filename.txt')
  .on('error', onerror)
  .pipe(zlib.createGzip())
  .on('error', onerror)
  .pipe(res)

  function onerror(err) {
    console.error(err.stack)
  }
})

But if you use this method, you'll still get memory leaks when clients abort the request. This is because close events on the final destination stream are not propagated back through the pipe()s to the original stream. You need to use something like finished, otherwise you'll leak file descriptors. Thus, your code should look more like:

var finished = require('finished')

http.createServer(function (req, res) {
  var stream = fs.createReadStream('filename.txt')

  // set the content headers
  stream
  .on('error', onerror)
  .pipe(zlib.createGzip())
  .on('error', onerror)
  .pipe(res)

  finished(res, function () {
    // make sure the stream is always destroyed
    stream.destroy()
  })
})

Since you've handled all your errors, you won't need to use domains. But look at it: it's so much code just to send a file. Express also does not handle the close event, so you'll always need to use finished as well:

app.use(require('compression')())
app.use(function (req, res) {
  // set content headers
  var stream = fs.createReadStream('filename.txt')
  stream.pipe(res)
  finished(res, function () {
    stream.destroy()
  })
})

How would this look in Koa?

app.use(require('koa-compress')())
app.use(function* () {
  this.type = 'text/plain'
  this.body = fs.createReadStream('filename.txt')
})

Since you simply pass the stream to Koa instead of directly piping, Koa is able to handle all these cases for you. You won't need to use domains as no uncaught exceptions will ever be thrown. Don't worry about any leaks as Koa handles that for you as well. You may treat streams essentially the same as strings and buffers, which is one of the main philosophies behind Koa's abstractions.

In other words, Koa tries to fix all of node's broken shit. For example, this case is not handled:

app.use(function* () {
  this.body = fs.createReadStream('filename.txt')
    .pipe(zlib.createGzip())
})

Don't ever use .pipe() unless you know what you're doing. It's broken. Let Koa handle streams for you.

Concise code

Writing apps and middleware for Koa is generally much more concise than for any other framework. There are many reasons for this.

The first and obvious reason is the use of generators to remove callbacks. You're no longer creating functions everywhere, just yielding. There's no more nested code to deal with.

Many of the small HTTP utilities in the expressjs organization are included with Koa, so when writing applications and middleware, you don't need to install many third party dependencies.

The last and, I think, most important reason is that Koa abstracts node's req and res objects, avoiding any "hacks" required to make things work.
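For a taste of what that abstraction looks like in practice, here's a trivial sketch:

app.use(function* () {
  // no res.writeHead()/res.end() juggling and no property hacks:
  // just set what you mean on the koa context
  this.set('Cache-Control', 'no-cache')
  this.type = 'json'
  this.body = { message: 'hello' }
})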

Better written middleware

Part of what makes Connect and Express great is their middleware ecosystem. But what I greatly dislike about this ecosystem is that the middleware are generally terrible. There are many reasons for this, aside from the inverse of the points above.

Express is similar to Koa in that many utilities are included. This should make writing middleware for Express almost as easy as Koa, but if you're writing middleware for Express, you might as well make it compatible with node.js and any other app.use(function (req, res, next) {}) framework. Supporting only Express at that point is silly. However, you'll end up with a lot of tiny dependencies, which is annoying. Koa middleware on the other hand is completely incompatible with node.js.

Express uses node's original req and res objects. Properties have to be overwritten for middleware to work properly. For example, if you look at the compression middleware, you'll see that res.write() and res.end() are being overwritten. In fact, a lot of middleware are written like this. And it's ugly.
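The pattern generally looks something like this (a simplified sketch, not the actual compression source):

app.use(function (req, res, next) {
  var write = res.write
  var end = res.end

  res.write = function (chunk, encoding) {
    // transform or buffer the chunk, then call the original method
    return write.call(res, chunk, encoding)
  }

  res.end = function (chunk, encoding) {
    // flush anything buffered, then call the original method
    return end.call(res, chunk, encoding)
  }

  next()
})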

Thanks to Koa's abstraction of node's req and res objects, this is not a problem. Look at koa-compress source code and tell me which one is more concise and readable. Unlike Express, the compression stream's errors are actually handled as well and pipe() is actually used internally.

Then there's the fact that asynchronous functions' errors are simply logged instead of handled. Developers are not even given a choice. This is not a problem with Koa! You can handle all the errors!

Although we're going to have to recreate the middleware ecosystem for Koa from the ground up, I believe that all Koa middleware are fundamentally better than any other frameworks'.

Why you shouldn't

Generators are confusing

There are two programming concepts you have to learn to get started with Koa. The first is generators, obviously, and generators are actually quite complicated. In fact, any control flow mechanism, including promises, is going to be confusing for beginners. Unlike promises, co is not based on a specification, so you have to learn both how generators work and how co works.

You also need to understand how this works. It becomes much more important because Koa uses this to pass data around instead of node's req and res objects. You may want to read yield next vs yield* next.

Generators are not supported out of the box

There are currently two ways to use generators in node.js.

The first is to use v0.11, an unstable version of node, with the --harmony-generators flag. For many people and companies, running an unstable version is unacceptable, especially since many C/C++ addons don't work with v0.11 yet. And since you need to explicitly set the --harmony-generators flag, creating and using executables is also more difficult.

The second way to use generators is by using gnode. The problem with this is that it's really slow. It basically transpiles all files with generators when require()ing. I tried this before, and it took about 15 seconds for my app to even start. This is unacceptable during development.

We're going to have to wait until node v0.14 or v1 to be able to use generators without any flags. Until then, you're going to be inconvenienced one way or another.

Documentation is sparse

Koa is pretty new, and TJ and I just don't have the time to write thorough documentation. Some things are still subject to change, so we don't want to be too thorough or we'd confuse people down the road. It's also radically different from other frameworks, so we have to explain both the philosophy and the technical details, otherwise developers are going to get lost.

There have been a few blog posts, but in my opinion they don't explain Koa well enough. The goal of this blog post is to explain more of the benefits rather than the philosophy or the technical details. If you want to know more about the philosophy, watch as I write my Koa talk.

yield next vs. yield* next January 9, 2014

One question a couple of people have asked is, "What is the difference between yield next and yield* next? Why yield* next?" We intentionally do not use yield* next in examples to keep new users from asking this question, but it will inevitably be asked. Unfortunately, there aren't any very good explanations of these "delegating yields", as generators are relatively new. Although Koa uses it internally for "free" performance, we don't advocate it, to avoid confusion.

For specifications, view the harmony proposal.

What does delegating yield do?

Suppose you have two generators:

function* outer() {
  yield 'open'
  yield inner()
  yield 'close'
}

function* inner() {
  yield 'hello!'
}

If you iterate through outer(), what values will that yield?

var gen = outer()
gen.next() // -> 'open'
gen.next() // -> a generator
gen.next() // -> 'close'

But what if you yield* inner()?

var gen = outer()
gen.next() // -> 'open'
gen.next() // -> 'hello!'
gen.next() // -> 'close'

In fact, the following two functions are essentially equivalent:

function* outer() {
  yield 'open'
  yield* inner()
  yield 'close'
}

function* outer() {
  yield 'open'
  yield 'hello!'
  yield 'close'
}

Essentially, the yield* is replaced inline by the delegated generator's yields!

What does this have to do with co or koa?

Generators are confusing enough as it is. It doesn't help that koa's generators are driven by co to handle control flow. Many people are and will be confused about which features are native to generators and which come from co.

So suppose you have the following generators:

function* outer() {
  this.body = yield inner
}

function* inner() {
  yield setImmediate
  return 1
}

What is essentially happening here is:

function* outer() {
  this.body = yield co(function* inner() {
    yield setImmediate
    return 1
  })
}

There's an extra co call here. But if we use delegation, we can skip the extra co call:

function* outer() {
  this.body = yield* inner()
}

Essentially becomes:

function* outer() {
  yield setImmediate
  this.body = 1
}

Each co call creates a few closures, so there's a tiny bit of overhead. It isn't much to worry about, but with a single *, you can avoid it and rely on a native language feature instead of a third-party library called co.

How much faster is this?

Here's a link to a discussion we had a while ago about this topic: https://github.com/koajs/compose/issues/2. You won't see much performance difference (at least in our opinion), especially since your actual application code will slow down these benchmarks significantly. Thus, it isn't worth advocating, but it is worth using internally.

What's interesting is that with yield* next, Koa is currently faster than Express in these "silly benchmarks": https://gist.github.com/jonathanong/8065724. Koa doesn't use a dispatcher, unlike Express, which uses multiple (one from Connect, one for the router).

With delegating yield, Koa essentially "unwraps" this:

app.use(function* responseTime(next) {
  var start = Date.now()
  yield* next
  this.set('X-Response-Time', Date.now() - start)
})

app.use(function* poweredBy(next) {
  this.set('X-Powered-By', 'koa')
  yield* next
})

app.use(function* pageNotFound(next) {
  yield* next
  if (!this.status) {
    this.status = 404
    this.body = 'Page Not Found'
  }
})

app.use(function* (next) {
  if (this.path === '/204')
    this.status = 204
})

Into this:

co(function* () {
  var start = Date.now()
  this.set('X-Powered-By', 'koa')
  if (this.path === '/204')
    this.status = 204
  if (!this.status) {
    this.status = 404
    this.body = 'Page Not Found'
  }
  this.set('X-Response-Time', Date.now() - start)
}).call(new Context(req, res))

This is ideally how a web application should look if we weren't so lazy. The only overhead is the initialization of a single co instance and our own Context constructor, which wraps node's req and res objects for convenience.

Using it for type checking

If you yield* something that isn't a generator, you'll get an error like the following:

TypeError: Object function noop(done) {
  done();
} has no method 'next'

This is because essentially anything with a next method is considered a generator!

I like this because, by default, I assume I'm doing yield* gen(). I've rewritten a lot of my code to use generators. If I see something that isn't written as a generator, I'll think to myself, "can I make this simpler by converting it to a generator?"

Of course, this may not be applicable to everyone. You may find other reasons you would want to type check.

Contexts

co calls all continuables or yieldables with the same context. This becomes particularly annoying when you yield a function that needs a different context. For example, constructors!

function Thing() {
  this.name = 'thing'
}

Thing.prototype.print = function (done) {
  var self = this
  setImmediate(function () {
    done(null, self.name)
  })
}

// in koa
app.use(function* () {
  var thing = new Thing()
  this.body = yield thing.print
})

What you'll find is that this.body is undefined! This is because co is essentially doing this:

app.use(function* () {
  var thing = new Thing()
  this.body = yield function (done) {
    thing.print.call(this, done)
  }
})

and thus, this refers to the Koa context, not thing.

This is where yield* comes in! When context is important, you should be doing one of two things:

yield* context.generatorFunction()
yield context.function.bind(context)

By using this strategy, you'll avoid 99% of generator-based context issues. So avoid doing yield context.method!
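Applied to the Thing example above, the bind version looks like this:

app.use(function* () {
  var thing = new Thing()
  // bind the thunk to the thing so `this` inside print() is the thing
  this.body = yield thing.print.bind(thing)
})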

Salvation not through Jesus Christ March 20, 2013

Recently, I had a Facebook argument with a stranger on a mutual friend's wall. He, as well as many Christians, believes that to attain salvation, you must believe in Jesus Christ, you must have faith in Him, and you must believe He is the Son of God. But Jesus Himself said otherwise.

“He that hath my commandments, and keepeth them, he it is that loveth me: and he that loveth me shall be loved by my Father, and I will love him, and will manifest Myself to him.” - John 14:21 (King James Version)

"Jesus answered, "If anyone loves Me, he will keep My word. My Father will love him, and We will come to him and make Our home with him." - John 14:23 (King James Version)

These two Bible verses, which quote Jesus Himself and not His disciples, explicitly define the following truths (assuming Jesus is the truth):

  • If you keep/obey His commandments, you love Jesus.
  • If you keep/obey His commandments, the Father and Jesus will love you and show Themselves to you.
  • If you love Him, you will keep His word/commandments/teachings.
  • If you love Him, the Father will love you.
  • If you love Him, you will share a home with the Father and Jesus.

What can we conclude? If you obey His commandments, you will attain salvation. He never implies that any type of faith or belief in Him is a requirement for salvation, and, if you are a Christian, only the words Jesus says are relevant.

So the big question is, "What are His Commandments?"

"Jesus said unto him, Thou shalt love the Lord thy God with all thy heart, and with all thy soul, and with all thy mind. This is the first and great commandment. And the second is like unto it, Thou shalt love thy neighbour as thyself. On these two commandments hang all the law and the prophets." - Matthew 22:37-40 (King James Version)

The first and greatest commandment is to love God, but by the first two verses I quoted, loving God is just following His commandments. So to love God is to follow His commandments, and to follow His commandments is to love God, which is circular.

Thus, the only law you have to follow is the Golden Rule. If you obey the Golden Rule with all your heart, all your soul, and all your mind, then you are obeying God's commandments with all your heart, all your soul, and all your mind, then you are loving God with all your heart, all your soul, and all your mind, and God will return His love to you, and you will attain salvation.

Therefore, by Christian law, many non-Christians will attain salvation because they live by the Golden Rule, the only law Jesus commanded us to obey. Inversely, by their own law, many Christians will not attain salvation because they do not obey the Golden Rule - they love discriminately. Similarly, any law that does not follow the Golden Rule is unjust, and any "prophet" who does not follow the Golden Rule is a false prophet.