I completely agree that “premature optimization is the root of all evil (or at least most of it) in programming”. But it does no harm to know the bits of code you write and how they affect the environment you are writing for.
This post is about learning some of these bits of JavaScript, especially in a Node.js environment running on the V8 engine. It is not meant to encourage micro-optimization: business code should still prioritize readability and maintainability.
So what exactly are we looking at? For the rest of this article, we compare the ES6 destructuring assignment in function parameters with the traditional way of passing parameters, from the perspective of its impact on CPU and memory performance.
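For concreteness, here is a minimal sketch of the two styles under comparison (the function names are illustrative, not from the original benchmark):

```javascript
// Traditional positional parameters
function addPositional(a, b) {
  return a + b;
}

// ES6 destructuring assignment in the parameter list
function addDestructured({ a, b }) {
  return a + b;
}

console.log(addPositional(1, 2));             // 3
console.log(addDestructured({ a: 1, b: 2 })); // 3
```

Both produce the same result; the question is what V8 has to do under the hood in each case.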
Dissecting the performance from the V8 bytecode.
Let’s look at the bytecode generated for the function parameters in the sample code below.
In the case of ES6’s function destructuring assignment, we can see that the bytecode has increased significantly, from 4 instructions to 19, most of them JumpIfUndefined, CallRuntime, and Throw instructions.
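To reproduce this yourself, V8's Ignition bytecode can be dumped with Node's V8 flags (these are V8 options and may change between versions; the function name here is an assumption for illustration):

```javascript
// Save as add.js and run with:
//   node --print-bytecode --print-bytecode-filter=addDestructured add.js
// The output shows the Ignition bytecode generated for this function,
// including the extra JumpIfUndefined / CallRuntime / Throw instructions
// emitted for the destructuring pattern.
function addDestructured({ a, b }) {
  return a + b;
}

console.log(addDestructured({ a: 1, b: 2 })); // 3
```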
Dissecting the performance with respect to memory.
Generating many function calls to trace the GC.
function parameters

```javascript
function add(a, b) { return a + b }

for (let i = 0; i < 1e8; i++) { const d = add(1, 2) }
// %GetHeapUsage() is a V8 native; run with node --allow-natives-syntax
console.log(%GetHeapUsage())
```
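If you would rather not enable natives syntax, a flag-free sketch of the same measurement can use Node's `process.memoryUsage()` instead of `%GetHeapUsage()` (the iteration count is reduced here just to keep the run short):

```javascript
// Flag-free alternative to %GetHeapUsage(): process.memoryUsage().heapUsed
function add(a, b) { return a + b }

let d = 0;
for (let i = 0; i < 1e7; i++) { d = add(1, 2); }

console.log('result:', d); // 3
console.log('heapUsed (bytes):', process.memoryUsage().heapUsed);
```

The absolute heap numbers will vary by machine and V8 version; what matters is comparing runs of the two parameter styles.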
But does this mean that ES6’s function destructuring assignment allocates more memory? No: most of the memory difference here comes from the overhead of creating a fresh object on every loop iteration.
Because if we test the destructuring version with a constant object created outside the loop, we get hardly any memory difference from the plain function-parameter version.
```javascript
const ob = { a: 1, b: 2 };
for (let i = 0; i < 1e8; i++) { const d = add(ob); }
```
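To see both variants side by side, here is a sketch (heap numbers will vary by machine and V8 version, and V8's escape analysis may eliminate some allocations entirely):

```javascript
function add({ a, b }) { return a + b; }

// Variant 1: a fresh object literal on every iteration (allocates per call)
let r1 = 0;
for (let i = 0; i < 1e6; i++) { r1 = add({ a: 1, b: 2 }); }

// Variant 2: one object hoisted outside the loop (no per-call allocation)
const ob = { a: 1, b: 2 };
let r2 = 0;
for (let i = 0; i < 1e6; i++) { r2 = add(ob); }

console.log(r1, r2); // 3 3
console.log('heapUsed (MB):', (process.memoryUsage().heapUsed / 2 ** 20).toFixed(1));
```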
So we get almost the same memory utilization as in the function-parameter case.
So in the case of ES6’s function destructuring assignment, the memory utilization difference depends directly on how we create the object.
Also, if we look at the verbose GC information for the const d = add({a: 1, b: 2}) case, we see that most of these allocations happen in the new-space region, so the cost of cleaning them up is still relatively small (<1 ms). Depending on the application's use case, though, this may still impact performance, because the garbage collector in V8 is a generational, stop-the-world garbage collector.
So while there is an evident computational penalty to using ES6’s function destructuring assignment, the memory penalty depends on the use case.
At first glance, we might be tempted to optimize prematurely, but for business code we should prefer readability. And even though we have seen the results for the code sample above, there is a very high probability that the final benchmark will differ from project to project in the real world, because V8 performs lots of optimizations through different mechanisms (e.g. escape analysis on destructuring), so in most cases we may never have to worry about this at all.