Thoughts on JavaScript

By Jeffrey Charles

Over the past year I’ve been working with Node.js, Angular, and React, along with some other technologies. This is a post about some of my observations on JavaScript on the back-end and the front-end.

Back-end JavaScript

Here are some benefits of using Node.js:

  • mostly cross-platform
  • JavaScript is mostly straightforward if you avoid bad parts of the language
  • large open source community with many packages on npm
  • minimal boilerplate code required
  • better performance when requests are computationally light, thanks to the event-driven architecture

Here are some disadvantages:

  • co-operative single threaded nature means computationally intensive web requests degrade all other in-flight web requests’ performance
  • programming errors like dereferencing a null variable’s property will bring the entire process down
  • process management is not sufficiently abstracted such that many tools will work on POSIX systems but have subtle errors when run on Windows
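To make the second point concrete, here is a small illustration of the class of error involved. Reading a property of null or undefined throws a TypeError, and if nothing catches it (which is easy to miss in async callbacks), the uncaught exception takes the whole Node process down. The names here are mine, invented for the sketch:

```javascript
// Reading a property of null/undefined throws a TypeError. Caught here for
// demonstration; uncaught (e.g., at the top of an async callback), this same
// error would crash the entire Node process.
function readName(user) {
  return user.name; // throws TypeError when user is null or undefined
}

let result;
try {
  result = readName(null);
} catch (err) {
  result = 'caught: ' + err.name;
}
```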

Many of these disadvantages are not as bad as they first appear. Degraded in-flight requests can be avoided by delegating computationally intensive work either to another service (one that uses a preemptive multitasking architecture) or to a queue worker. Preemptive multitasking systems also suffer from this problem given a sufficient number of requests to something computationally expensive; an example is triggering a regular expression denial of service against a web service that uses regex validation of an incoming string, repeated enough times to occupy all threads in the server’s compute threadpool. Node does exacerbate the problem, though, in that only a single request is necessary as opposed to several.

The process coming down on programmer error can be worked around by having another process monitor the Node process and restart it if it terminates. Good automated testing, centralized logging, and effectively minimizing your mean time to recovery are essential for discovering problems and fixing them quickly.

One process management bug that I’ve seen is the standard error stream being lost on Windows when using exec and binding to the stderr event, whereas using spawn and setting stdio to inherit seems to work more reliably. Another problem I’ve noticed is that when using forever on Windows, child processes are not terminated when forever itself is terminated. At the moment, the best solution seems to be opening GitHub issues when problems like these are spotted.

These tradeoffs make Node an appropriate technology choice in a limited number of scenarios. Build systems, particularly around JavaScript, are one area where it shines. If a library that better abstracts the differences between Windows and POSIX processes were created and used, that would reduce at least some of my current frustrations. Some micro-services where there is light computational work going on (e.g., CRUD, delegation, or service composition) are another area where Node is a decent choice. It’s a nice middle-ground between Nginx and something like Sinatra. The event-driven nature should provide better performance than a number of alternatives and the limited amount of code that would need to be written should reduce development time and maintenance effort. The lightweight nature of these sorts of services means there’s less chance of an expensive request causing cascading failures or of a programming mistake taking the server process down.

Choosing an approach for supervising the Node process to make sure it stays up involves a few factors. If you’re running the application outside of a container, then it’s probably worth using a systemd unit file for this. You should also look into using Nginx as a reverse proxy to avoid running your service as root and to get a decent error page if your service stays down. If you’re running in AWS, an auto scaling group with a simple healthcheck route that immediately sends a response can work; just bear in mind that something running on the instance can restart Node much faster than auto scaling can terminate and launch a new instance. If you’re running in a container (e.g., using Docker) without an init system, consider forever or supervisor, as they’re pretty lightweight compared to pulling in a full init system.
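For the systemd route, a minimal sketch of what such a unit file might look like; every name and path here is a placeholder, not something from this post:

```ini
# Hypothetical /etc/systemd/system/myapp.service -- all names are examples.
[Unit]
Description=My Node application
After=network.target

[Service]
ExecStart=/usr/bin/node /srv/myapp/server.js
Restart=always
RestartSec=2
User=myapp
Environment=NODE_ENV=production

[Install]
WantedBy=multi-user.target
```

Restart=always gives you the monitor-and-restart behavior, and running as a dedicated user pairs with the Nginx reverse proxy advice above.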

Front-end JavaScript

There are a number of competing approaches to front-end development. I initially decided to use Angular because of the size of the community and the impression that I could limit Angular to certain parts of the front-end, leaving other parts free to use something else. For my toy project, I opted to switch Angular out for React and Reflux because of the complexities of Angular’s directives API.

I found working with Angular pretty straightforward for simple pages. The two-way databinding works well with the built-in directives, and you don’t really need to concern yourself much with dependency injection if you aren’t authoring a lot of components. Once you get past simple use cases, though, Angular becomes quite a bit more painful to use. One problem is that its concepts are poorly named: it has factories, services, and providers, where the main difference between them is how they get dependency-injected. I think it would’ve been more straightforward to have a single concept representing an injectable object, with a configuration option adjacent to the object definition for whether to treat the object as a singleton, create a new instance each time, or call a specified method for an instance. Another pain point was authoring directives, which is complicated by the number of concepts you have to be aware of. For example, a directive can have an isolate scope whose bindings are declared with one of @, &, or =, and knowing when each is appropriate to use is tricky. In particular, the = binding can result in really leaky directives, where it can be unclear whether the directive or the thing using the directive owns what’s been passed in. ui-router is another piece of the Angular ecosystem that is just really difficult to grok and has all sorts of gotchas to be aware of. The summary of all of this is that I don’t like Angular.
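For readers who haven’t hit this, here is a sketch of an Angular 1.x directive definition showing the three isolate scope binding types. `myWidget` and its bindings are hypothetical names, and registration assumes Angular 1.x is loaded:

```javascript
// Angular 1.x directive definition object with the three binding types.
function myWidgetDirective() {
  return {
    restrict: 'E',
    scope: {
      label: '@',    // one-way text binding, read from the attribute
      item: '=',     // two-way binding: mutations leak back to the caller
      onSelect: '&'  // expression binding: invokes code in the caller's scope
    },
    template: '<button ng-click="onSelect({item: item})">{{label}}</button>'
  };
}

// Registration would look like:
// angular.module('app').directive('myWidget', myWidgetDirective);
```

The = binding is the leaky one: whatever object the caller passes as `item` is shared mutable state between the two scopes.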

I’ve been really enjoying working with React. I find its APIs far simpler than Angular directives while offering a similar amount of power. React on its own doesn’t provide enough to write a decent application, though; some sort of Flux or Flux-like architecture is usually recommended, with good reason. I opted to try Reflux as my Flux library and have a neutral opinion of it. It eliminates quite a bit of the boilerplate code that comes with Flux. Unfortunately, unlike Flux, it does not allow you to control the order in which your stores’ action listeners are executed. The implication is that you cannot define stores which are dependent on each other. You also cannot reliably perform state clean-up in the same set of action handlers that use the state you want to clean up (e.g., given a text input value where one store is responsible for managing its lifetime and another needs to read from it, you cannot reliably have the store reading from it run before the other store cleans it up). I don’t know if it’s practical to enforce this sort of control flow in user interface code with state which is inherently mutable; at least, not without having stores swell in size to accommodate everything that needs to read that state and perform the clean-up.
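For contrast, Flux’s dispatcher addresses the ordering problem with waitFor, which lets one store’s callback insist that other stores handle the current action first. The toy dispatcher below is my own self-contained sketch of that idea, not Flux’s actual implementation; `MiniDispatcher` and the token scheme are made up here:

```javascript
// Toy dispatcher illustrating the waitFor idea from Flux.
class MiniDispatcher {
  constructor() {
    this.callbacks = new Map(); // token -> store callback
    this.handled = new Set();   // tokens already run for the current action
    this.pending = new Set();   // tokens currently running (cycle detection)
    this.nextToken = 0;
    this.currentAction = null;
  }

  register(callback) {
    const token = 'ID_' + this.nextToken++;
    this.callbacks.set(token, callback);
    return token;
  }

  dispatch(action) {
    this.handled.clear();
    this.pending.clear();
    this.currentAction = action;
    for (const token of this.callbacks.keys()) this.invoke(token);
  }

  // Called from inside a callback to run other stores' callbacks first.
  waitFor(tokens) {
    tokens.forEach((token) => this.invoke(token));
  }

  invoke(token) {
    if (this.handled.has(token)) return;
    if (this.pending.has(token)) {
      throw new Error('circular dependency involving ' + token);
    }
    this.pending.add(token);
    this.callbacks.get(token)(this.currentAction);
    this.handled.add(token);
  }
}

// Usage, mirroring the text-input example above: the cleaning store registers
// first but defers to the reading store, so the reader always sees the value
// before it is cleaned up.
const dispatcher = new MiniDispatcher();
const order = [];
let readerToken;
const cleanerToken = dispatcher.register(() => {
  dispatcher.waitFor([readerToken]); // reader runs before clean-up
  order.push('cleaner');
});
readerToken = dispatcher.register(() => order.push('reader'));
dispatcher.dispatch({ type: 'submit' });
```

Note that waitFor still cannot express truly circular store dependencies; the pending set above throws in that case, just as Flux’s dispatcher errors on cycles.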

I think Flux-like architectures work well. It would be nice to have one that eliminates boilerplate code and can support circularly dependent stores. This is an area with a lot of library experimentation, so something may materialize soon.