Will JavaScript Eat the Monolithic CMS?

Software is eating the world, the web is eating software, and JavaScript rules the web. What does that mean for the future of the monolithic CMS?

We’ve all been hearing a lot about JavaScript eating the web, but what does that mean for traditional content management systems like Drupal and WordPress? In many ways, it’s a fatuous claim to say that any particular language is “winning” or “eating” anything, but if a different approach to building websites becomes popular, it could affect the market share of traditional CMS platforms. This is an important topic for my company, Lullabot, as we do enterprise software projects using both Drupal, a popular CMS, and React, a popular JavaScript library.

At our team retreat, one of Lullabot’s front-end devs, John Hannah, who publishes the JavaScript Report, referred to Drupal as a “legacy CMS.” I was struck by that. He said that, for many in the JavaScript community, PHP CMSes like Drupal, WordPress, and Joomla are seen this way. What did he mean? If these “monolithic” CMS platforms are legacy, what’s going to replace them? And yet PHP, the language, powers roughly 83% of websites whose server-side language is known and plays a large part in platforms like Facebook. That’s not going to change quickly.

Still, JavaScript’s surging popularity is undeniable. In part, this is due to its ubiquity. It’s a law-of-increasing-returns, chicken-and-egg kind of thing, but it’s real. JavaScript is the only programming language that runs natively in every web browser; every browser on every device offers broadly similar support for it. Most cloud providers support JavaScript’s server-side incarnation, Node.js, and its reach extends far beyond the browser into the internet of things, drones, and robots. Node.js means JavaScript can be used for jobs that used to be the sole province of server-side languages.

And isomorphism means it can beat the server-side languages at their own game. Unlike server-side applications written in PHP (or Ruby or Java), an isomorphic application is one whose code (in this case, JavaScript) can run on both the server and the client. By taking advantage of the computing power available in the user’s browser, an isomorphic application can have the Node.js server render the HTML for the initial request, then asynchronously load resources for the rest of the site. On any subsequent request, the browser can respond to the user without the server rendering any HTML, which makes for a more responsive user experience. JavaScript also evolved in tandem with the DOM, so while any scripting language can be used to access the nodes and objects that comprise the structure of a web page, JavaScript is the DOM’s native tongue.
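To make that concrete, here’s a minimal sketch of the isomorphic idea, assuming a bundler such as webpack ships the same file to both environments; the route and markup are purely illustrative:

```javascript
// render.js: one render function shared by server and browser.
function renderGreeting(name) {
  return `<h1>Hello, ${name}</h1>`;
}

if (typeof window === 'undefined') {
  // On the server (Node.js), the first request gets fully rendered HTML.
  const http = require('http');
  http
    .createServer((req, res) => {
      res.setHeader('Content-Type', 'text/html');
      res.end(`<body id="app">${renderGreeting('first visit')}</body>`);
    })
    .listen(3000);
} else {
  // In the browser, later updates reuse the same function, with no
  // server round-trip for HTML.
  document.getElementById('app').innerHTML = renderGreeting('returning user');
}
```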

Furthermore, the allure of one-stack-to-rule-them-all may attract enterprises interested in consolidating their IT infrastructure. As Hannah told me, “An organization can concentrate on just supporting, training and hiring JavaScript developers. Then, that team can do a lot of different projects for you. The same people who build the web applications can turn around and build your mobile apps, using a tool like React Native. For a large organization, that’s a huge advantage.”

Node.js Takes a Bite Out of the Back-end

Tempted to consolidate to a single stack, one of Lullabot’s largest digital publishing clients, a TV network, has begun to phase out Drupal in favor of microservices written in Node.js. They still need Drupal to maintain metadata and collections, but they’re talking about moving all of their DRM (digital rights management) content to a new microservice. This piqued my curiosity. Is the rapidly changing Node.js ecosystem ready for the enterprise? Who guarantees that your stack stays secure? The Node.js Foundation? Each package maintainer? Drupal has a seasoned and dedicated security team that keeps core and contrib safe. Node.js is changing fast, and the small applications written in Node.js are changing faster than that, a rate of change typically anathema to enterprise software. And it’s a significant shift for the enterprise in other ways as well. While Drupal sites are built with many small modules that together create a unique application, they all live within one set of code. The Node.js community prefers to approach the same problem by building small applications that communicate with each other over a network, an approach known as a microservices architecture. Does that work in the context of a major enterprise publisher? Is it maintainable? Airbnb, PayPal, and Netflix have proven that it can be, but for many of our clients in the digital publishing industry, I wonder how well the lessons of those technology companies apply. Arguably, Amazon pioneered the modern decouple-all-the-things, service-oriented architecture, and, well, you, dear client, are not Amazon.
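For readers unfamiliar with the pattern, here’s a minimal sketch of what one such Node.js microservice might look like, using only Node’s built-in http module. The /rights route, the in-memory data, and the response shape are hypothetical, not our client’s actual service:

```javascript
const http = require('http');

// A toy in-memory "database" standing in for the service's real store.
const rights = { 42: { drm: 'fairplay', expires: '2025-01-01' } };

http
  .createServer((req, res) => {
    const match = req.url.match(/^\/rights\/(\w+)$/);
    const record = match && rights[match[1]];
    if (record) {
      res.setHeader('Content-Type', 'application/json');
      res.end(JSON.stringify(record));
    } else {
      res.statusCode = 404;
      res.end();
    }
  })
  .listen(8080); // One small service, one responsibility, JSON over HTTP.
```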

In this article, I’ll explore this question through examples and examine how JS technologies are changing the traditional CMS stack architecture within some of the client organizations we work with. I should disclose my own limitations as I tackle an ambitious topic. I’ve approached this exploration as a journalist and a business person, not as an engineer. I’ve tried to interview many sources and reflect the nuances of the technical distinctions they’ve made, but I do so as a technical layperson, so any inaccuracies are my own.

The Cathedral and the Bazaar

How is the JavaScript approach different from that of a CMS like Drupal? In the blogosphere, this emerging competition for the heart of the web is sometimes referred to as “monolith vs. microservice.” In many ways, it’s less a competition between languages, such as JavaScript and PHP (or Ruby or Python or Java), than the latest chapter in the dialectic between small, encapsulated programs with abstracted APIs and comprehensive, monolithic systems that try to be all things to all people.

The JavaScript microservices approach—composing a project out of npm packages—and the comparatively staid monolithic approach taken by Drupal are both descendants of open source collaboration, aka the “bazaar,” but the freewheeling way things are being done in Node.js right now makes Node.js the apparent successor of the bazaar. Drupal, by contrast—with 411,473 lines of code in the current version of core—has cleaned up its tents and begun to look more like a cathedral, with core commits restricted to an elite priesthood.

Drupal still benefits from a thriving open-source community. According to Drupal.org, Drupal has more than 111,000 users actively contributing. Meaning, in perhaps the most essential way, it still benefits from Linus’s Law, named for Linux founder Linus Torvalds: “given enough eyeballs, all bugs are shallow.” Moreover, the Drupal community remains the envy of the free software movement, according to Google’s Steve Francia, who helped guide the Docker and MongoDB communities, in part by following Drupal’s lead.

Nevertheless, JavaScript and its server-side incarnation are gaining market share for a reason. JavaScript is both accessible and approachable. Lullabot senior developer Mateu Aguiló Bosch, one of the authors of Contenta CMS, a decoupled Drupal distribution, describes the JavaScript ecosystem as follows: “There’s an open-script vibe in the community, with snippets of code available everywhere. Anyone, anywhere, can experiment with these snippets using the built-in console in their browser.” Also, Bosch continues, “Node.js brings you closer to the HTTP layer. Many languages like PHP and Ruby abstract a lot of what’s going on at that layer, so it wasn’t until I worked with Node.js that I fully understood all of the intricacies of the HTTP protocol.”

Furthermore, thanks to the transpiler Babel, JavaScript allows developers to program according to their own style. For instance, a developer familiar with a more traditional nomenclature for object-oriented classes can use TypeScript and then transpile their way to working JavaScript that’s compatible with current browsers. “Proposals for PHP have to go through a rigorous process, then go to binary, then the server has to upgrade, and then apps for Drupal have to support this new version,” says Sally Young, a Lullabot who heads the Drupal JavaScript Modernization Initiative. “With transpiling in JS, we can try out experimental language features right away, making it more exciting to work in.” (There is precedent for this in the PHP community, but it’s never become a standard workflow.)
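Here’s a minimal sketch of the workflow Young describes, assuming a recent Babel that handles class fields. The package name and preset are real; the browser targets and the Article class are illustrative:

```javascript
// babel.config.js: tell Babel to compile modern syntax down to
// whatever the targeted browsers understand.
module.exports = {
  presets: [['@babel/preset-env', { targets: '> 0.5%, not dead' }]],
};

// source.js: newer class syntax a developer can write today and
// transpile for current browsers.
class Article {
  #title; // a private class field, a relatively recent language feature

  constructor(title) {
    this.#title = title;
  }

  get title() {
    return this.#title;
  }
}

console.log(new Article('Will JavaScript Eat the Monolithic CMS?').title);
```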

However, JavaScript’s popularity is attributable to more than just the delight it inspires among developers.

Concurrency and Non-Blocking IO

I spoke with some of our clients who are gradually increasing the amount of JS in their stack and reducing the role of Drupal, and asked why they’ve chosen to do so. (I found only one who is seeking to eliminate it altogether, and they haven’t been able to do so yet.) Surprisingly, it had nothing to do with Hannah’s observation that hiring developers for and maintaining a single stack would pay dividends in a large organization. That was a secondary benefit, not a primary motivation. The short answer in each case was speed: in one case, speed in the request-response sense, and in the other, speed in the go-to-market sense.

Let’s look at the first example. We work with a large entertainment media company that provides digital services for a major sports league. Their primary site and responsive mobile experience are driven by Drupal 8’s presentation layer (though there’s also the usual caching and CDN magic to make it fast). Beyond the website, this publisher needs to feed data to 17 different app experiences, including iOS, tvOS, various Android devices, Chromecast, Roku, and Samsung TV. Using a homegrown system (the JSON API module wasn’t finished when this site was migrated to D8), this client pushes all of their content into an Elasticsearch datastore, where it is indexed and available to the downstream app consumers. They built a Node.js-based API to provide the middleware between these consumers and Elasticsearch. According to a stakeholder, the group achieved “single-digit millisecond responses to any API call, making it the easiest thing in the whole stack to scale.”
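I can’t show the client’s code, but a thin middleware layer of this sort might look something like the following sketch. It assumes Express, Node 18+ for the global fetch, and Elasticsearch’s standard REST search API; the articles index and the query shape are hypothetical:

```javascript
const express = require('express');
const app = express();

// Proxy a search to Elasticsearch and return only what the downstream
// app consumers need.
app.get('/api/articles', async (req, res) => {
  const response = await fetch('http://localhost:9200/articles/_search', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      query: { match: { title: req.query.q || '' } },
      size: 10,
    }),
  });
  const result = await response.json();
  res.json(result.hits.hits.map((hit) => hit._source));
});

app.listen(3000);
```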

This is likely in part due to one of the chief virtues of Node.js. According to Node’s about page:

As an asynchronous event-driven JavaScript runtime, Node is designed to build scalable network applications…many connections can be handled concurrently. Upon each connection, a callback is fired, but if there is no work to be done, Node will sleep. This is in contrast to today's more common concurrency model where OS threads are employed. Thread-based networking is relatively inefficient and very difficult to use.

A multi-core CPU can run threads in parallel, in direct proportion to the number of cores, and the OS manages these threads and can switch between them as it sees fit. Whereas a PHP program moves through its instructions from top to bottom, with a single pointer marking where it is, Node.js’s event loop lets a single thread keep track of many tasks at once, returning to each when its awaited work completes. To roughly characterize this difference between the Node.js asynchronous event loop and the approach taken by other languages, let me offer a metaphor. Pretend for a moment that threads are cooks in a kitchen awaiting instructions. Our PHP chef proceeds through the recipe a step at a time: chopping vegetables, then boiling water, then putting on a frying pan to sauté those veggies. While the water is boiling, our PHP chef waits, or the OS makes a decision and moves the chef on to something else. Our Node.js chef, on the other hand, can multi-task to a degree, starting the water to boil, leaving a pointer there, and then moving on to the next thing.
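To make the metaphor concrete, here’s a minimal sketch using Node’s built-in fs module (the file path is arbitrary):

```javascript
// The "chef" starts a slow task (reading a file), leaves a callback as a
// pointer, and moves on without waiting for the IO to finish.
const fs = require('fs');

console.log('Put the water on to boil (start reading the file)...');

fs.readFile('/etc/hosts', 'utf8', (err, contents) => {
  if (err) throw err;
  // This callback fires later, once the read completes.
  console.log(`Water is boiling (read ${contents.length} characters).`);
});

// Execution continues immediately; nothing blocks while the read is pending.
console.log('Meanwhile, chop the vegetables (keep doing other work).');
```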

However, Node.js can only do this for input and output, like reading a database or fetching data over HTTP. This is what is referred to as “non-blocking IO,” and it’s why the Node.js community can say things like, “projects that need big concurrency will choose Node (and put up with its warts) because it’s the best way to get their project done.” Still, asynchronous, event-driven programs are tricky: concurrency is a hard problem in computer science, and it can have unexpected results. Imagine our cook putting the onion on the stove to fry before the pan is there or the burner is lit. This is akin to trying to read the results from a database before those results are available. Other languages can do this too, but Node’s real innovation is in taking one of the easier-to-solve problems of concurrent programming (IO), designing it directly into the system so it’s easier to use by default, and marketing those benefits to developers who may not have been familiar with similar solutions in more heavyweight languages like Java.
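Here’s what that “onion before the pan” mistake looks like in code, along with the usual fix, in a minimal sketch (again using an arbitrary file in place of a database):

```javascript
const fs = require('fs');

// The mistake: using a result before the asynchronous work that
// produces it has finished.
let contents;
fs.readFile('/etc/hosts', 'utf8', (err, data) => {
  contents = data;
});
console.log(contents); // undefined: the read hasn't completed yet

// The fix: only touch the result once the IO has resolved, for example
// with promises and await.
const { readFile } = fs.promises;

async function main() {
  const resolved = await readFile('/etc/hosts', 'utf8');
  console.log(resolved.length); // safe: the data is available here
}

main();
```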

Even though Node.js does well as a listener for web requests, its non-blocking IO bogs down when you’re performing CPU-intensive computation. You wouldn’t, for instance, want “to build a Fibonacci computation server in Node.js. In general, any CPU intensive operation annuls all the throughput benefits Node offers with its event-driven, non-blocking I/O model because any incoming requests will be blocked while the thread is occupied with your number-crunching,” writes Tomislav Capan in “Why the Hell Would You Use Node.js.” And Node.js is inherently single-threaded: run a Node.js application on a CPU with 8 cores, and Node.js will use just one, whereas other server-side languages could make use of all of them. So Node.js is designed for lots of little concurrent tasks, like real-time updates or user interactions, but is bad at computationally intensive ones. As one might expect, it’s not great for image processing, for instance. But it’s great for making seemingly real-time, responsive user interfaces.
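Capan’s example is easy to reproduce. In this minimal sketch, the naive recursion hogs the single thread, so every other pending connection stalls until it finishes:

```javascript
const http = require('http');

// Deliberately slow, synchronous number-crunching.
function fib(n) {
  return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

http
  .createServer((req, res) => {
    const n = Number(new URL(req.url, 'http://localhost').searchParams.get('n')) || 10;
    // While fib(n) runs, the event loop is blocked: Node.js cannot fire
    // callbacks for any other connection, so all requests queue up.
    res.end(`fib(${n}) = ${fib(n)}\n`);
  })
  .listen(3000);

// Try: curl 'http://localhost:3000/?n=45' in one terminal, then watch a
// second request hang in another.
```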

JavaScript Eats the Presentation Layer

There’s an allure to building a front-end with just HTML, CSS, and JavaScript since those three elements are required anyway. In fact, one of our largest clients started moving to a microservices architecture somewhat by accident. They had a very tight front-end deadline for a new experience for one of their key shows. Given the timeline, the team decided to build a front-end experience with HTML, CSS, and JavaScript and then feed the data in via API from Drupal. They saw advantages in breaking away from the tricky business of Drupal releases, which can include downtime as various update hooks run. Creating an early decoupled (sometimes referred to as “headless”) Drupal site led them to appreciate Node.js and the power of isomorphism. As the Node.js documentation explains, “after over 20 years of stateless-web based on the stateless request-response paradigm, we finally have web applications with real-time, two-way connections, where both the client and server can initiate communication, allowing them to exchange data freely. This is in stark contrast to the typical web response paradigm, where the client always initiates communication.”
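As for the data feed itself, here’s a hedged sketch of how such a decoupled front-end might pull content from Drupal, assuming the JSON API module’s default routes; the hostname and the article content type are hypothetical:

```javascript
// Fetch the ten most recent articles from a Drupal back-end over JSON API.
async function fetchArticles() {
  const response = await fetch(
    'https://cms.example.com/jsonapi/node/article?page[limit]=10',
    { headers: { Accept: 'application/vnd.api+json' } }
  );
  if (!response.ok) throw new Error(`Drupal responded ${response.status}`);
  const { data } = await response.json();
  // Each JSON API resource object carries its fields under `attributes`.
  return data.map((node) => node.attributes.title);
}

fetchArticles().then(console.log).catch(console.error);
```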

After hacking together that first site, they found React and began to take full advantage of stateful components that provide real-time, interactive UX, like AJAX but better. Data refreshes instantly, and that refresh can be triggered by a user’s actions on the front-end or initiated by the server. To hear this client tell it, discovering these technologies and the virtues of Node.js led them to change the role of Drupal. “With Drupal, we were fighting scale in terms of usage, and that led us to commit to a new, microservices-oriented stack with Drupal playing a more limited role in a much larger data pipeline that utilizes a number of smaller networked programs.” These include MarkLogic, a NoSQL data-as-a-service provider, and Algolia, a hosted search service, among others.
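A minimal sketch of such a stateful component might look like the following, assuming the usual JSX build step; the /api/scores endpoint and field names are hypothetical, and a server push over a WebSocket could replace the polling shown here:

```javascript
import React, { useEffect, useState } from 'react';

// A stateful component: `scores` lives in component state, and the UI
// re-renders whenever that state changes.
function LiveScores() {
  const [scores, setScores] = useState([]);

  useEffect(() => {
    const load = () =>
      fetch('/api/scores')
        .then((res) => res.json())
        .then(setScores)
        .catch(console.error);

    load();
    const timer = setInterval(load, 5000); // poll; a server push would also work
    return () => clearInterval(timer);     // clean up on unmount
  }, []);

  return (
    <ul>
      {scores.map((score) => (
        <li key={score.id}>
          {score.team}: {score.points}
        </li>
      ))}
    </ul>
  );
}

export default LiveScores;
```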

So what started as a need for speed-to-market led this particular client to discover the virtues of Node.js, and then React. Not only can libraries like React provide more app-like experiences in a browser, but tools like React Native can be used to make native apps for iOS and Android. PWAs (progressive web apps) use JavaScript to make web applications behave, in certain respects, like native applications when they’re opened on a mobile device or in a standard web browser. If there was ever a battle to be won in making the web more app-like, JavaScript won that contest a long time ago, in the days of jQuery and Ajax. Heck, it won that battle when we all needed to learn JavaScript in the late 90s to swap navigation images using onMouseOver.
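The PWA piece, for what it’s worth, is plain JavaScript too. Here’s a minimal sketch of the step that makes a site a PWA candidate: registering a service worker so the browser can cache assets and keep the app usable offline (the /sw.js path is whatever your build produces):

```javascript
// Register a service worker, the script that lets a PWA intercept
// requests, cache assets, and work offline.
if ('serviceWorker' in navigator) {
  window.addEventListener('load', () => {
    navigator.serviceWorker
      .register('/sw.js')
      .then((registration) => {
        console.log('Service worker registered with scope:', registration.scope);
      })
      .catch((err) => console.error('Service worker registration failed:', err));
  });
}
```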

Is JavaScript taking over the presentation layer? Only for some. If you’re a small to medium-sized business, you should probably use the presentation layer provided by your CMS. If you’re a large enterprise, it depends on your use case. A good reason to decouple might be to make your APIs first-class citizens because you have so many downstream consumers that you need to force yourself to think about your CMS as part of a data pipeline and not as a website. Moreover, if shaving milliseconds off “time to first interactive” means earning millions of dollars in mobile conversions that might otherwise have been lost, you may want to consider a JS framework or something like AMP to maximize control of your markup. Google has even built a calculator to estimate the financial impact of better load times based on the value of a conversion.

That said, there are some real disadvantages to moving away from Drupal’s presentation layer. (Not to mention that it’s possible to make extensive use of JavaScript-driven interactivity within a Drupal theme.) By decoupling Drupal, you lose many of Drupal’s out-of-the-box solutions to hard problems such as request-response handling (an essential component of performance), routing, administrative layout tools, authentication, image styles, and a preview system. Furthermore, you now have to write schemas for all of your APIs to document them for consumers, and you need to grapple with how to capture presentational semantics, like visual hierarchy, in textual data.

As Acquia CTO Dries Buytaert wrote in The Future of Decoupled Drupal,

Before decoupling, you need to ask yourself if you're ready to do without functionality usually provided for free by the CMS, such as layout and display management, content previews, user interface (UI) localization, form display, accessibility, authentication, crucial security features such as XSS (cross-site scripting) and CSRF (cross-site request forgery) protection, and last but not least, performance. Many of these have to be rewritten from scratch, or can't be implemented at all, on the client-side. For many projects, building a decoupled application or site on top of a CMS will result in a crippling loss of critical functionality or skyrocketing costs to rebuild missing features.

These things all have to be reinvented in a decoupled site. That is prohibitively expensive for small to medium-sized businesses, but for large enterprises with the resources and a predilection for lean, purpose-built architectures, it’s a reasonable trade-off to fully harness the power of something like the React library.

JavaScript frameworks will continue to grow as consumers demand more app-like experiences on the web, and that probably means the percentage of websites directly using a CMS’s presentation layer will shrink over time. But this is going to be a long-tail journey.

JavaScript Eats the Admin Interface

PHP CMSes have a decade’s lead in producing robust editorial experiences for managing content with a refined GUI. Both WordPress and Drupal have invested thousands of hours in user testing and refinement of their respective user interfaces. But wait, you say, aren’t both Drupal and WordPress trying to replace their editorial interfaces with decoupled JavaScript applications to achieve a more app-like experience? Well, yes. Moreover, Gutenberg, the new admin interface in WordPress built on React, is an astonishing evolution of the content authorship experience, a consummation devoutly to be wished. Typically, editors generate content in a third-party application before moving it over to, and managing it in, a CMS. Gutenberg attempts to create an authorship experience to rival that of the desktop applications typically used for this purpose. At WordCamp US 2015, Matt Mullenweg issued a koan-like edict to the WordPress community: “learn JavaScript, deeply.” He was preparing the way for Gutenberg.

Meanwhile, on Drupal island, the admin-ui-js team is at work building a decoupled admin interface for Drupal with React, code-named the JavaScript Modernization Initiative. In that sense, JavaScript is influencing the CMS world by beginning to eat the admin interface of two major PHP CMSes. As of this writing, neither interface is part of the current core release.

JavaScript Replaces the Monolithic CMS?

Okay, great: developers love JavaScript, JavaScript devs are (perhaps) easier to hire, Node.js can handle concurrency in a more straightforward fashion than some other languages, isomorphic decoupled front-ends can be fast and provide interactivity, and the Drupal and WordPress admin UIs are being rewritten in React, a JavaScript library, in order to make them more app-like. But our original question was whether JavaScript might eventually eat the monolithic CMS. Looking at the evidence I’ve produced so far, I think you’d have to argue that this process has begun. Perhaps a better question is: what do we, the Drupal community, do about it?

A sophisticated front-end, such as a single-page application, a PWA, or a React application, still needs a data source to feed it content. And while it’s possible to use different services to furnish this data pipeline, editors still need a place to create content, govern content, and manage metadata and the relationships between different pieces of content; it’s a task to which the monolithic PHP CMS platforms are uniquely suited.

While some JavaScript CMSes have cropped up—Contentful (an API-first CMS-as-a-service platform), CosmicJS, Prismic, and ApostropheCMS—they don’t come anywhere near the feature set or flexibility of Drupal when it comes to managing content. As Sally Young, head of Drupal’s JavaScript initiative, says, “the new JS CMSes do less than Drupal tries to do, which isn’t necessarily a bad thing, but I still think Drupal’s Field API is the best content modeling tool of any CMS by far.” And it’s more than the fact that these CMSes try to do less; it’s also an issue of maturity.

“I’m not convinced from my explorations of the JS ecosystem that NPM packages and Node.js are mature enough to build something to compete with Drupal,” says senior architect Andrew Berry. “Drupal is still relevant because it’s predicated on libraries that have 5-10 years of development, whereas in the Node world everything is thrown out every 6 months. In Drupal, we can’t always get clients to do major releases, can you imagine if we had to throw it out or change it every 6 months?” This was echoed by other experts that I spoke with.

Conclusion

The monolithic web platforms can’t rest on their laurels and continue to try to be everything to everyone. To adapt, traditional PHP CMSes are going to require strong and sensible leadership that finds the best place for each of these tools to shine within a stack that includes ever-increasing roles for JavaScript. Drupal, in particular, given its enterprise bent, should embrace its strengths as a content modeling tool in an API-first world—a world where the presentation layer is a separate concern. As Bosch, Drupal’s API-first initiative lead, said at DrupalCon Nashville, “we must get into the mindset that we are truly API-first and not just API compatible.” Directing the formidable energy of the community toward this end will help us remain relevant as these changes transpire.

To get involved with the admin-ui-js initiative, start here.

To get involved with the API-first initiative, start here.

Special thanks to Lullabots Andrew Berry, Ben Chavet, John Hannah, Mateu Aguiló Bosch, Mike Herchel, and Sally Young for helping me take on a topic that was beyond my technical comfort zone.
