jsDelivr API - From a Casual Experiment to Success

By Juho Vepsäläinen

The following is a guest post by Juho Vepsäläinen, the guy behind JSwiki and co-founder of jster.net. He loves art, startups, cycling and of course programming!

I've been working with Dmitriy of jsDelivr for roughly a year now. jsDelivr is one of the leading JavaScript CDNs out there. It utilizes multiple CDN providers and VPSes and load balances between them so that assets get served swiftly. A recent post at Mozilla Hacks covers the technology in good detail.

Early on, we integrated jsDelivr into JSter, a library catalog I help develop and maintain. CDNperf was our first bigger project together. The idea was to provide visibility into JavaScript CDN performance. That's it.

Our other projects include osscdn, an alternative frontend for jsDelivr, and of course the subject of this post, jsDelivr API.

Why was the API developed?

Originally, the jsDelivr API was just a JSON file Dmitriy had put together. Let's just say it wasn't very easy to use, and you had to do a lot of work yourself to make sense of the data.

Given I had already done some work on the domain, I thought I could write something much better in an hour or two. As a result, a new API was born. That was three months ago.

How does it work?

We host our service at AppFog. The API runs on multiple instances load balanced by their infrastructure. In front we use MaxCDN for caching.

Currently our API supports several popular providers. Besides jsDelivr, we provide uniform access to cdnjs, Google CDN, jQuery CDN and BootstrapCDN. The API relies on scraping, and the scraped data is stored in an in-memory database.

The scrapers are fairly simple. In the case of jsDelivr and cdnjs, we simply fetch JSON and reformat it a little. Google took more effort: there we rely on their HTML index and extract the data from it.
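The JSON-reformatting step can be sketched roughly like this. Note that the field names (`results`, `filename`, `assets`) are illustrative assumptions here, not the actual provider schema:

```javascript
// A rough sketch of the normalization a JSON-based scraper performs,
// assuming a cdnjs-style payload. Field names are hypothetical.
function normalizeLibraries(payload) {
  // Keep only the fields the API serves, under uniform names
  return payload.results.map(function (lib) {
    return {
      name: lib.name,
      mainfile: lib.filename,
      versions: lib.assets.map(function (asset) {
        return asset.version;
      })
    };
  });
}

// Hand-written sample payload standing in for a scraped response
var sample = {
  results: [
    {
      name: 'jquery',
      filename: 'jquery.min.js',
      assets: [{ version: '2.1.0' }, { version: '1.11.0' }]
    }
  ]
};

console.log(normalizeLibraries(sample));
```

Each instance would run a transform like this over the fetched data and keep the result in memory.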

Each of our instances maintains its own state. This makes the API very scalable: we can adjust the number of API instances on a whim. Maybe this isn't an issue yet, but it doesn't hurt.

There is some overhead, as each instance scrapes on its own. It would be possible to eliminate this by implementing a separate scraping service that the instances query. Given that we have only a few instances running, I haven't bothered to do this yet.

What does the API look like?

The API relies on technology I have designed and implemented to make it easy to write light, RESTful APIs; rest-sugar is the most important part. To give you some idea of the API, consider the examples below:

GET http://api.jsdelivr.com/v1/jsdelivr/libraries - All jsdelivr libraries
GET http://api.jsdelivr.com/v1/jsdelivr/libraries?name=AngularJS - AngularJS
GET http://api.jsdelivr.com/v1/google/libraries/AngularJS - Alias, getting from Google this time
GET http://api.jsdelivr.com/v1/jsdelivr/libraries?name=jquery* - All libraries starting with jquery (based on [minimatch](https://www.npmjs.org/package/minimatch))
GET http://api.jsdelivr.com/v1/jsdelivr/libraries?author=angularui - All libraries of angularui
GET http://api.jsdelivr.com/v1/jsdelivr/libraries?name=jquery&fields=mainfile,name - Only mainfile and name fields of jquery

In addition, the API provides pagination and a couple of extra custom features, which are covered in the project README.

How is the API used currently?

Even though the API started out of personal curiosity, people have started finding it useful, and a few projects already build on it.

The API is still quite young, and perhaps not that many tool authors are aware of it yet. At the moment I'm planning its next revision, which will unify the API further and simplify certain aspects. Your feedback is welcome, as it will affect the design.


The jsDelivr API is more than its name might suggest: it provides uniform access to the data of multiple CDNs. This is useful for tool authors and service providers alike. It's also an example of an API built on scraping.
