- To work around the fact that a developer may not always have internet access, and hence access to the CDN, some web pages include a second script that checks for the absence of the library's global object and loads it from the local filesystem instead (see the first snippet after this list). In my humble opinion, this method is a little ugly and adds unnecessary extra logic.
- Another common alternative is to substitute the final script paths when the pages are built with Ant or Make. But web development is usually about making a change and hitting F5; having to rebuild every time to see a change is painful.
- The most common approach I have seen is to alter the /etc/hosts file (or put a dedicated proxy server in the middle) so that requests to the CDN hostname are intercepted and redirected to a locally served copy (a sample hosts entry follows the list).
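To illustrate the first alternative, here is the classic fallback pattern; the jQuery version and local path are placeholders:

```html
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js"></script>
<script>
  // If the CDN was unreachable, window.jQuery is undefined; fall back to a local copy.
  window.jQuery || document.write('<script src="/js/jquery-1.7.2.min.js"><\/script>');
</script>
```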
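And for the third alternative, a typical /etc/hosts entry that points the CDN hostname at a web server running on the developer's own machine (the hostname here is just an example):

```
127.0.0.1    ajax.googleapis.com
```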
I noticed that I usually have Tamper Data or Fiddler open anyway when I do web development, to inspect traffic to my server. Instead of modifying the hosts file or spinning up a special proxy server, it is easier to add rules to Fiddler that redirect CDN requests to a local file.
All we need to do is start Fiddler, filter the captured traffic down to the requests for internet resources, and save the session so it can be imported into the AutoResponder.
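The same mapping can also be typed into the AutoResponder tab by hand; a rule is just a match condition plus an action. The URL, version, and local path below are made up for illustration:

```
EXACT:http://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js
C:\dev\lib\jquery-1.7.2.min.js
```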
An even easier approach, one that would work for all applications, would be to save the entire CDN as a .saz archive. Unfortunately, I could not find a complete list of the files hosted on the Google or Microsoft CDNs.
I tried setting up a crawler to pick up every version and file/sub-file of every library Google hosts, but in practice I only worry about two or three libraries. Hence, it is easier to set up mapping rules for just those (one regex rule per library, as sketched below).
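For instance, a single regex rule can map every version of jQuery on the Google CDN to one local copy; the pattern and path are again illustrative:

```
regex:.*ajax\.googleapis\.com/ajax/libs/jquery/.*/jquery(\.min)?\.js
C:\dev\lib\jquery.min.js
```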
I am planning to write a Fiddler or Firefox plugin that automatically watches CDN requests, caches the responses, and serves them from the cache when there is no connectivity.
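Until then, here is a minimal sketch of that idea in FiddlerScript (Rules > Customize Rules..., inside the Handlers class of CustomRules.js). The hostname, cache directory, and the use of the suggested filename as the cache key are all assumptions; a real plugin would key on the full URL to avoid collisions between versions:

```javascript
// Hypothetical cache directory for downloaded CDN files.
static var cdnCacheDir: String = "C:\\dev\\cdn-cache\\";

static function OnBeforeRequest(oSession: Session) {
    // Assumed: we only care about files from the Google CDN.
    if (oSession.HostnameIs("ajax.googleapis.com")) {
        var cachePath = cdnCacheDir + oSession.SuggestedFilename;
        if (System.IO.File.Exists(cachePath)) {
            // Serve the cached copy instead of going to the network.
            oSession["x-replywithfile"] = cachePath;
        }
    }
}

static function OnBeforeResponse(oSession: Session) {
    // Cache successful CDN responses for later offline use.
    if (oSession.HostnameIs("ajax.googleapis.com") && oSession.responseCode == 200) {
        oSession.utilDecodeResponse();
        oSession.SaveResponseBody(cdnCacheDir + oSession.SuggestedFilename);
    }
}
```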