Speed up page loads by over 10x, reduce JavaScript bundle transfers by over 18x #7889
Conversation
```js
  }),
  new CompressionPlugin({
    algorithm: 'brotliCompress',
    test: /\.(js|css|html|svg|png)$/,
```
We added gzip compression on the NGINX side here:
When this is merged, which content type(s) should we remove?
I think we can replace it with this:

```nginx
gzip_types text/plain text/css text/html text/xml application/json application/xml application/xml+rss image/vnd.microsoft.icon image/x-icon;
```

And add `gzip_static on;`.
This means we gzip anything that webpack doesn't already. But we only serve things webpack builds, so we probably won't need on-demand gzipping (oops!). Still, compressing ahead of time is probably better for performance.
Honestly, I didn't confirm whether the files listed in `test` are actually compressed in the final build 😂 Just because `test` says so doesn't mean they are compressed, since webpack only deals with files it can pick up, i.e., imported ones. That's why I left some types in `gzip_types`, just in case webpack didn't compress them. I only removed types that I know webpack will compress, i.e., JavaScript files and SVGs.
Either you can help check (sorry for troubling you!) or I can do it once I return from leave next week.
Alternatively, we could update the NGINX config to what I shared above. When I return, I'll check, then update either the NGINX or webpack config as needed in separate PRs. Then we can test & enjoy this 10x improvement ASAP.
I suggest we remove on-demand gzipping from our NGINX config. All files that matter are already compressed by webpack. SVGs and PNGs aren't compressed, probably because they were picked up at a different build phase. Some files weren't compressed because they are already small enough that compressing them would produce a bigger file.
So, just have this:

```nginx
gzip_static on;
brotli_static on;
```

I don't think on-demand gzipping is worth it because it still puts load on NGINX. Ideally, we should be throwing these assets onto S3 and have NGINX only proxy requests to S3, so it doesn't incur any memory or disk bandwidth. The only concern is the uncompressed SVGs and PNGs, but those are already optimised by `ImageMinimizerPlugin`.
This also reduces our reliance on NGINX, making it easy for us to move to CloudFront (if we ever decide to) and letting NGINX only focus on balancing Rails server loads.
If we really need on-demand gzipping for some reason, I suggest we keep this `gzip_types`:

```nginx
gzip_types text/plain text/css text/xml application/json application/xml application/xml+rss image/svg+xml image/vnd.microsoft.icon image/x-icon;
```

This effectively only compresses SVGs; we don't serve the other file types.
```
@@ -1,6 +1,5 @@
import { DependencyList, useCallback, useEffect } from 'react';
import type { DebouncedFunc } from 'lodash';
```
Is it not worth it to require the individual functions (e.g. `lodash.debounce`), or is that taken care of by `lodash-es`?
I'm not aware of the `lodash.debounce` syntax, or the equivalent for other functions. Are you asking if we could skip replacing with lodash-es and instead just change the imports to absolute ones like `lodash.debounce`?
From what I've seen, lodash and lodash-es only differ in their imports. When I installed lodash after lodash-es, `yarn.lock` didn't change.
Now I remember. A differentiating reason why I ended up with `lodash-es` is that its `package.json` has `sideEffects: false`, which webpack can use for tree-shaking. `lodash` doesn't have this key.
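For illustration, this is why the flag matters for tree-shaking (the package behaviour is real; the surrounding code is just a sketch):

```ts
// lodash-es ships ES modules and declares "sideEffects": false in its
// package.json, so webpack keeps only what is imported here.
import { debounce } from 'lodash-es';

// With the CommonJS `lodash` package, webpack cannot prove the import is
// side-effect-free, so the whole library ends up in the bundle:
// import { debounce } from 'lodash';

const save = debounce((value: string) => console.log('saving', value), 300);
save('draft');
```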
```diff
@@ -82,7 +82,7 @@ class VideoPlayer extends Component {
   UNSAFE_componentWillMount() {
     if (VideoPlayer.ReactPlayer !== undefined) return; // Already loaded

-    import(/* webpackChunkName: "video" */ 'react-player').then(
+    import(/* webpackChunkName: "video" */ 'react-player/youtube').then(
```
What would happen to this component if the URL isn't a YouTube URL?
As of writing this comment, all of our URLs saved in production point to YouTube, but in theory we have a whitelist of supported video sites to link to:
https://github.com/Coursemology/coursemology2/blob/master/app/helpers/application_html_formatters_helper.rb#L86-L96
If the URL isn't a YouTube one, it will just render an empty frame. I decided to do this, despite knowing our back-end whitelists other video providers, because the New Video form's placeholder explicitly says "YouTube".
In fact, we should probably tighten the whitelist on our back-end: with that placeholder, our business logic really supports YouTube only.
I removed the whitelist for other video providers and skipped their tests in 7941198 to synchronise our client & server code with the business requirements. We can always add them back once we decide to make them part of the business requirements.
We have been, and currently are, supporting only YouTube videos.
We do this because, from a business-logic point of view, Coursemology only supports YouTube: the New Video form's placeholder specifies that, and the production database only has YouTube URLs.
Coursemology's JavaScript bundles have been very large, and they contribute to slow loading times, especially in high-latency environments, such as during exams with many concurrent requests to our NGINX server, where our assets are currently hosted. Coursemology renders via client-side rendering (CSR), so JavaScript has to be fetched and executed before the app can make any data requests to our Rails server. Slow JavaScript loads therefore exacerbate overall page loading times.
Note
TL;DR: See Benchmarks below for results.
Current bundle outlook
These are the chunks required on almost all page loads, sorted by size. There are other chunks too, but they aren't as important and are excluded here for brevity.
These are the results of the current bundle analysis.
Optimisation strategies
CKEditor
CKEditor contributes the largest chunk (1.26 MB), and it's bundled twice. It's probably bundled into `UnauthenticatedApp`, even though no unauthenticated pages render a rich text editor, because webpack doesn't know if its import has side effects and thus can't tree-shake it.

The custom build has been optimised upstream, and while I was at it, I upgraded it from v40 to v45. Essentially, I import the distributed bundles and CSS of the extensions directly so nothing unused is bundled. This reduced the CKEditor chunk size from 1.26 MB to 1.13 MB.
Most importantly, CKEditor is now lazy-loaded, so only pages that need it load it on demand. A skeleton appears while the bundle loads, but I think this is an acceptable trade-off.
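For illustration, the lazy-loaded editor looks roughly like the sketch below (the `CKEditorRichText` module name and skeleton class are hypothetical, not this PR's actual file names):

```tsx
import { lazy, Suspense } from 'react';

// The heavy CKEditor build lives in its own chunk and is fetched on demand.
const CKEditor = lazy(
  () => import(/* webpackChunkName: "ckeditor" */ './CKEditorRichText'),
);

const RichTextField = (): JSX.Element => (
  // The fallback skeleton renders while the CKEditor chunk downloads.
  <Suspense fallback={<div className="editor-skeleton" />}>
    <CKEditor />
  </Suspense>
);

export default RichTextField;
```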
Route-based code-splitting
This is the root cause of our enormous loading times. Our bundle grows proportionally to the features we develop; there's no way around this. However, what we should be optimising isn't just the total bundle size, but the bundle size on each user journey. There's no reason a user who visits the Announcements page needs to wait 5 seconds for the CKEditor or Ace chunks to also be fetched.
This PR lazily loads the components for all front-end routes we have in Coursemology. This is known as route-based code-splitting: our bundle is sliced into smaller chunks, and when a user visits a page, they fetch only what they need. This naturally increases our total built bundle size (in fact, this PR increased it from 12.68 MB to 13.79 MB), but that doesn't matter; what matters is the amount of JavaScript transferred on each route. See Benchmarks below.
Route-based code-splitting also ensures that if we do have pages or components that are unoptimised, poorly written, or just inevitably large, other pages aren't "penalised" for them.
This results in pages having entry-point chunks ranging from 3 kB to 145 kB, which is an acceptable size compared to having to load the entire 3.35 MB `AuthenticatedApp`.
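In React terms, route-based code-splitting boils down to `React.lazy` route elements; a minimal sketch, with illustrative paths rather than Coursemology's actual file layout:

```tsx
import { lazy, Suspense } from 'react';
import { Route, Routes } from 'react-router-dom';

// Each route component compiles into its own chunk; visiting /announcements
// fetches only the Announcements chunk, never CKEditor or Ace.
const AnnouncementsIndex = lazy(
  () => import('./announcements/AnnouncementsIndex'),
);
const AssessmentsIndex = lazy(() => import('./assessments/AssessmentsIndex'));

const AppRoutes = (): JSX.Element => (
  <Suspense fallback={<div>Loading…</div>}>
    <Routes>
      <Route path="/announcements" element={<AnnouncementsIndex />} />
      <Route path="/assessments" element={<AssessmentsIndex />} />
    </Routes>
  </Suspense>
);

export default AppRoutes;
```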
Locale splitting
The current `yarn build:translations`, which uses `script/aggregate-translations.js`, pools the translations for all locales into one huge `locales.json` (364.34 kB) that is then bundled with our app. This PR ensures that only the locale for the set language is fetched at any time.
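A sketch of what per-locale loading can look like, assuming one JSON file per locale (the file layout and chunk name are illustrative):

```ts
// webpack turns the templated import into one chunk per matching locale file,
// so only the active locale's chunk is ever fetched.
const loadTranslations = async (
  locale: string,
): Promise<Record<string, string>> => {
  const messages = await import(
    /* webpackChunkName: "locale-[request]" */ `./locales/${locale}.json`
  );
  return messages.default;
};

// e.g., pass the result to react-intl's IntlProvider:
loadTranslations('zh-SG').then((messages) => {
  // <IntlProvider locale="zh-SG" messages={messages} />
});
```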
Important
Another huge contributor is how we use fixed `id`s for our `Descriptor` translation messages. This is unnecessary, not recommended, error-prone, hard to maintain, and contributes to huge chunk sizes because the whole `id` string is parsed. Ideally, we should never set `id`s and let `babel-plugin-formatjs` generate them by hashing the `Descriptor`. I don't think this is the time to undo existing code, but from now on, please stop setting `id`s.
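A sketch of the recommended pattern with `react-intl`, assuming `babel-plugin-formatjs` is configured to inject hashed ids at build time:

```ts
import { defineMessages } from 'react-intl';

// Preferred: no explicit `id`s. babel-plugin-formatjs derives a stable id by
// hashing `defaultMessage` and `description`, so descriptors stay small.
const translations = defineMessages({
  submit: {
    defaultMessage: 'Submit',
    description: 'Label for the submit button',
  },
  // Avoid: a fixed id like 'course.assessment.submission.submit' is
  // error-prone to maintain, and the whole string ends up in the bundle.
});

export default translations;
```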
Lazy-loading assessment questions' answer components
`SubmissionEditIndex` at `courses/:courseId/assessments/:assessmentId/submissions/:submissionId/edit` now lazy-loads the answer components for the different question types. This way, students who answer only programming questions need not load the chunks for other questions' answers. This reduced the `SubmissionEditIndex` entry-point chunk from 261.54 kB to 141.24 kB.
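The pattern is the same `React.lazy` technique, keyed by question type; a sketch with illustrative component names:

```tsx
import { lazy, Suspense } from 'react';

// One lazily-loaded chunk per question type (component paths are illustrative).
const answerComponents = {
  Programming: lazy(() => import('./answers/ProgrammingAnswer')),
  MultipleResponse: lazy(() => import('./answers/MultipleResponseAnswer')),
  TextResponse: lazy(() => import('./answers/TextResponseAnswer')),
};

type QuestionType = keyof typeof answerComponents;

const Answer = ({ type }: { type: QuestionType }): JSX.Element => {
  // Only the chunk for this question's type is fetched.
  const AnswerComponent = answerComponents[type];
  return (
    <Suspense fallback={<div>Loading answer…</div>}>
      <AnswerComponent />
    </Suspense>
  );
};

export default Answer;
```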
jQuery and jQuery UI
We're only using jQuery and jQuery UI (122.29 kB combined) for reordering achievements in the table in `AchievementReordering`. This PR removes the implicit global availability of jQuery and ensures both libraries are loaded on demand, only when a user decides to reorder the achievements.
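A sketch of deferring both libraries until the user actually reorders (the jQuery UI module path and the selector are assumptions, not this PR's actual code):

```ts
// Fetch jQuery and the jQuery UI sortable widget only when reordering starts,
// instead of shipping them globally on every page.
const startReordering = async (): Promise<void> => {
  const { default: $ } = await import(
    /* webpackChunkName: "jquery" */ 'jquery'
  );
  await import(/* webpackChunkName: "jquery" */ 'jquery-ui/ui/widgets/sortable');

  // jQuery UI registers `sortable` on the jQuery instance it was loaded with.
  $('.achievements-table tbody').sortable();
};
```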
Ace
While Ace's mode scripts aren't huge, we are loading them all over the place. This PR ensures that modes are loaded on demand when we need them, e.g., `ace-builds/src-noconflict/mode-r` is only loaded when we are loading R programming answers.
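For illustration, on-demand mode loading can be as simple as the sketch below (the mode list here is illustrative):

```ts
// Load an Ace editor mode on demand; fully static import paths let webpack
// emit one small chunk per mode.
const loadAceMode = async (language: string): Promise<void> => {
  switch (language) {
    case 'r':
      await import('ace-builds/src-noconflict/mode-r');
      break;
    case 'python':
      await import('ace-builds/src-noconflict/mode-python');
      break;
    default:
      await import('ace-builds/src-noconflict/mode-plain_text');
  }
};

// e.g., before rendering an R programming answer:
void loadAceMode('r');
```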
Stylesheets minification
This PR minifies our restricted CSS and SCSS imports with cssnano via PostCSS before they are imported and bundled into their respective JavaScript chunks. I specifically disabled CSS source maps since they aren't necessary, which reduced bundle sizes by about 300 kB.
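A minimal sketch of such a loader chain, assuming `postcss-loader` with `cssnano` (the PR's actual rule and options may differ):

```js
// webpack.config.js (excerpt): minify (S)CSS with cssnano via PostCSS before
// the styles are bundled into their JavaScript chunks.
module.exports = {
  module: {
    rules: [
      {
        test: /\.s?css$/,
        use: [
          'style-loader',
          { loader: 'css-loader', options: { sourceMap: false } },
          {
            loader: 'postcss-loader',
            options: {
              sourceMap: false, // CSS source maps disabled (~300 kB saved)
              postcssOptions: { plugins: ['cssnano'] },
            },
          },
          'sass-loader',
        ],
      },
    ],
  },
};
```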
Other optimisations
- Replaced `lodash` relative imports with `lodash-es` absolute imports (reduced ~68 kB)
- `@import` (reduced ~50 kB)
- `intl` library (reduced ~92 kB)
Important
I also removed PropTypes from production builds, but this isn't perfect since some still leak through. We should eventually remove PropTypes entirely, since they are removed and ignored in React 19 and have been deprecated since April 2017.
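One common way to strip PropTypes from production builds is `babel-plugin-transform-react-remove-prop-types`; whether this PR uses that exact plugin is an assumption, but the shape is roughly:

```js
// babel.config.js (sketch): strip PropTypes declarations in production only.
module.exports = {
  env: {
    production: {
      plugins: [
        // `removeImport` also drops the `prop-types` import itself.
        ['transform-react-remove-prop-types', { removeImport: true }],
      ],
    },
  },
};
```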
Compression
The final touch is adding gzip and Brotli compression, which brings the total bundle size down to 3.55 MB. The NGINX server needs to be updated to serve the gzip and Brotli bundles with `gzip_static on;` and `brotli_static on;`.
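For reference, a fuller sketch of the build-time compression setup with `compression-webpack-plugin` (the threshold and exact option values here are illustrative):

```js
const CompressionPlugin = require('compression-webpack-plugin');

module.exports = {
  plugins: [
    // Emit .gz files next to the originals, for NGINX's gzip_static.
    new CompressionPlugin({
      algorithm: 'gzip',
      test: /\.(js|css|html|svg|png)$/,
      threshold: 1024, // only compress assets larger than 1 KiB
    }),
    // Emit .br files, for brotli_static.
    new CompressionPlugin({
      algorithm: 'brotliCompress',
      test: /\.(js|css|html|svg|png)$/,
      filename: '[path][base].br',
      threshold: 1024,
    }),
  ],
};
```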
Caution
Generally, never treat compression as an acceptable answer to large bundle sizes, especially for bundles that we write and can control. For chunks that are just inevitably large, e.g., CKEditor, Ace, react-dom, it's okay to rely on compression. But always think about how your code will be bundled as you write it, and how it and the existing code can be optimised.
We should be solving the root causes of large bundle sizes before relying on compression. Compression is just icing on the cake.
Final bundle analysis
There are way more chunks generated now; the count increased from 54 to 391 files in this PR (1,066 files including the gzip and Brotli variants). The total static bundle size is 23.2 MB (up from 19.6 MB) with this PR.
Benchmarks
Benchmarks are done under these conditions.
No throttling
JavaScript transferred (MB)
These routes were chosen because they represent the simplest, average, and most resource-heavy pages, respectively.
Time to finish (seconds)
Fast 4G network throttling
Time to finish (seconds)
Miscellaneous page loads, no throttling
These are additional data I gathered on some other pages, just to get a sense of how much the app has improved with this PR.
JavaScript transferred (MB)
Time to finish (seconds)
Bundle analysis, side-by-side